Patient-Reported Outcome Measures (PROMs) are questionnaires completed by patients about aspects of their health status. They are a vital part of learning health systems, as they are the primary source of information about outcomes best assessed by patients themselves, such as pain, disability, anxiety and depression. The volume of questions, however, can easily become burdensome. Previous techniques reduced this burden through Computerized Adaptive Testing (CAT): questions are dynamically selected from item banks built specifically for each latent construct being measured, using the item response theory information function between each question and the construct. Here we extend those ideas by using Bayesian Networks (BNs) to enable CAT for efficient and accurate question selection on widely used existing PROMs. BNs offer more comprehensive probabilistic models of the relationships between PROM questions, allowing information-theoretic techniques to select the most informative question at each step. We tested our methods on five clinical PROM datasets, demonstrating that answering a small, CAT-selected subset of questions yields predictions and errors similar to answering all questions in the PROM BN. Our results show that answering 30%-75% of the questions selected with CAT achieved an average area under the receiver operating characteristic curve (AUC) of 0.92 (min: 0.80, max: 0.98) for predicting the measured constructs. BNs outperformed alternative CAT approaches with a 5% (min: 0.01%, max: 9%) average increase in the accuracy of predicting responses to unanswered question items.
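To make the information-theoretic selection idea concrete, the following is a minimal sketch, not the paper's actual model: a toy network with one latent binary construct C and three binary question items that are conditionally independent given C (a naive-Bayes structure, which is a simplification of a general BN). All parameters are invented for illustration. At each step, the unanswered item with the greatest expected reduction in the entropy of C is chosen, which is the greedy information-gain criterion described above.

```python
import itertools
import math

# Hypothetical parameters: p_c = P(C=1); p_q[i] = (P(Qi=1|C=0), P(Qi=1|C=1)).
p_c = 0.4
p_q = [(0.2, 0.9), (0.3, 0.7), (0.1, 0.8)]

def joint(c, qs):
    """P(C=c, Q1..Q3=qs) under the naive-Bayes factorisation."""
    p = p_c if c == 1 else 1 - p_c
    for (q0, q1), q in zip(p_q, qs):
        pq = q1 if c == 1 else q0
        p *= pq if q == 1 else 1 - pq
    return p

def posterior_c(evidence):
    """P(C=1 | observed answers); evidence maps item index -> 0/1."""
    num = den = 0.0
    for c in (0, 1):
        for qs in itertools.product((0, 1), repeat=3):
            if all(qs[i] == v for i, v in evidence.items()):
                p = joint(c, qs)
                den += p
                if c == 1:
                    num += p
    return num / den

def entropy(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def prob_answer(i, ans, evidence):
    """P(Qi=ans | evidence), by summing the joint over consistent states."""
    num = den = 0.0
    for c in (0, 1):
        for qs in itertools.product((0, 1), repeat=3):
            if all(qs[j] == v for j, v in evidence.items()):
                p = joint(c, qs)
                den += p
                if qs[i] == ans:
                    num += p
    return num / den

def best_question(evidence):
    """Greedy CAT step: pick the unanswered item maximising expected
    reduction in the entropy of the latent construct C."""
    h_now = entropy(posterior_c(evidence))
    best, best_gain = None, -1.0
    for i in range(3):
        if i in evidence:
            continue
        expected_h = sum(
            prob_answer(i, a, evidence) * entropy(posterior_c({**evidence, i: a}))
            for a in (0, 1))
        if h_now - expected_h > best_gain:
            best, best_gain = i, h_now - expected_h
        # Gain = H(C|evidence) - E[H(C|evidence, Qi)].
    return best, best_gain

q, gain = best_question({})  # with these toy numbers, item index 2 is asked first
```

In a full PROM BN, the same criterion would be applied to the posterior over the construct node after each answer, re-scoring the remaining items until a stopping rule (e.g. a target entropy) is met.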