Reducing the question burden of patient reported outcome measures using Bayesian networks


Yuceturk H., Gulle H., TUNCER ŞAKAR C., Joyner C., Marsh W., ÜNAL E., et al.

JOURNAL OF BIOMEDICAL INFORMATICS, vol. 135, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 135
  • Publication Date: 2022
  • DOI: 10.1016/j.jbi.2022.104230
  • Journal Name: JOURNAL OF BIOMEDICAL INFORMATICS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Aerospace Database, Applied Science & Technology Source, BIOSIS, Biotechnology Research Abstracts, CAB Abstracts, CINAHL, Communication Abstracts, Compendex, Computer & Applied Sciences, EMBASE, INSPEC, MEDLINE, Metadex, Veterinary Science Database, Civil Engineering Abstracts
  • Keywords: Patient reported outcome measures, Bayesian networks, Computerized adaptive testing, Questionnaire burden, LOW-BACK-PAIN, FEAR-AVOIDANCE BELIEFS, ITEM RESPONSE THEORY, QUALITY-OF-LIFE, HEALTH-STATUS, CENTRAL SENSITIZATION, FOOT-HEALTH, VALIDATION, DISABILITY
  • Affiliated with Middle East Technical University: Yes

Abstract

Patient Reported Outcome Measures (PROMs) are questionnaires completed by patients about aspects of their health status. They are a vital part of learning health systems, as they are the primary source of information about important outcomes that are best assessed by patients, such as pain, disability, anxiety and depression. The volume of questions can easily become burdensome. Previous techniques reduced this burden by dynamically selecting questions from question item banks built specifically for the different latent constructs being measured. These techniques analyzed the information function between each question in the item bank and the measured construct using item response theory, and then used this information function to dynamically select questions through computerized adaptive testing. Here we extend those ideas by using Bayesian Networks (BNs) to enable Computerized Adaptive Testing (CAT) for efficient and accurate question selection on widely used existing PROMs. BNs offer more comprehensive probabilistic models of the connections between different PROM questions, allowing information-theoretic techniques to be used to select the most informative questions. We tested our methods on five clinical PROM datasets, demonstrating that answering a small subset of questions selected with CAT yields predictions and errors similar to answering all questions in the PROM BN. Our results show that answering 30%-75% of the questions selected with CAT had an average area under the receiver operating characteristic curve (AUC) of 0.92 (min: 0.80, max: 0.98) for predicting the measured constructs. BNs outperformed alternative CAT approaches with a 5% (min: 0.01%, max: 9%) average increase in the accuracy of predicting the responses to unanswered question items.
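To make the selection idea concrete, the sketch below shows one way such BN-based adaptive questioning could work: a greedy loop that, given the answers collected so far, asks next the question with the largest expected reduction in the entropy of the latent construct (its conditional mutual information with the construct). This is a minimal illustration written with the pgmpy library, not the authors' code; the toy network structure, probability tables, node names ("Construct", "Q1".."Q3"), simulated answers, and the 0.3-bit stopping threshold are all illustrative assumptions.

```python
# Minimal sketch of information-theoretic question selection (CAT) on a
# Bayesian network, assuming pgmpy. All numbers and names are illustrative.
import numpy as np
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy PROM model: one binary latent construct with three binary question items.
model = BayesianNetwork([("Construct", "Q1"), ("Construct", "Q2"), ("Construct", "Q3")])
model.add_cpds(
    TabularCPD("Construct", 2, [[0.6], [0.4]]),
    TabularCPD("Q1", 2, [[0.9, 0.3], [0.1, 0.7]], evidence=["Construct"], evidence_card=[2]),
    TabularCPD("Q2", 2, [[0.8, 0.2], [0.2, 0.8]], evidence=["Construct"], evidence_card=[2]),
    TabularCPD("Q3", 2, [[0.7, 0.4], [0.3, 0.6]], evidence=["Construct"], evidence_card=[2]),
)
infer = VariableElimination(model)

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def marginal(variables, evidence):
    """Posterior marginal over `variables` given the answers collected so far."""
    return infer.query(variables, evidence=evidence or None, show_progress=False).values

def expected_info_gain(question, evidence):
    """I(Construct; question | evidence): expected drop in construct entropy."""
    h_now = entropy(marginal(["Construct"], evidence))
    p_q = marginal([question], evidence)
    h_after = sum(
        p_q[a] * entropy(marginal(["Construct"], {**evidence, question: a}))
        for a in range(len(p_q))
        if p_q[a] > 0
    )
    return h_now - h_after

# Greedy CAT loop: ask the most informative remaining question each round.
simulated_answers = {"Q1": 1, "Q2": 0, "Q3": 1}   # stand-in for one patient's responses
evidence, remaining = {}, ["Q1", "Q2", "Q3"]
while remaining:
    best = max(remaining, key=lambda q: expected_info_gain(q, evidence))
    evidence[best] = simulated_answers[best]       # "ask" the selected question
    remaining.remove(best)
    if entropy(marginal(["Construct"], evidence)) < 0.3:   # construct well determined -> stop
        break

print("Questions asked:", list(evidence))
print("P(Construct | answers):", marginal(["Construct"], evidence))
```

In the paper's setting the network would be learned from PROM response data and cover many question items, but the sketch mirrors, at toy scale, the kind of information-theoretic, entropy-driven selection the abstract describes: questioning stops early once the construct's posterior is sufficiently certain, which is how a subset of items can stand in for the full questionnaire.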