Analysis of predictive performance and reliability of classifiers for quality assessment of medical evidence revealed important variation by medical area

Simon Šuster, School of Computing and Information Systems
Timothy Baldwin, School of Computing and Information Systems & Mohamed bin Zayed University of Artificial Intelligence
Karin Verspoor, School of Computing and Information Systems

Abstract

Objectives: A major obstacle to the deployment of models for automated quality assessment is their reliability. Our objective is to analyze their calibration and selective classification performance.

Study Design and Setting: We examine two systems for assessing the quality of medical evidence, EvidenceGRADEr and RobotReviewer, both developed from the Cochrane Database of Systematic Reviews (CDSR) to measure the strength of bodies of evidence and the risk of bias (RoB) of individual studies, respectively. We report their calibration error and Brier scores, present their reliability diagrams, and analyze the risk–coverage trade-off in selective classification.

Results: The models are reasonably well calibrated on most quality criteria (expected calibration error [ECE] 0.04–0.09 for EvidenceGRADEr, 0.03–0.10 for RobotReviewer). However, we find that both calibration and predictive performance vary significantly by medical area. This has ramifications for the application of such models in practice, as average performance is a poor indicator of group-level performance (e.g., health and safety at work, allergy and intolerance, and public health fare much worse than cancer, pain and anesthesia, and neurology). We explore the reasons behind this disparity.

Conclusion: Practitioners adopting automated quality assessment should expect large fluctuations in system reliability and predictive performance depending on the medical area. Prospective indicators of such behavior warrant further research.
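As an illustrative aside (not taken from the paper's evaluation code), the sketch below shows one common way the two calibration metrics named above might be computed for a binary classifier: ECE with equal-width confidence bins and the Brier score as the mean squared error between predicted probabilities and outcomes. The function names, the choice of 10 bins, and the toy data are assumptions made for this example.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE with equal-width confidence bins (binary setting, illustrative only)."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    # Confidence is the probability assigned to the predicted class.
    preds = (probs >= 0.5).astype(float)
    conf = np.where(preds == 1.0, probs, 1.0 - probs)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if not mask.any():
            continue
        acc = (preds[mask] == labels[mask]).mean()    # per-bin accuracy
        avg_conf = conf[mask].mean()                  # per-bin mean confidence
        ece += mask.mean() * abs(acc - avg_conf)      # gap weighted by bin size
    return ece

def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and binary outcomes."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return np.mean((probs - labels) ** 2)

# Toy usage with synthetic predictions; labels are drawn from the predicted
# probabilities, so the model is well calibrated by construction.
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
y = rng.binomial(1, p)
print(f"ECE:   {expected_calibration_error(p, y):.3f}")
print(f"Brier: {brier_score(p, y):.3f}")
```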