Local Interpretability of Calibrated Prediction Models: A Case of Type 2 Diabetes Mellitus Screening Test


Abstract

Machine Learning (ML) models are often complex and difficult to interpret due to their “black-box” characteristics. Interpretability of an ML model is usually defined as the degree to which a human can understand the cause of a decision reached by the model. Interpretability is of particularly high importance in many areas of healthcare because of the high levels of risk associated with decisions based on ML models. Calibration of ML model outputs is another issue that is often overlooked when ML models are applied in practice. This paper presents early work examining the impact of prediction model calibration on the interpretability of the results. We present a use case of a patient in a diabetes screening prediction scenario and visualize the results using three different techniques to demonstrate the differences between a calibrated and an uncalibrated regularized regression model.
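The sketch below illustrates the general idea of contrasting a calibrated with an uncalibrated regularized regression model for a single instance; it is not the authors' pipeline. It uses scikit-learn with synthetic data, an L1-penalized logistic regression, and sigmoid (Platt) calibration, all of which are illustrative assumptions rather than the settings used in the paper.

```python
# Minimal sketch (assumed setup, not the study's code or data): compare the
# predicted risk of an uncalibrated L1-regularized logistic regression with a
# sigmoid-calibrated version for a single test instance ("patient").
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV

# Synthetic, imbalanced stand-in for screening data.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Uncalibrated regularized (L1) logistic regression.
base = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
base.fit(X_train, y_train)

# The same model wrapped in sigmoid (Platt) calibration with internal CV.
calibrated = CalibratedClassifierCV(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    method="sigmoid", cv=5)
calibrated.fit(X_train, y_train)

# Risk estimates for one patient can differ noticeably between the two models,
# which in turn changes any local explanation built on the predicted probability.
patient = X_test[:1]
print("uncalibrated risk:", base.predict_proba(patient)[0, 1])
print("calibrated risk:  ", calibrated.predict_proba(patient)[0, 1])
```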

Publication
In DSHealth 2020: 2020 KDD Workshop on Applied Data Science for Healthcare: Trustable and Actionable AI for Healthcare, August 24, San Diego, CA, United States
Primož Kocbek
PhD Student

My research interests include statistical models and machine learning techniques with applications in healthcare. My specific areas of interest include temporal data analysis, interpretability of prediction models, stability of algorithms, and advanced machine learning methods for massive datasets, e.g., deep neural networks.

Leona Cilar Budler
PhD

My research interests include mental health, nursing research, and health informatics. Specific areas of interest include adolescent mental health, psychometric testing of questionnaires, questionnaire localization, and quantitative data analysis.

Gregor Štiglic
Associate Professor and Head of the Research Institute

My research interests include predictive models in healthcare and the interpretability of complex models.
