Machine Learning Algorithms in Suicide Prevention: Clinician Interpretations as Barriers to Implementation

Abstract

Objective: Machine learning algorithms embedded in electronic medical records can classify patients by suicide risk, but no research has explored clinicians' perceptions of the suicide risk flags these algorithms generate, which may affect algorithm implementation. This study evaluated clinicians' perceptions of suicide risk flags.

Methods: Participants (n = 139; 68 with complete data) were mental health clinicians recruited to complete online surveys from October 2018 to April 2019.

Results: Most participants preferred to know which features resulted in a patient receiving a suicide risk flag (94.12%) and reported that knowing those features would influence their treatment (88.24%). Clinicians reported that some algorithm features (e.g., increased thoughts of suicide) would be more likely to alter their clinical decisions than others (e.g., age, physical health conditions; χ² = 270.84, P < .001). In response to a suicide risk flag, clinicians were more likely to report that they would create a safety/crisis response plan than any other intervention (χ² = 227.02, P < .001), and 21% reported that they would complete a no-suicide contract.
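For readers unfamiliar with the statistic reported above, the following is a minimal sketch of a chi-square goodness-of-fit computation against a uniform expected distribution. The counts are invented for illustration only; they are not the study's data, and the four intervention labels are hypothetical examples.

```python
# Sketch of a chi-square goodness-of-fit statistic (uniform null),
# i.e., chi2 = sum over cells of (observed - expected)^2 / expected.
def chi_square_gof(observed):
    """Chi-square statistic comparing observed counts to a uniform
    expected distribution (same total, equal counts per cell)."""
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

# Invented endorsement counts for four candidate responses to a risk
# flag (e.g., safety plan, means restriction, hospitalization,
# no-suicide contract) -- hypothetical numbers, not from the study.
counts = [60, 25, 20, 15]
print(round(chi_square_gof(counts), 2))  # prints 41.67
```

A large statistic relative to the chi-square distribution with (cells − 1) degrees of freedom indicates that endorsement was not evenly spread across interventions, which is the pattern the study reports.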

Conclusions: Clinicians overwhelmingly reported that suicide risk flags in electronic medical records would alter their clinical decision-making. However, clinicians' likelihood of acting in response to a suicide risk flag was tied to which features were highlighted rather than to the presence of the risk flag alone. Thus, the utility of a suicide risk algorithm will be reduced if the clinical features underlying the algorithm are hidden from clinicians or if clinicians do not view those features as intuitively meaningful predictors of suicide risk.