
Healthcare applications of predictive algorithms: A possible solution to physicians’ biases?  

Artificial Intelligence has long held the promise of improving predictions in healthcare, a promise which is now becoming a reality in many settings, first and foremost the US, where in recent years many commercial algorithms have received FDA regulatory approval for broad clinical use. Indeed, the application of Machine Learning, a type of AI which allows software to become increasingly accurate at predicting outcomes by learning from historical data, is particularly relevant for improving the prediction of healthcare outcomes. Whereas the traditional models employed in the past mainly focused on unbiased estimation and aimed at accurately inferring causality, hence identifying the causes of certain adverse health effects, machine learning instead offers highly accurate methods for predicting outcomes, with no underlying claim of causality, only of correlation. ML-based predictive models are constructed by feeding the software numerous relevant variables in the form of a training sample, such as patients’ age, gender, weight, temperature, pre-existing conditions, and current conditions. The software then builds the model that best fits the data it has received, correlating very large numbers of variables in a highly non-linear fashion and learning which elements correlate with outcomes such as, for instance, the onset of Alzheimer’s disease or of a stroke, thereby providing healthcare professionals with an increasingly accurate predictive tool.
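To make this concrete, below is a minimal sketch of how such a predictive model might be built. Everything in it is an illustrative assumption rather than a real clinical model: the data are synthetic, the feature set is a toy version of the variables mentioned above, and the choice of scikit-learn’s gradient-boosted classifier simply stands in for whatever non-linear learner a vendor might use.

```python
# A minimal sketch of an ML-based predictive model, on synthetic data.
# The features, outcome label, and model choice are all illustrative
# assumptions, not a validated clinical tool.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical patient variables: age, weight, temperature,
# a pre-existing-condition flag, and sex.
age = rng.integers(18, 95, n)
weight = rng.normal(75, 15, n)
temperature = rng.normal(36.8, 0.6, n)
preexisting = rng.integers(0, 2, n)
sex = rng.integers(0, 2, n)
X = np.column_stack([age, weight, temperature, preexisting, sex])

# Synthetic adverse outcome (say, a stroke within 12 months), generated
# with a deliberately non-linear dependence on the features.
logit = 0.04 * (age - 60) + 0.8 * preexisting + 0.5 * (temperature > 38) - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient boosting captures non-linear feature interactions; the model is
# fit purely to predict the outcome, with no claim about its causes.
model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # predicted probability of the outcome
print(f"held-out AUC: {roc_auc_score(y_test, risk):.2f}")
```

Note that the model only learns correlations in the sample it is given, which is precisely why, as discussed further below, the composition of that sample matters so much.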

As such, ML could potentially aid clinicians and healthcare staff in improving health outcomes and in making the most of often constrained resources, by warning them beforehand about potential events and thus allowing them to make more informed choices on how to proceed. Being one step ahead in predicting adverse events is particularly crucial for professionals dealing with patients in intensive care, emergency care, or surgery, since these patients’ lives depend on a timely and accurate reaction. The range of healthcare applications of predictive algorithms is, however, very broad, and real-life uses of predictive AI include (but are not limited to) identifying the patients who would most benefit from testing for a certain condition, hence reducing both overtesting and undertesting; detecting early signs of patient deterioration; delivering preliminary care to at-risk patients; and predicting patients’ behaviours, such as failing to show up to appointments. Overall, developing patient-centric models which rely on individual data for accurate predictions bears the promise of improving patients’ outcomes through a holistic healthcare approach and a personalized service provision.
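As a rough illustration of how such predictions feed into resource allocation, the sketch below (which reuses the hypothetical model and held-out data from the previous example) ranks patients by predicted risk so that a fixed number of follow-up slots goes to those most likely to benefit. The patient IDs and the slot budget are assumptions for the sake of the example.

```python
# One deployment pattern from the list above: rank patients by predicted
# risk so a constrained resource (here, a limited number of follow-up
# slots) is offered to the highest-risk patients first.
import numpy as np

def flag_for_followup(model, X_patients, patient_ids, slots):
    """Return (id, risk) for the `slots` patients with the highest predicted risk."""
    risk = model.predict_proba(X_patients)[:, 1]
    ranked = np.argsort(risk)[::-1]  # indices sorted from highest to lowest risk
    return [(patient_ids[i], float(risk[i])) for i in ranked[:slots]]

# Example: 20 follow-up slots for the held-out patients of the earlier sketch.
ids = [f"patient-{i}" for i in range(len(X_test))]
for pid, r in flag_for_followup(model, X_test, ids, slots=20):
    print(f"{pid}: predicted risk {r:.1%}")
```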

An element of interest is that the power of algorithms to predict which patients are at risk of developing a certain condition can be a powerful tool for countering doctors’ conscious and unconscious biases in their testing and treatment decisions. For instance, consider heart attacks, a major source of morbidity and mortality. Many patients who experience an attack were in a hospital emergency department (ED) just prior to the attack but did not receive testing or treatment, and there are several reasons why this might be the case. First of all, the symptoms that indicate the onset of a heart attack are not exclusive to heart attacks: chest pain is also present in patients experiencing acute anxiety or panic attacks, and medical staff are not always able to distinguish which patient presenting with chest pain is actually at risk. Because of bounded rationality, time pressure, and the high cost of testing for the adverse conditions leading to an attack (such as a coronary blockage), a patient at risk might not be tested, only to be admitted later as an emergency. Additionally, doctors might generally be more or less likely to prescribe a precautionary test based on the age, gender, and race of their patients. This could be due to outright discrimination, to the better ability of some patients to verbally communicate their symptoms, or to unconscious biases of the medical staff, who may attribute more weight to the symptoms reported by patients with whom they empathize more easily, or whom they perceive as more reliable. Moreover, studies show that in the US there are significant racial and socio-economic disparities in access to testing: low rates of testing in the past lead to a low probability of testing in the present, because the patient’s medical record contains no documentation of past adverse test results or treatments.

It becomes clear that the impartiality and the very high predictive ability of an ML algorithm hold the potential to overcome the biases arising from the lack of objectivity in human judgement and to solve the problem of undertesting at-risk patients, among many other promising possibilities. However, algorithms are only as impartial as the datasets they are trained on. Regulators and professional bodies should therefore always consider the possibility that predictive algorithms systematically discriminate against certain groups of patients. This issue arises from the incompleteness of ML training datasets: some types of patients are systematically under-represented in the clinical datasets used, whether because of limited access to care or even by design, since hospitals’ electronic medical records include only those who were actually treated following admission. Just as pharmaceutical drugs approved after clinical trials are subject to post-market controls, predictive algorithms should also be subject to periodic audits after regulatory approval, especially since an algorithm’s systematic bias against certain groups may only emerge once it is deployed across large populations. For instance, if patient severity is assessed on the basis of past healthcare utilization, with testing costs used as a proxy for the number and depth of tests received, the resulting proxy will be biased, because richer people have higher healthcare spending for the same level of need. At the same time, the advantage over human bias is that all errors in an ML evaluation are explicit and can be singled out through an accurate audit.
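A toy simulation can make this biased-proxy mechanism explicit. In the sketch below, all numbers are assumptions, and group membership is used directly as a feature only as a stand-in for the real-world proxies (such as ZIP code) a model might pick up: two groups have the same distribution of true need, but one spends less on care, so a model trained on cost as its label enrolls that group at a lower rate.

```python
# Toy simulation of the biased-proxy problem: equal underlying need,
# unequal spending, and a model trained on cost as its label.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)       # 0 = historically lower access to care
severity = rng.gamma(2.0, 1.0, n)   # true health need, identical across groups

# Observed cost: proportional to need, but scaled down for group 0,
# which spends less on care for the same level of need.
cost = severity * np.where(group == 1, 1.0, 0.6) + rng.normal(0, 0.05, n)

# The model sees a clinical variable plus a feature that proxies for
# group membership (here, the group itself).
X = np.column_stack([severity, group])
score = LinearRegression().fit(X, cost).predict(X)

cutoff = np.quantile(score, 0.8)    # top 20% of scores get extra care
for g in (0, 1):
    mask = group == g
    sel = mask & (score >= cutoff)
    print(f"group {g}: {sel.sum() / mask.sum():.1%} enrolled; "
          f"mean need of enrollees {severity[sel].mean():.2f}")
# Equal need, unequal enrollment: group 0 members must be considerably
# sicker than group 1 members before the cost-trained model flags them.
```

This is exactly the kind of disparity a post-deployment audit can surface: because the model’s errors are explicit, comparing enrollment rates against true need across groups reveals the bias directly.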

In conclusion, ML predictive algorithms can be a very useful benchmark against which to evaluate the biases of medical staff, but they should also be carefully regulated, closely monitored, and audited. Ultimately, however, liability for medical decisions remains with doctors: while they cannot ignore an algorithm’s predictions, neither should they accept them passively.

Sources

Bonderud, D. “How Predictive Modeling in Healthcare Boosts Patient Care.” HealthTech Magazine, 16 Apr. 2021.

Deloitte. “Predictive Analytics in Health Care.” 19 July 2019.

Parikh et al. “Regulation of Predictive Analytics in Medicine.” Science, vol. 363, no. 6429, 22 Feb. 2019.

Robinson, J. “Diagnostic in the Era of Machine Learning.” University of California, Berkeley.
