The Case for Algorithmic Stewardship for Artificial Intelligence and Machine Learning Technologies

Stephanie Eaneff, Ziad Obermeyer, Atul J. Butte

Journal of the American Medical Association
September 14, 2020

Introduction:

The first manual on hospital administration, published in 1808, described a hospital steward as “an individual who [is] honest and above reproach,” with duties including the purchasing and management of hospital materials. Today, a steward’s job can be seen as ensuring the safe and effective use of clinical resources. The Joint Commission, for instance, requires antimicrobial stewardship programs to support appropriate antimicrobial use, including by monitoring antibiotic prescribing and resistance patterns.

A similar approach to “algorithmic stewardship” is now warranted. Algorithms, or computer-implementable instructions to perform specific tasks, are available for clinical use, including complex artificial intelligence (AI) and machine learning (ML) algorithms and simple rule-based algorithms. More than 50 AI/ML algorithms have been cleared by the US Food and Drug Administration for uses that include identifying intracranial hemorrhage from brain computed tomographic scans and detecting seizures in real time. Algorithms are also used to inform clinical operations, such as predicting which patients will “no show” for scheduled appointments. More recently, algorithms that predict in-hospital mortality have been proposed to inform ventilator allocation during the coronavirus disease 2019 pandemic.

Although the use of algorithms in health care is not new, emerging algorithms are increasingly complex. Historically, many simple rule-based algorithms and clinical calculators could be clearly communicated, calculated, and checked by a single person. However, many newer algorithms, including predictive and AI/ML algorithms, incorporate far more data and apply logic too complicated for any single person to calculate and verify. The complexity of these algorithms requires a new level of discipline in quality control.

When used appropriately, some algorithms can improve the diagnosis and management of disease. For example, algorithms that detect diabetic retinopathy from retinal images hold promise for improving the diagnosis of diabetic retinopathy, a leading cause of vision loss. However, algorithms also have the potential to exacerbate existing structural inequities, as highlighted by recent research that detected racial bias in an algorithm that could potentially affect millions of patients.

As the US Food and Drug Administration reassesses its regulatory framework for AI/ML algorithms, health systems must also develop oversight frameworks to ensure that algorithms are used safely, effectively, and fairly. Such efforts should focus particularly on complex and predictive algorithms that necessitate additional layers of quality control. Health systems that use predictive algorithms to provide clinical care or support operations should designate a person or group responsible for algorithmic stewardship. This group should be advised by clinicians who are familiar with the language of data, as well as by patients, bioethicists, scientists, and safety and regulatory organizations. In this Viewpoint, drawing on best practices from other areas of clinical care, several key considerations for emerging algorithmic stewardship programs are identified.


