Ensuring Fairness in Machine Learning to Advance Health Equity.

Ann Intern Med. 2018 Dec 18;169(12):866-872

Authors: Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH

Abstract
Machine learning is used increasingly in clinical care to improve diagnosis, treatment selection, and health system efficiency. Because machine-learning models learn from historically collected data, populations that have experienced human and structural biases in the past (called protected groups) are vulnerable to harm by incorrect predictions or withholding of resources. This article describes how model design, biases in data, and the interactions of model predictions with clinicians and patients may exacerbate health care disparities. Rather than simply guarding against these harms passively, machine-learning systems should be used proactively to advance health equity. For that goal to be achieved, principles of distributive justice must be incorporated into model design, deployment, and evaluation. The article describes several technical implementations of distributive justice, specifically those that ensure equality in patient outcomes, performance, and resource allocation, and guides clinicians as to when they should prioritize each principle. Machine learning is providing increasingly sophisticated decision support and population-level monitoring, and it should encode principles of justice to ensure that models benefit all patients.
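The abstract's notion of "equality in performance" across protected groups is often operationalized by disaggregating a model's error rates by group. As a minimal sketch (the specific metric and all names below are illustrative assumptions, not the article's own implementation), one can compare true-positive rates across groups and measure the largest gap:

```python
# Hedged sketch: one common way to check "equality in performance"
# across protected groups is comparing true-positive rates (an
# equal-opportunity-style check). All function names are illustrative.

from collections import defaultdict

def true_positive_rate_by_group(y_true, y_pred, groups):
    """Return {group: TPR}, where TPR = TP / (TP + FN) within each group."""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for truth, pred, g in zip(y_true, y_pred, groups):
        if truth == 1:
            pos[g] += 1
            if pred == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

def tpr_gap(y_true, y_pred, groups):
    """Largest difference in TPR across groups; 0 indicates parity."""
    rates = true_positive_rate_by_group(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values())
```

A large gap would flag that the model performs worse for one group, which is the kind of disparity the article argues should be audited before and after deployment.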

PMID: 30508424 [PubMed – in process]