Oops? Interdisciplinary Stories of Sociotechnical Error

When Faulty AI Falls Into the Wrong Hands: The Risks of Erroneous AI-Driven Healthcare Decisions

Eugene Jang

Abstract


This article investigates the consequences and implications of using artificial intelligence (AI) models in healthcare decision making. Specifically, it discusses a lawsuit in which a private healthcare company (UnitedHealthcare) allegedly used an erroneous AI model to deny coverage for its patients’ medical services. While previous studies have found that AI can unintentionally produce biased outputs due to inadequate training data or statistical logics, this example is notable because the insurer was accused of intentionally using a faulty AI model to its advantage. Furthermore, the victims in this case are elderly individuals, who are particularly vulnerable to the negative impacts of AI biases. Although health insurance companies have a long history of rejecting medical payouts, the implementation of AI in decision-making processes has accelerated this trend and created loopholes for fraudulent practices. This article illustrates the detrimental consequences of erroneous AI-based healthcare decisions through a specific example, discusses how AI complicates the issue of liability and responsibility, and calls attention to the need for improved transparency and accountability in AI regulation.


Keywords


artificial intelligence (AI), algorithmic bias, error, healthcare, ageism
