Questioning Artificial Intelligence: How Racial Identity Shapes the Perceptions of Algorithmic Bias
Growing concerns indicate that automated decision-making (ADM) may discriminate against certain social groups, but little is known about how people's social identities influence their perceptions of biased automated decisions. Focusing on the context of racial disparity, this study examined whether individuals' social identities (White vs. People of Color) and social contexts that entail discrimination (discrimination target: the self vs. the other) affect perceptions of ADM. A randomized controlled experiment (N = 604) demonstrated that a participant's social identity significantly moderated the effect of the discrimination target on perceptions of ADM. Among POC participants, ADM that discriminated against the self decreased perceived fairness of and trust in ADM, whereas the opposite pattern was observed among White participants. The findings imply that social disparity and inequality, along with different social groups' lived experiences of existing discrimination and injustice, should be at the center of understanding how people make sense of biased algorithms.