User Perceptions and Trust of Explainable Machine Learning Fake News Detectors

Jieun Shin, Sylvia Chan-Olmsted

Abstract

This study explored the factors that explain users' trust in, and intent to use, a leading explainable artificial intelligence (AI) fake news detection technology. Toward this end, we used a survey to examine the relationships between various human factors and software-related factors. Regression models showed that users' trust in the software was influenced both by individuals' inherent characteristics and by their perceptions of the AI application. Adoption intention was ultimately driven by trust in the detector, which explained a significant share of the variance. We also found that trust was higher when users perceived the application to be highly competent at detecting fake news, highly collaborative, and highly autonomous. Our findings indicate that trust is a focal element in determining users' behavioral intentions. We argue that identifying the positive heuristics users apply to fake news detection technology is critical for facilitating the diffusion of AI-based detection systems in fact-checking.

Keywords

AI, fake news, media literacy, trust, explainability
