Carnegie Mellon University

Human Trust in Artificial Intelligence: Review of Empirical Research

journal contribution
posted on 2022-04-06 by Ella Glikson, Anita Woolley

Artificial Intelligence (AI) characterizes a new generation of machines capable of interacting with the environment and aiming to simulate human intelligence. By challenging the previously established assumption that technology is fully deterministic, and by threatening to change the overall workforce structure, AI raises new theoretical and empirical questions that organizational researchers should address. How AI-associated changes are integrated within organizations depends on workers' trust in AI. This review presents the empirical research on human trust in AI conducted across multiple disciplines over the last twenty years. Based on the reviewed literature, we organize the existing research into a framework that addresses AI representation as a continuum of embodiment on the one hand and level of machine intelligence on the other. The continuum starts with AI that is fully embedded within an application, such as a search engine, and ends with a fully physically present robotic entity operated by AI. As the level of machine intelligence and control over the work increases, so does the level of required trust. The features of AI representation and machine intelligence have important implications for the antecedents of trust, which in turn play a critical role in enabling AI integration into organizational routines.

History

Publisher Statement

This is the author's version of: Glikson, E., & Woolley, A. W. (2020). Human Trust in Artificial Intelligence: Review of Empirical Research. Academy of Management Annals, 14(2), 627–660.

Date

2020-03-01
