<p>Artificial Intelligence (AI) characterizes a new
generation of machines capable of interacting with the environment and aiming to simulate human intelligence. By challenging the previously
established assumption that technology is fully deterministic, and by threatening
to change the overall workforce structure, AI raises new theoretical and
empirical questions that organizational researchers should address.
The way AI-associated changes are being integrated within organizations depends
on workers’ trust in AI. This review presents the empirical research on human <b><i>trust</i></b>
in AI conducted in multiple disciplines over the last twenty years. Based on
the reviewed literature, we organize the existing research into a framework that
addresses AI representation as a continuum of embodiment on the one hand and
level of machine intelligence on the other. The continuum starts with AI that
is fully embedded within an application, such as a search engine, and ends with
a fully physically present robotic entity operated by AI. As the level of
machine intelligence and control over the work increases, so does the level of required
trust. The features of AI representation and machine intelligence have
important implications for the antecedents of trust that in turn play a
critical role in enabling AI integration into organizational routines. </p>
<p>This is the author's version of: Glikson, E., & Woolley, A. W. (2020). Human Trust in Artificial Intelligence: Review of Empirical Research. Academy of Management Annals, 14(2), 627–660.</p>