Carnegie Mellon University

Measuring the Trustworthiness of AI Systems

Posted on 2023-10-18, authored by Katherine-Marie Robinson, Carol Smith, and Alexandrea Scott-Van Deusen

The ability of artificial intelligence (AI) to partner with a software engineer, doctor, or warfighter depends on whether these end users trust the AI system to work effectively with them and deliver the outcomes promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as questions that organizations should ask before deciding whether to employ a new AI technology.

Copyright Statement

Audiovisual published 2023 via Software Engineering Institute, Carnegie Mellon University
