Carnegie Mellon University

Undermining Trust in Learning and Inference Systems

thesis
posted on 2023-09-12, 19:43, authored by Dustin Updyke

We are inclined to trust computers as we would any other tool: trust built upon repeated reliability and the execution of well-defined tasks whose functions we understand. With a laptop in every hand and a cell phone in every pocket, it is hard to recall a time when we could not depend upon computers to help us live better lives. We have become dependent upon computer systems that help us navigate city traffic, interact with healthcare professionals, and stay connected with family members across the globe.

However, the rise of learning and inference systems changes the fundamental relationship between us and the machine: the computer is now less a tool and more a teammate. These systems, including artificial intelligence systems, make important decisions on our behalf or remove the need for our intervention entirely. Moreover, while the activities performed by learning and inference systems are expanding, those activities are not always well defined, and we often do not know how such a system makes decisions and executes tasks. Still, when computers operate as we expect, they can improve our lives in many ways, and we can hardly envision how we ever did without them.

But computers also operate in ways that we do not expect. They are subject to an array of security concerns. When a computer produces surprising results, those results can directly affect our trust in the system. If we come to distrust a system that we rely upon, what happens?

Since we depend upon computers for so much in the modern age, we should assume that trust itself will be a target for adversaries. What is particularly worrisome is that these attacks can alter our expected results without informing us of what exactly has changed or how it occurred. Let us consider several historical examples of how this scenario might evolve.


Date

2023-04-01

Degree Type

  • Master's Thesis

Department

  • Philosophy

Degree Name

  • Master of Science (MS)

Advisor(s)

Kevin Zollman, David Danks
