Imagine riding to work in your self-driving car. As you approach a stop sign, instead of stopping, the car speeds up and drives through it because it interprets the stop sign as a speed limit sign. How did this happen? Even though the car’s machine learning (ML) system was trained to recognize stop signs, someone added stickers to the sign, which fooled the car into thinking it was a 45-mph speed limit sign. This simple act of putting stickers on a stop sign is one example of an adversarial attack on ML systems. In this SEI Blog post, I examine how ML systems can be subverted and, in this context, explain the concept of adversarial machine learning. I also examine the motivations of adversaries and what researchers are doing to mitigate their attacks. Finally, I introduce a basic taxonomy delineating the ways in which an ML model can be influenced and show how this taxonomy can be used to inform models that are robust against adversarial actions.
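To make the idea concrete, the sketch below is a hypothetical illustration (not part of the original post) of one well-known adversarial technique, the fast gradient sign method (FGSM): a small, carefully chosen perturbation is added to an input image so that a trained classifier changes its prediction, much as the stickers change how the car reads the stop sign. The names `model`, `image`, and `true_label` are placeholders for an already-trained classifier and a correctly classified input.

```python
# Hypothetical FGSM sketch: craft a perturbed input that a trained image
# classifier is likely to misclassify. Assumes `model`, `image`, and
# `true_label` are supplied by the caller (placeholders, not from the post).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` perturbed to increase the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                        # forward pass
    loss = F.cross_entropy(logits, true_label)   # loss w.r.t. the true label
    loss.backward()                              # gradient w.r.t. the input pixels
    # Step each pixel in the direction that increases the loss, bounded by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Even a perturbation small enough to be imperceptible to a person can be enough to flip the model's predicted class, which is the core vulnerability the post explores.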
Publisher Statement
This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. The views, opinions, and/or findings contained in this material are those of the author(s) and should not be construed as an official Government position, policy, or decision, unless designated by other documentation. References herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, do not necessarily constitute or imply its endorsement, recommendation, or favoring by Carnegie Mellon University or its Software Engineering Institute.
This report was prepared for the SEI Administrative Agent, AFLCMC/AZS, 5 Eglin Street, Hanscom AFB, MA 01731-2100.
NO WARRANTY. THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN "AS-IS" BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT.
[DISTRIBUTION STATEMENT A] This material has been approved for public release and unlimited distribution. Please see Copyright notice for non-US Government use and distribution.
Copyright Statement
Copyright 2023 Carnegie Mellon University