In the development of artificial intelligence (AI) systems for mission applications, it is essential to recognize the kinds of weaknesses and vulnerabilities unique to modern AI models. This recognition is important for the design, implementation, and test and evaluation (T&E) of AI models and AI-based systems. The October 2023 Executive Order on AI highlights the importance of red teams, and we can expect that these weaknesses and vulnerabilities will be a focus of attention for any T&E activity. This blog post examines a number of specific weaknesses and vulnerabilities associated with modern AI models that are based on neural networks. These neural models include machine learning (ML) and generative AI, particularly large language models (LLMs). We focus on three aspects:
- Triggers, including both attack vectors for deliberate adversarial action (exploiting vulnerabilities) and intrinsic limitations due to the statistical nature of the models (manifestations of weaknesses)
- The nature of operational consequences, including the kinds of potential failures or harms in operations
- Methods to mitigate them, including both engineering and operational actions
This is the second installment in a four-part series of blog posts focused on AI for critical systems, where trustworthiness, based on checkable evidence, is essential for operational acceptance. The four parts are relatively independent of each other and address this challenge in stages.
Publisher Statement
This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. The views, opinions, and/or findings contained in this material are those of the author(s) and should not be construed as an official Government position, policy, or decision, unless designated by other documentation. References herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise do not necessarily constitute or imply its endorsement, recommendation, or favoring by Carnegie Mellon University or its Software Engineering Institute.
This report was prepared for the SEI Administrative Agent AFLCMC/AZS 5 Eglin Street Hanscom AFB, MA 01731-2100.
NO WARRANTY. THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN "AS-IS" BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT.
[DISTRIBUTION STATEMENT A] This material has been approved for public release and unlimited distribution. Please see Copyright notice for non-US Government use and distribution.
Copyright Statement
Copyright 2024 Carnegie Mellon University.