Carnegie Mellon University

Dangers of AI for Insider Risk Evaluation (D.A.R.E.)

Report posted on 2024-11-19, 14:25, authored by Austin Whisnant

The goal of artificial intelligence (AI) is to build machines capable of simulating human thoughts and actions. There are many implementations of AI, and some of the best known are computer vision, robotics, machine learning (ML), and natural language processing (NLP). Each of these implementations uses different methods, inputs, features, parameters, and outputs to solve a specific problem. Typically, the problem being solved has a very narrow application, and the implementation chosen is highly tailored to the task at hand, as in the following examples:

• computer vision for detecting obstacles in the road [Janai 2020]

• NLP for speech recognition [Kamath 2019]

• deep learning for detecting breast cancer [Chan 2020]

This type of AI is called “narrow AI,” and it is programmed to operate within a predefined set of parameters, rules, and context. The models developed for narrow AI applications cannot be used for other tasks, even if the tasks are very similar. For example, a model programmed to tell the difference between images of dogs and cats would not be able to detect different dog breeds, just as a model programmed to detect customer bank fraud would not be able to predict bank employee fraud.

The categories of AI most relevant to the insider risk domain are ML and NLP. ML uses algorithms trained to find patterns in large datasets by analyzing different features or attributes of the data. For example, the dog-versus-cat model might analyze ear shape and muzzle length. Once the algorithm has been adjusted to find the correct patterns, it can then be used to predict, cluster, or classify additional data.
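As a concrete sketch of how such a feature-based classifier works, the Python example below trains a toy dog-versus-cat model on the two features mentioned above, ear shape and muzzle length. It is illustrative only: the feature values are invented, and the choice of scikit-learn and a decision tree is an assumption for the sketch, not a method the report prescribes.

    from sklearn.tree import DecisionTreeClassifier

    # Toy training data: each sample is [ear_shape, muzzle_length_cm],
    # where ear_shape is 0 = pointed and 1 = floppy. All values are
    # hypothetical, chosen only to illustrate feature-based learning.
    X_train = [
        [0, 3.0],   # cat: pointed ears, short muzzle
        [0, 2.5],   # cat
        [1, 9.0],   # dog: floppy ears, long muzzle
        [0, 8.5],   # dog: pointed ears but long muzzle
    ]
    y_train = ["cat", "cat", "dog", "dog"]

    # Fit the model so it learns patterns relating features to labels.
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_train, y_train)

    # Classify a new, unseen animal.
    print(model.predict([[1, 8.0]]))  # -> ['dog']

Note that this model is narrow in exactly the sense described above: it has learned only the dog/cat boundary over these two features, so it cannot identify dog breeds or perform any other task without being redesigned and retrained.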



Publisher Statement

This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. The views, opinions, and/or findings contained in this material are those of the author(s) and should not be construed as an official Government position, policy, or decision, unless designated by other documentation. References herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise do not necessarily constitute or imply its endorsement, recommendation, or favoring by Carnegie Mellon University or its Software Engineering Institute. This report was prepared for the SEI Administrative Agent, AFLCMC/AZS, 5 Eglin Street, Hanscom AFB, MA 01731-2100.

NO WARRANTY. THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN "AS-IS" BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT.

[DISTRIBUTION STATEMENT A] This material has been approved for public release and unlimited distribution. Please see the Copyright notice for non-US Government use and distribution.

Copyright Statement

Copyright 2024 Carnegie Mellon University.
