Dangers of AI for Insider Risk Evaluation (D.A.R.E.)
The goal of artificial intelligence (AI) is to build machines capable of simulating human thoughts and actions. There are many implementations of AI; some of the most well-known are computer vision, robotics, machine learning (ML), and natural language processing (NLP). Each of these implementations uses different methods, inputs, features, parameters, and outputs to solve a specific problem. Typically, the problems being solved have a very narrow application, and the implementation chosen is highly tailored to the task at hand, as in the following examples:
• computer vision for detecting obstacles in the road [Janai 2020]
• NLP for speech recognition [Kamath 2019]
• deep learning for detecting breast cancer [Chan 2020]
This type of AI is called “narrow AI,” and it is programmed to operate within a predefined set of parameters, rules, and context. The models developed for narrow AI applications cannot be used for other tasks, even if the tasks are very similar. For example, a model programmed to tell the difference between images of dogs and cats would not be able to detect different dog breeds, just as a model programmed to detect customer bank fraud would not be able to predict bank employee fraud.

The categories of AI most relevant to the insider risk domain are ML and NLP. ML uses algorithms trained to find patterns in large datasets by analyzing different features or attributes of the data. For example, the dog versus cat model might analyze ear shape and muzzle length. Once the algorithm has been adjusted to find the correct patterns, it can then be used to predict, cluster, or classify additional data.
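The following minimal sketch illustrates this train-then-predict pattern using the dog versus cat example. The feature values, class labels, and the choice of scikit-learn and a decision tree are illustrative assumptions, not part of any system described in this report.

```python
# Illustrative sketch only: invented feature values for the dog vs. cat example.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: each row is [ear_pointiness, muzzle_length_cm].
X_train = [
    [0.9, 2.5],   # cat: pointed ears, short muzzle
    [0.8, 3.0],   # cat
    [0.3, 9.0],   # dog: floppier ears, longer muzzle
    [0.4, 11.0],  # dog
]
y_train = ["cat", "cat", "dog", "dog"]

# Training adjusts the model until it captures the patterns in these features.
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# The fitted model can then classify new, unseen examples.
print(model.predict([[0.85, 2.8], [0.35, 10.0]]))  # -> ['cat' 'dog']
```

As the narrow AI discussion above notes, a model fit this way is tied to its task and features; it could not, for instance, distinguish dog breeds without new features and retraining.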