<p dir="ltr">Imagine a military surveillance system trained to identify specific vehicles in desert environments. One day, this system is deployed in a snowy mountain region and begins misidentifying civilian vehicles as military targets. Or consider an artificial intelligence (AI) medical diagnosis system for battlefield injuries that encounters a novel type of wound it was never trained on, but it confidently -- and incorrectly -- recommends a standard treatment protocol. These scenarios highlight a critical challenge in artificial intelligence: how do we know when an AI system is operating outside its intended knowledge boundaries? This is the critical domain of out-of-distribution (OoD) detection—identifying when an AI system is facing situations it wasn't trained to handle. Through our work here in the SEI’s AI Division, particularly in collaborating with the Office of the Under Secretary of Defense for Research and Engineering (OUSD R&E) to establish the Center for Calibrated Trust Measurement and Evaluation (CaTE), we’ve seen firsthand the critical challenges facing AI deployment in defense applications.</p>