We investigate the performance of the isolation forest anomaly detection algorithm under data poisoning. We design an experiment using BETH, an empirical cybersecurity dataset, and report model performance as the training data are incrementally poisoned. We find that, while it may be feasible for attackers to use data poisoning to prevent an anomaly detection model from alerting on their attack, the isolation forest exhibits some robustness against this style of attack. Finally, we acknowledge the limitations of our experiment and provide recommendations for future research.
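The sketch below illustrates, under stated assumptions, the general shape of such an experiment: it is not the thesis's actual code or data. It trains scikit-learn's IsolationForest on a nominally benign training set into which an increasing fraction of attack-like rows has been injected, then scores a clean test set. The synthetic data, poisoning strategy, and fractions are illustrative assumptions only.

```python
# Minimal sketch (not the thesis's implementation): measure how an isolation
# forest's detection performance changes as the training data are poisoned.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in data (assumption): benign points near the origin, attack points shifted away.
X_benign = rng.normal(0.0, 1.0, size=(2000, 5))
X_attack = rng.normal(4.0, 1.0, size=(500, 5))

X_test = np.vstack([rng.normal(0.0, 1.0, size=(500, 5)),
                    rng.normal(4.0, 1.0, size=(50, 5))])
y_test = np.array([0] * 500 + [1] * 50)  # 1 = anomaly

for poison_frac in [0.0, 0.05, 0.10, 0.20]:
    # Poisoning step: inject attack-like rows into the "benign" training set.
    n_poison = int(poison_frac * len(X_benign))
    X_train = np.vstack([X_benign, X_attack[:n_poison]])

    model = IsolationForest(n_estimators=100, random_state=0).fit(X_train)
    scores = -model.score_samples(X_test)  # higher score = more anomalous
    print(f"poison={poison_frac:.0%}  AUC={roc_auc_score(y_test, scores):.3f}")
```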
History
Date: 2024-05-03
Degree Type: Master's Thesis
Department: Heinz College of Information Systems and Public Policy
Degree Name: Master of Science in Information Security Policy and Management (MSISPM)