Carnegie Mellon University

SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models

Version 3 2025-05-14, 01:25
Version 2 2025-02-19, 21:16
Version 1 2025-01-28, 20:43
dataset
posted on 2025-05-14, 01:25 authored by Peter Carragher

This research introduces SegSub, a framework for applying targeted image perturbations to investigate VLM resilience against knowledge conflicts. Our analysis reveals distinct vulnerability patterns: while VLMs are robust to parametric conflicts (20% adherence rates), they exhibit significant weaknesses in identifying counterfactual conditions (<30% accuracy) and resolving source conflicts (<1% accuracy). The correlation between contextual richness and hallucination rate (r = -0.368, p = 0.003) reveals which kinds of images are most likely to cause hallucinations. Through targeted fine-tuning on our benchmark dataset, we demonstrate improvements in VLM knowledge conflict detection, establishing a foundation for developing hallucination-resilient multimodal systems in information-sensitive environments.

History

Date

2025-01-28
