Carnegie Mellon University

Explainable Deep Machine Learning for Medical Image Analysis

thesis
posted on 2023-09-12, 20:32 authored by Alex Gaudio

Explanations justify the development and adoption of algorithmic solutions to prediction problems in medical image analysis. This thesis introduces two guiding principles for creating and exploiting explanations of deep networks and medical image data. The first principle is to use explanations to expose inefficiencies in the design of models and image datasets. The second is to leverage compression tools and fixed-weight methods that minimize learning in order to build more efficient and effective models and more usable medical image datasets. The outcome is more effective deep learning in medical image analysis. Applying these guiding principles in different settings yields five main contributions: (a) improved understanding of biases present in deep networks and medical images, (b) improved predictive and computational performance of predictive models, (c) creation of ante-hoc models that are interpretable by design, (d) creation of smaller image datasets, and (e) improved visual privacy. This thesis falls within the scope of the TAMI project for Transparent Artificial Machine Intelligence and focuses on explainable artificial intelligence (XAI) for medical image data.

History

Date

2023-08-07

Degree Type

  • Dissertation

Department

  • Electrical and Computer Engineering

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Asim Smailagic, Aurelio Campilho
