Domain-Adapted Visual Representation Learning for Machine Perception
Our objective is to enhance the generalization capability of existing machine perception models and achieve alignment across diverse domains through effective representation learning. Established approaches for perception tasks, including object classification, detection, tracking, and rendering, often confront domain changes that limit their adaptability to novel domains. We categorize these changes into three types: 1) alterations in pose and viewpoint, 2) variations in visual capture conditions, and 3) diversity in modalities. First, models trained on specific viewpoints may falter when faced with viewpoints outside their training range. Second, changes in visual capture conditions, such as shifts in illumination or image resolution, can erode the generalization of trained models. Third, applying pre-trained models across distinct modalities, such as RGB, Lidar point clouds, Radar maps, or text embeddings, can lead to performance degradation. In this thesis, we propose to perform domain alignment to handle these domain changes.
The first part of this thesis outlines our approach to performing domain alignment without laboriously training large models across multiple domains. We advocate handling each type of change efficiently through visual representation learning techniques, using models with few network parameters and limited training data. This process, known as domain adaptation, unfolds in three stages. First, for pose and viewpoint variation, we propose learning viewpoint-invariant or pose-invariant representations, which are relevant to tasks such as Re-ID, object tracking, and 3D face rendering. Second, to mitigate the impact of changes in visual capture conditions, we harness semi-supervised and adversarial learning methods for tasks such as object detection and Re-ID. Third, to address cross-modal domain changes, we leverage self-training strategies to cultivate modality-agnostic representations for object detection.
The second part of this thesis extends our domain-alignment framework to scenarios involving more than two forms of domain change. To handle viewpoint variation and diverse modalities concurrently, we devise models that learn view-invariant representations across multiple modalities for 3D human pose estimation and rendering. Moreover, to combat changes arising from resolution differences and diverse modalities in physical devices (e.g., ADC signals and Radar's RGB images), we advocate learning super-resolution representations with complex-valued models. Broadly, this thesis examines perception tasks affected by domain changes and provides pragmatic solutions to these challenges in real-world contexts.
- Electrical and Computer Engineering
- Doctor of Philosophy (PhD)