Robust and Scalable Perception For Autonomy
Autonomous mobile robots have the potential to drastically improve our daily lives. For example, self-driving vehicles could make transportation safer and more affordable. To navigate complex environments safely, such robots need a perception system that translates raw sensory data into high-level understanding. This thesis focuses on two fundamental challenges in building such perception systems via machine learning: robustness and scalability.

First, how can we learn a perception system that is robust to variation in sensory data? For example, the sensory data of an object may look completely different depending on its distance and the presence of occlusion. A perception system may also encounter objects it has never seen during learning. To capture such variation, we develop approaches that exploit novel characterizations of context, visibility, and geometric priors.

Second, how can we rearchitect perception systems so that they require less human supervision during learning? Standard perception stacks build perceptual modules that recognize objects and forecast their movements, and training these modules requires object labels such as trajectories and semantic categories. To learn from large-scale unlabeled logs, we explore freespace supervision as an alternative to the predominant object supervision. We integrate freespace self-supervision with motion planners and demonstrate promising results.
- Robotics Institute
- Doctor of Philosophy (PhD)