3D Object Detection with Depth Completion for Autonomous Driving via Camera and LIDAR Fusion
As the core module of an autonomous driving application, the perception system reports the status of the surrounding environment and is essential to safe motion planning and decision-making. To achieve high-performance perception, a robust solution is to mount multiple high-definition sensors on the autonomous car, including dense LIDAR, RGB cameras, and RADAR. However, this sensor configuration is not affordable for commercial-level autonomous cars and small robots. To solve this problem, we propose a low-cost perception system that fuses sparse LIDAR and an RGB camera to achieve sensing capability similar to that of high-definition LIDAR. Within this system, we propose a sensor-fusion-based depth enhancement approach that completes the sparse depth map and produces a dense point cloud of the surrounding scene. This approach is further improved with an inductive-fusion technique and a linear-inverse-problem formulation, which yield more accurate and efficient depth prediction. Integrating the depth completion algorithm into the full perception system further improves the accuracy of end-to-end 3D object detection. Overall, we achieve a robust and efficient perception system whose sensing capability is comparable to that of the high-definition sensor combination. Our system will reduce the cost of commercial autonomous vehicles while achieving reasonable perception performance, and it can be extended to other indoor and outdoor robots.
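The abstract mentions posing depth completion as a linear inverse problem. As a minimal illustrative sketch (not the dissertation's actual method), one classical formulation densifies a sparse LIDAR depth map by minimizing a data-fidelity term on observed pixels plus a smoothness penalty on the depth gradient, which reduces to a sparse linear least-squares system. The function names and the regularization weight `lam` below are assumptions for illustration only:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def grad_1d(n):
    """Forward-difference operator of shape (n-1, n): row i computes x[i+1] - x[i]."""
    return sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))

def complete_depth(sparse_depth, mask, lam=1.0):
    """Densify a sparse depth map by solving the linear inverse problem
        minimize  || d[mask] - z ||^2  +  lam * || grad(d) ||^2,
    where z are the observed sparse depths. This is an illustrative baseline,
    not the method proposed in the dissertation.
    """
    H, W = sparse_depth.shape
    n = H * W
    # Data term: a selection matrix picking out the observed pixels.
    idx = np.flatnonzero(mask.ravel())
    D = sp.csr_matrix(
        (np.ones(len(idx)), (np.arange(len(idx)), idx)), shape=(len(idx), n)
    )
    # Smoothness term: horizontal and vertical finite differences on the grid.
    Gx = sp.kron(sp.eye(H), grad_1d(W))
    Gy = sp.kron(grad_1d(H), sp.eye(W))
    # Normal equations of the regularized least-squares problem.
    A = (D.T @ D + lam * (Gx.T @ Gx + Gy.T @ Gy)).tocsc()
    b = D.T @ sparse_depth.ravel()[idx]
    return spsolve(A, b).reshape(H, W)

# Usage: complete an 8x8 depth map observed only on a sparse grid of pixels.
H, W = 8, 8
true_depth = np.full((H, W), 5.0)          # flat scene at 5 m for demonstration
mask = np.zeros((H, W), dtype=bool)
mask[::3, ::3] = True                      # sparse LIDAR-like samples
sparse_depth = np.where(mask, true_depth, 0.0)
dense_depth = complete_depth(sparse_depth, mask)
```

Because the smoothness penalty's null space contains only constant maps, any nonempty set of observations makes the system positive definite and the solution unique; in a full pipeline, the RGB image would typically modulate the smoothness weights so depth discontinuities align with image edges.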
History
Date
- 2022-12-15
Degree Type
- Dissertation
Department
- Electrical and Computer Engineering
Degree Name
- Doctor of Philosophy (PhD)