Carnegie Mellon University

3D Lensless Imaging – Theory, Hardware, and Algorithms

thesis
posted on 2023-04-17, 19:01, authored by Yi Hua

Lensless cameras enable us to see in many challenging scenarios. They can image in narrow spaces where lenses do not fit, operate at wavelengths where lenses do not work, and enable ultra-wide field-of-view microscopes. Despite these novel capabilities, current lensless cameras have limited imaging quality, which restricts their practicality. These limitations can be attributed to the conditioning and complexity of the inverse problem that lensless imagers must solve to recover the scene.

A common design in lensless imaging is that of a thin attenuating mask placed before a sensor. For a scene restricted to a fronto-parallel plane, the image formation model can be approximated as a 2D convolution between the plane’s texture and a scaled version of the mask pattern, and the ensuing inverse problem has efficient solutions. However, scenes of more complex geometry, such as those spanning a large depth range, pose a difficult and under-determined inverse problem. This thesis aims to develop lensless imaging techniques to effectively and efficiently photograph 3D scenes with an extended depth range. To that end, we make the following contributions to the theory, hardware, and algorithms of 3D lensless imaging.
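The fronto-parallel forward model described above can be sketched numerically. The following is a minimal illustration (my own construction, not code from the thesis): the sensor measurement is approximated as a 2D convolution of the plane's texture with a depth-scaled copy of the mask pattern. All sizes, names, and the nearest-neighbor rescaling are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def scaled_mask(mask, scale):
    """Rescale the mask pattern by the geometric magnification for a
    given scene depth (nearest-neighbor resampling for simplicity)."""
    h, w = mask.shape
    ys = np.clip((np.arange(h) / scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(w) / scale).astype(int), 0, w - 1)
    return mask[np.ix_(ys, xs)]

def forward(texture, mask, scale):
    """Simulated lensless measurement for a plane at one depth:
    a 2D convolution of the texture with the scaled mask (the PSF)."""
    psf = scaled_mask(mask, scale)
    psf = psf / psf.sum()              # normalize light throughput
    return fftconvolve(texture, psf, mode="same")

mask = (rng.random((64, 64)) > 0.5).astype(float)  # binary attenuating mask
texture = rng.random((64, 64))                     # fronto-parallel texture
measurement = forward(texture, mask, scale=1.2)
print(measurement.shape)                           # (64, 64)
```

Because the model is a single convolution, the measurement can be inverted efficiently in the Fourier domain, which is what makes the planar case tractable.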

First, we present a theoretical analysis of the spatial and axial resolution limits of a mask-based lensless camera, which provides an understanding of the performance of various camera designs. Specifically, we derive the closed-form expression of a 3D modulation transfer function as a function of the mask pattern, and connect the parameters of the mask to the camera’s achievable spatio-axial resolution. 
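The thesis derives the 3D modulation transfer function in closed form; as a hedged numerical companion (not the thesis's derivation), the sketch below evaluates one 2D slice of such a response: for a plane at a fixed depth, the MTF is the magnitude of the Fourier transform of the depth-scaled mask PSF (here, the mask itself at unit scale). Sweeping depth would stack these slices into a spatio-axial response. Sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
mask = (rng.random((128, 128)) > 0.5).astype(float)  # binary mask as the PSF

def mtf_slice(psf):
    """Normalized modulation transfer function of one PSF slice:
    the magnitude of its 2D Fourier transform, with DC scaled to 1."""
    otf = np.fft.fft2(psf)
    mag = np.abs(otf)
    return mag / mag[0, 0]

mtf = mtf_slice(mask)
print(mtf[0, 0])  # 1.0 at DC by construction
```

Inspecting where such slices fall toward zero indicates which spatial frequencies, at which depths, a given mask design can resolve.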

Second, we introduce programmable masks in lensless imagers to increase the number of measurements by capturing multiple frames while displaying different mask patterns. This upgrade in hardware allows computational focusing at a given depth, such that the resulting measurements are well approximated as the result of a 2D convolution, even when the scene extends over a large depth range. As a result, the texture corresponding to a specific depth can be recovered with an efficient deconvolution method and fewer artifacts.
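Once computational focusing reduces the measurements at a chosen depth to a single 2D convolution, the texture can be recovered efficiently in the Fourier domain. The sketch below uses a simple Wiener filter as a stand-in for the thesis's deconvolution method, with a circular (periodic) convolution for FFT simplicity; all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
texture = rng.random((n, n))                     # texture at the focused depth
psf = (rng.random((n, n)) > 0.5).astype(float)   # effective PSF after focusing
psf /= psf.sum()

# Circular-convolution measurement (periodic boundary for FFT simplicity).
H = np.fft.fft2(psf)
measurement = np.real(np.fft.ifft2(np.fft.fft2(texture) * H))

def wiener_deconvolve(y, H, eps=1e-8):
    """Fourier-domain Wiener deconvolution; eps regularizes frequencies
    where the transfer function H is close to zero."""
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

recovered = wiener_deconvolve(measurement, H)
print(np.abs(recovered - texture).mean())  # small reconstruction error
```

The whole pipeline is a handful of FFTs, which is why reducing the multi-frame measurements to a single convolution makes the recovery efficient.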

Finally, we present an inverse rendering approach to the reconstruction problem, which jointly solves for the texture and shape of the scene. This approach solves the inverse problem under a physically realistic and differentiable forward model. It allows us to faithfully represent scenes as surfaces, instead of the volumetric albedo functions commonly used in prior work, and avoids reconstruction artifacts arising from model mismatch.
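The core idea of inverse rendering, fitting scene parameters by gradient descent through a differentiable forward model, can be shown on a deliberately tiny toy problem (my own construction, unrelated to the thesis's renderer): a 1D Gaussian blob whose height stands in for texture and whose position stands in for shape, both recovered jointly from a simulated measurement using analytic gradients.

```python
import numpy as np

x = np.linspace(-4.0, 6.0, 101)
sigma = 1.0

def forward(amplitude, center):
    """Differentiable toy forward model: a Gaussian blob; amplitude
    plays the role of texture, center the role of shape."""
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

y_obs = forward(2.0, 1.0)          # simulated measurement of the true scene

a, mu = 1.0, 0.0                   # initial guesses for texture and shape
lr = 0.005
for _ in range(5000):
    g = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    r = a * g - y_obs              # residual against the measurement
    # Analytic gradients of the squared-error loss w.r.t. both unknowns.
    grad_a = 2.0 * np.sum(r * g)
    grad_mu = 2.0 * np.sum(r * a * g * (x - mu) / sigma ** 2)
    a -= lr * grad_a
    mu -= lr * grad_mu

print(a, mu)  # converges toward the true values (2.0, 1.0)
```

A realistic version replaces the Gaussian with a full physically based renderer and the two scalars with surface and texture parameters, but the optimization structure is the same: differentiate the rendering loss and descend.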

Together, these three contributions provide a fundamental advance to 3D lensless imaging.

History

Date

2023-02-13

Degree Type

  • Dissertation

Department

  • Electrical and Computer Engineering

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Aswin Sankaranarayanan
