
Stanford, UCSD team develops “4D” camera with possible applications in autonomous vehicles

Engineers at Stanford University and the University of California San Diego (UCSD) have developed a camera that couples a monocentric lens with multiple sensors through microlens arrays, enabling light field (LF) capture with an unprecedented field of view (FOV). The camera generates four-dimensional images and can capture 138 degrees of the scene.

The new camera, the first single-lens, wide-FOV light field camera, could generate information-rich images and video frames that will enable robots to better navigate the world and understand certain aspects of their environment, such as object distance and surface texture. The researchers also see the technology being used in autonomous vehicles and in augmented and virtual reality systems. They presented the work at the computer vision conference CVPR 2017 in July.

Light field (LF) capture and processing are important in an expanding range of computer vision applications, offering rich textural and depth information and simplification of conventionally complex tasks. Although LF cameras are commercially available, no existing device offers wide field-of-view (FOV) imaging. This is due in part to the limitations of fisheye lenses, for which a fundamentally constrained entrance pupil diameter severely limits depth sensitivity.

In this work we describe a novel, compact optical design that couples a monocentric lens with multiple sensors using microlens arrays, allowing LF capture with an unprecedented FOV. Leveraging capabilities of the LF representation, we propose a novel method for efficiently coupling the spherical lens and planar sensors, replacing expensive and bulky fiber bundles. We construct a single-sensor LF camera prototype, rotating the sensor relative to a fixed main lens to emulate a wide-FOV multi-sensor scenario. Finally, we describe a processing toolchain, including a convenient spherical LF parameterization, and demonstrate depth estimation and post-capture refocus for indoor and outdoor panoramas with 15 × 15 × 1600 × 200 pixels (72 MPix) and a 138° FOV.

—Dansereau et al.

Comparing LF camera lenses: left to right, a conventional Cooke triplet, a fisheye lens, and a monocentric lens. While conventional optics work well, they do not support large FOVs. Fisheye lenses scale to 180° and beyond but have fundamentally limited entrance pupils, making them unsuitable for LF capture. Monocentric lenses support both a wide FOV and a wide aperture but present a curved focal surface. This new study introduces an LF processing approach to coupling the spherical lens with planar sensor arrays. Dansereau et al.

The project is a collaboration between the labs of electrical engineering professors Gordon Wetzstein at Stanford and Joseph Ford at UC San Diego.

UC San Diego researchers designed a spherical lens that provides the camera with an extremely wide field of view, encompassing nearly a third of the circle around the camera. Ford’s group had previously developed the spherical lenses under the DARPA “SCENICC” (Soldier CENtric Imaging with Computational Cameras) program to build a compact video camera that captures 360-degree images in high resolution, with 125 megapixels in each video frame. In that project, the video camera used fiber optic bundles to couple the spherical images to conventional flat focal planes, providing high performance but at high cost.

The new camera uses a version of the spherical lenses that eliminates the fiber bundles through a combination of lenslets and digital signal processing. Combining the optics design and system integration expertise of Ford’s lab with the signal processing and algorithmic expertise of Wetzstein’s lab resulted in a digital solution that not only creates these extra-wide images but also enhances them.

The new camera also relies on a technology developed at Stanford called light field photography, which is what makes the camera four-dimensional: in addition to the 2D image, it records the two-axis direction of the light hitting the lens. Because each image includes information about both the position and the direction of the light, light field photography also lets users refocus images after they are taken. Robots could use this capability to see through rain and other conditions that would obscure their vision.
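
The post-capture refocusing described above can be sketched as a shift-and-add pass over a 4D light field array. This is a minimal illustration of the general technique, not the team's actual pipeline: the array shape, the integer-pixel shifts, and the `slope` parameter are simplifying assumptions.

```python
import numpy as np

def refocus(lf, slope):
    """Synthetic refocus of a 4D light field by shift-and-add.

    lf    : array of shape (U, V, S, T) -- two angular axes (u, v)
            giving ray direction, two spatial axes (s, t) giving the
            2D image, per the four dimensions described above.
    slope : pixels of parallax per unit of angular offset; each value
            of slope corresponds to a different focal depth.
    """
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view to undo the parallax that a
            # point at the chosen depth exhibits across views, then sum.
            du = int(round(-slope * (u - U // 2)))
            dv = int(round(-slope * (v - V // 2)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

A scene point whose position shifts by `slope` pixels from one view to the next lands perfectly aligned in the sum and appears in focus; points at other depths stay misaligned and blur, which is why the focal plane can be chosen after capture.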

One of the things you realize when you work with an omnidirectional camera is that it’s impossible to focus in every direction at once—something is always close to the camera, while other things are far away. Light field imaging allows the captured video to be refocused during replay, as well as single-aperture depth mapping of the scene. These capabilities open up all kinds of applications in VR and robotics.

—Joseph Ford
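
Ford's point about single-aperture depth mapping can be illustrated with a toy photo-consistency search: for each candidate parallax slope (a proxy for depth), align all sub-aperture views of the light field and keep the slope at which they agree best. The brute-force search, integer shifts, and single per-image estimate are assumptions made for brevity, not the authors' method.

```python
import numpy as np

def estimate_slope(lf, candidate_slopes):
    """Estimate depth (as a parallax slope) from one light field
    exposure by finding the shift that best aligns all sub-aperture
    views of lf, an array of shape (U, V, S, T)."""
    U, V, S, T = lf.shape
    best_slope, best_cost = None, np.inf
    for slope in candidate_slopes:
        aligned = np.empty((U * V, S, T))
        k = 0
        for u in range(U):
            for v in range(V):
                du = int(round(-slope * (u - U // 2)))
                dv = int(round(-slope * (v - V // 2)))
                aligned[k] = np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
                k += 1
        # Photo-consistency: at the true depth every view sees the same
        # thing after alignment, so across-view variance is smallest.
        cost = aligned.var(axis=0).mean()
        if cost < best_cost:
            best_slope, best_cost = slope, cost
    return best_slope
```

Because all the views come through one main lens, this is depth from a single aperture, with no stereo rig or second camera required.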

While this camera can work like a conventional camera at far distances, it is also designed to improve close-up images. It would be particularly useful for robots that have to navigate through small areas, drones coming in for a landing, and self-driving cars. As part of an augmented or virtual reality system, its depth information could enable more seamless renderings of real scenes and better integration between those scenes and virtual components.

The camera is currently at the proof-of-concept stage and the team is planning to create a compact prototype to test on a robot. This research was funded by the NSF/Intel Partnership on Visual and Experiential Computing.




Vehicles equipped with a backup camera should have 180+ degree capability.
