
MIT team boosts resolution of Time-of-Flight depth sensors 1,000-fold; cascaded Time of Flight

In a new paper appearing in IEEE Access, members of the Camera Culture group at MIT’s Media Lab present a new approach to time-of-flight (ToF) imaging that increases its depth resolution 1,000-fold, the kind of resolution that could make self-driving cars practical. The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to their development.

A Time-of-Flight camera gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor. In such devices, the power of an integrated laser is amplitude-modulated at megahertz frequencies and demodulated using a specialized imaging sensor to obtain subcentimeter range precision. To use a similar architecture but to obtain micrometer range precision, the MIT researchers developed a computational technique (“cascaded Time of Flight”) for use by correlation ToF imagers.

A correlation ToF imager uses a strobed light source that modulates its intensity in a high-frequency, sinusoidal pattern. When light reflects back to the camera, a phase shift is observed in the received intensity signal. This phase shift is directly proportional to both the distance traveled and the chosen frequency of strobed illumination.
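To make that phase-to-distance relationship concrete, here is a minimal Python sketch (ours, not from the paper): the light covers the round trip of twice the depth, so a measured phase shift maps back to distance via d = c·Δφ/(4πf).

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase(phase_rad, mod_freq_hz):
    """Depth implied by a measured phase shift in correlation ToF.

    The light covers the round trip 2*d, so
        phase = 2*pi * f * (2*d / c)  =>  d = c * phase / (4*pi * f).
    """
    return C * phase_rad / (4 * np.pi * mod_freq_hz)

# A target 2 m away under 120 MHz modulation shifts the phase by:
d, f = 2.0, 120e6
phase = 4 * np.pi * f * d / C
print(f"phase shift: {phase:.3f} rad")  # ~10.05 rad (wraps mod 2*pi in practice)
print(f"recovered depth: {distance_from_phase(phase, f):.3f} m")
```

Because the phase wraps every 2π, each modulation frequency also has a finite unambiguous range of c/(2f), which is one reason system designers cannot simply crank the frequency without other changes.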

Higher frequencies are known to improve the depth resolution of correlation ToF. As such, there has been a continued quest by hardware engineers to build high-frequency correlation ToF imagers. … At the time of this writing, the commodity ToF imager with the highest achievable frequency is the Microsoft Kinect which operates at 120 MHz. Unfortunately, silicon advances in image sensors are not likely to result in the ability to detect GHz frequencies anytime soon (henceforth, the “GHz gap”). Therefore, our aim in this paper is to incorporate heterodyning into the correlation ToF pipeline, to achieve high depth resolution. We are concerned with very low heterodyne frequencies, on the order of a few Hertz, to motivate future work where a video framerate sensor could be used for measurements.

—Kadambi and Raskar (2017)

At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That’s good enough for the assisted-parking and collision-detection systems on today’s cars. However, explained Achuta Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences and first author on the paper, as range increases, resolution goes down exponentially.

Let’s say you have a long-range scenario, and you want your car to detect an object further away so it can make a fast update decision. You may have started at 1 centimeter, but now you’re back down to [a resolution of] a foot or even 5 feet. And if you make a mistake, it could lead to loss of life.

—Achuta Kadambi

At distances of 2 meters, the MIT researchers’ system, by contrast, has a depth resolution of 3 micrometers. Kadambi also conducted tests in which he sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to his system. Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter.

Kadambi is joined on the paper by his thesis advisor, Ramesh Raskar, an associate professor of media arts and sciences and head of the Camera Culture group.

With time-of-flight imaging, the longer the light burst, the more ambiguous the measurement of how far it has traveled; light-burst length is one of the factors that determines system resolution.

The other factor, however, is detection rate. Modulators, which turn a light beam off and on, can switch a billion times a second, but today’s detectors can make only about 100 million measurements a second. Detection rate is what limits existing time-of-flight systems to centimeter-scale resolution.
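A back-of-envelope calculation shows why the detection rate caps resolution. Depth error scales as c/(4πf) per radian of phase noise; the 0.04 rad figure below is an illustrative assumption, not a number from the paper.

```python
import numpy as np

C = 299_792_458.0  # m/s

def depth_error(phase_noise_rad, freq_hz):
    # Same relation as above: depth error scales as c / (4*pi*f)
    return C * phase_noise_rad / (4 * np.pi * freq_hz)

for f in (100e6, 1e9):  # today's ~100 MHz detectors vs a hypothetical GHz one
    print(f"{f/1e6:6.0f} MHz: 0.04 rad phase noise -> "
          f"{depth_error(0.04, f) * 1000:.2f} mm depth error")
# 100 MHz -> ~9.5 mm (centimeter scale); 1 GHz -> ~1 mm
```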

There is, however, another imaging technique that enables higher resolution, Kadambi said: interferometry, in which a light beam is split in two, and half of it is kept circulating locally while the other half—the “sample beam”—is fired into a visual scene.

The reflected sample beam is recombined with the locally circulated light, and the difference in phase between the two beams—the relative alignment of the troughs and crests of their electromagnetic waves—yields a very precise measure of the distance the sample beam has traveled. However, interferometry requires careful synchronization of the two light beams.
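The precision gap is easy to quantify: interferometry resolves depth to a small fraction of the optical wavelength itself. A rough sketch, assuming a 1550 nm source and a phase precision of 1/100 of a fringe (both illustrative figures, not from the article):

```python
import numpy as np

wavelength = 1550e-9               # m; a common telecom-band source (assumed)
phase_precision = 2 * np.pi / 100  # resolve 1/100 of a fringe (assumed)

# The round trip again covers 2*d, so depth precision = wavelength * dphi / (4*pi)
depth_precision = wavelength * phase_precision / (4 * np.pi)
print(f"{depth_precision * 1e9:.1f} nm")  # ~7.8 nm: why interferometry is so precise
```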

You could never put interferometry on a car because it’s so sensitive to vibrations. We’re using some ideas from interferometry and some of the ideas from LIDAR, and we’re really combining the two here.

—Achuta Kadambi

Kadambi and Raskar are also using some ideas from acoustics. If two singers, for example, are slightly out of tune—one producing a pitch at 440 hertz and the other at 437 hertz—the interplay of their voices will produce another tone, the frequency of which is the difference between those of the notes they’re singing—in this case, 3 hertz.

The same is true with light pulses. If a time-of-flight imaging system is firing light into a scene at the rate of a billion pulses a second, and the returning light is combined with light pulsing 999,999,999 times a second, the result will be a light signal pulsing once a second—a rate easily detectable with a commodity video camera. And that slow “beat” will contain all the phase information necessary to gauge distance.
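The heterodyne trick can be sketched in a few lines of numpy. The frequencies below are scaled down by a factor of a million so the simulation stays small; the structure (mix the return with a slightly detuned local modulation, low-pass filter, read the phase off the slow beat) is the same.

```python
import numpy as np

# Scaled-down stand-ins for the GHz signals: simulating a 1 Hz beat between
# two ~1 GHz tones directly would need billions of samples.
f_tx, f_lo = 1000.0, 999.0      # "transmit" and "local" modulation, Hz
f_beat = f_tx - f_lo            # 1 Hz beat
phi = 1.234                     # phase delay imposed by the round trip (arbitrary)

fs = 100_000.0                  # simulation sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)   # two full beat periods

returned = np.cos(2 * np.pi * f_tx * t - phi)  # modulated light back from the scene
local    = np.cos(2 * np.pi * f_lo * t)        # second modulation at the sensor

# Mixing yields sum and difference tones; block-averaging acts as the
# low-pass filter, leaving only the slow f_beat component.
mixed = returned * local
win = 1000                                  # each window spans 10 carrier cycles
beat = mixed.reshape(-1, win).mean(axis=1)
t_b  = t.reshape(-1, win).mean(axis=1)

# The 1 Hz beat carries the original phase: demodulate to recover phi.
z = np.sum(beat * np.exp(-2j * np.pi * f_beat * t_b))
print(f"true phase {phi:.3f} rad, recovered {-np.angle(z):.3f} rad")
```

The point the demo makes is the one in the paragraph above: the full phase delay of the fast carrier survives intact on the 1 Hz beat, where a slow, inexpensive sensor can read it.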

But rather than try to synchronize two high-frequency light signals—as interferometry systems must—Kadambi and Raskar simply modulate the returning signal, using the same technology that produced it in the first place. That is, they pulse the already pulsed light. The result is the same, but the approach is much more practical for automotive systems.

The fusion of the optical coherence and electronic coherence is very unique. We’re modulating the light at a few gigahertz, so it’s like turning a flashlight on and off millions of times per second. But we’re changing that electronically, not optically. The combination of the two is really where you get the power for this system.

—Ramesh Raskar

Gigahertz optical systems are naturally better at compensating for fog than lower-frequency systems. Fog is problematic for time-of-flight systems because it scatters light—deflecting the returning light signals so that they arrive late and at odd angles. Trying to isolate a true signal in all that noise is too computationally challenging to do on the fly.

With low-frequency systems, scattering causes only a slight shift in phase, one that simply muddies the signal reaching the detector. But with high-frequency systems, the phase shift is large relative to the wavelength of the modulation. Scattered light signals arriving over different paths will actually cancel each other out: the troughs of one wave will align with the crests of another.
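A quick Monte Carlo illustrates the cancellation (a toy model of ours, not the cited analyses): sum unit phasors for 1,000 scattered paths with random extra travel of up to 3 m. At 10 MHz the phases barely spread and the clutter adds up nearly coherently; at 1 GHz they wrap through many full cycles and largely cancel.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 299_792_458.0

# Each scattered path contributes a unit phasor exp(i * phase) with phase set
# by its extra round-trip travel (0-3 m here, an arbitrary choice).
extra_path = rng.uniform(0.0, 3.0, size=1000)  # extra travel per path, m

for f in (10e6, 1e9):  # 10 MHz vs 1 GHz modulation
    scattered = np.exp(2j * np.pi * f * extra_path / C).mean()
    print(f"{f/1e6:6.0f} MHz: |sum of scattered phasors| = "
          f"{abs(scattered):.3f}  (1.0 = no cancellation)")
# ~0.98 at 10 MHz vs ~0.03 at 1 GHz
```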

Theoretical analyses performed at the University of Wisconsin and Columbia University suggest that this cancellation will be widespread enough to make identifying a true signal much easier.


Comments

mahonj

You do not need anything like that level of resolution to do autonomous driving.
The lidar resolution we have now is enough. We need lower cost lidar and lidar that can see through rain and fog (or just use radar).
Also, how are you going to get enough laser power to see out 500m?
I'll believe they can see through fog at optical wavelengths when I see it.
Meanwhile, I feel they have solved the wrong problem (cost of lidar).

Engineer-Poet

It doesn't matter what you "need", it's going to find uses.  If 1 foot resolution is good enough for vehicles, somebody will find a way to make it cheaper.  1 cm resolution will lead to new applications in things like mapping from drones.

One of the interesting things about the ultra-fine resolution is that it can be used to track not just the speed but also the acceleration of other vehicles in real time.  Spotting a vehicle beginning to slow as the driver comes off the throttle is a leading indicator even before the brake lights come on.  This would enable platooning with drafting, among other things.

HarveyD

I fully agree with E-P on this one.

Lower cost and increased resolution sensors will be available and in use on most new vehicles by 2025 or so.

On board redundant sub-miniature computers with AI will also be used to quickly process data and drive equipped vehicles safely to destinations.

Arnold

Thinking about the possible extra cost from the increased processing and IPR requirements has led me to think there could be much greater savings from a reduced number of lidars that still offer greater resolution, especially if the expected ability to see around corners is realised.
The number of GPS, ultrasonic, camera and lidar units (very expensive), along with the processing and associated power drain, must be a big part of the costs.
If the resolution of each increases by a factor of 3, it follows that the costs should come down.

Engineer-Poet

After thinking about this some more, I suspect that we're going to see this getting used on human-driven vehicles as well.  Imagine the enhanced vision possibilities of LIDAR which can use time of flight as well as raw reflection intensity to spot objects in fog or snow.  The car's HUD could paint a virtual landscape over the actual one.

This really is a game-changing invention.
