New system uses monocular camera instead of expensive laser scanners for automated vehicle navigation with comparable performance

A doctoral candidate in computer science and engineering at the University of Michigan has developed a new software system that could sharply reduce the sensor cost of self-driving and automated cars by enabling them to navigate with a single monocular camera, matching the accuracy of the expensive laser scanners such vehicles currently rely on at a fraction of the cost. His paper detailing the system was recently named best student paper at the Conference on Intelligent Robots and Systems in Chicago.

Ryan Wolcott’s system builds on the navigation systems used in other self-driving cars that are currently in development, including Google’s vehicle. These use three-dimensional laser scanning technology to create a real-time map of their environment, then compare that real-time map to a pre-drawn map stored in the system. By making thousands of comparisons per second, they’re able to determine the vehicle's location within a few centimeters.
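To make that map-matching idea concrete, here is a minimal sketch of the kind of pose search such a system performs: candidate poses near the last estimate are scored by how well the live scan lines up with the stored map, and the best-scoring pose is taken as the vehicle's location. Everything here is illustrative; the names, the 2D occupancy grid, and the brute-force search are assumptions for exposition, not the actual implementation (production systems typically use particle filters or similar estimators).

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PriorMap:
    grid: np.ndarray   # occupancy grid built from earlier mapping drives
    resolution: float  # meters per grid cell

def score_pose(scan_xy, prior_map, pose):
    """Score how well a live scan (N x 2 points in the vehicle frame) lines
    up with the stored map when the vehicle is assumed to sit at `pose`
    (x, y, heading). Higher totals mean scan points land on mapped cells."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    # Rotate the scan into the world frame and translate to the candidate pose.
    world = scan_xy @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    ij = np.floor(world / prior_map.resolution).astype(int)
    h, w = prior_map.grid.shape
    ok = (ij[:, 0] >= 0) & (ij[:, 0] < h) & (ij[:, 1] >= 0) & (ij[:, 1] < w)
    return prior_map.grid[ij[ok, 0], ij[ok, 1]].sum()

def localize(scan_xy, prior_map, last_pose):
    """Try thousands of poses near the last estimate and keep the best; this
    brute-force search stands in for the filtering a real system would use."""
    candidates = [(last_pose[0] + dx, last_pose[1] + dy, last_pose[2] + dt)
                  for dx in np.arange(-0.5, 0.5, 0.05)
                  for dy in np.arange(-0.5, 0.5, 0.05)
                  for dt in np.arange(-0.1, 0.1, 0.02)]
    return max(candidates, key=lambda p: score_pose(scan_xy, prior_map, p))
```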

Wolcott’s system uses the same approach, with one crucial difference—his software converts the map data into a three-dimensional picture much like a video game. The car’s navigation system can then compare these synthetic pictures with the real-world pictures streaming in from a conventional video camera.
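A minimal sketch of that camera-based matching step follows, assuming the comparison uses normalized mutual information, an image-similarity measure that tolerates the lighting and texture differences between a synthetic rendering and a real photograph. The code and the `renderer.render()` hook are hypothetical stand-ins, not the published implementation.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """Normalized mutual information between two grayscale images.
    NMI rewards statistical dependence rather than pixel equality, so a
    flat-shaded synthetic rendering can still score well against a real
    camera frame captured in different lighting."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy)

def localize_with_camera(frame, renderer, candidate_poses):
    """Return the candidate pose whose synthetic view best matches the live
    camera frame. `renderer.render(pose)` is a hypothetical hook that would
    rasterize the stored 3D map from the given viewpoint."""
    scores = [normalized_mutual_information(frame, renderer.render(pose))
              for pose in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]
```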

The laser scanners used by most self-driving cars in development today cost tens of thousands of dollars, and I thought there must be a cheaper sensor that could do the same job. Cameras only cost a few dollars each and they’re already in a lot of cars. So they were an obvious choice.

—Ryan Wolcott

Ryan Eustice, a U-M associate professor of naval architecture and marine engineering who is working with Wolcott on the technology, said one of the key challenges was designing a system that could process a massive amount of video data in real time.

To do the job, the team again turned to the world of video games, building a system out of graphics processing technology that’s well known to gamers. The system is inexpensive, yet able to make thousands of complex decisions every second.
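As a toy illustration of why graphics hardware helps: the scoring can be batched so that a single operation evaluates the camera frame against many pre-rendered candidate views at once. The NumPy sketch below stands in for what shader programs would do in parallel on a GPU, and the sum-of-squared-differences score is a deliberately simpler proxy than the similarity measure sketched above.

```python
import numpy as np

def score_batch(frame, synthetic_views):
    """Score one camera frame against a whole batch of synthetic views at
    once. `synthetic_views` has shape (num_poses, H, W) and `frame` (H, W);
    the result holds one score per candidate pose (higher = better match)."""
    diff = synthetic_views - frame[None, :, :]
    return -(diff ** 2).reshape(diff.shape[0], -1).sum(axis=1)

# Hypothetical usage: pick the pose whose rendering best matches the frame.
# best_pose = candidate_poses[int(np.argmax(score_batch(frame, views)))]
```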

The team has successfully tested the system on the streets of downtown Ann Arbor. The car was kept under manual control for safety, but the navigation system provided accurate location information throughout the tests. Further testing is slated for this year at U-M’s new M City test facility, set to open this summer.

The system won’t completely replace laser scanners, at least for now—they are still needed for other functions such as long-range obstacle detection. But the researchers say it’s an important step toward building lower-cost navigation systems. Eventually, their research may also help self-driving vehicle technology move past map-based navigation and pave the way to systems that see the road more like humans do.

Map-based navigation is going to be an important part of the first wave of driverless vehicles, but it does have limitations—you can’t drive anywhere that’s not on the map. Putting cameras in cars and exploring what we can do with them is an early step toward cars that have human-level perception.

—Ryan Eustice

The camera-based system still faces many of the same hurdles as laser-based navigation, including how to adapt to varying weather conditions and light levels, as well as unexpected changes in the road.

This work was supported by a grant from Ford Motor Co. via the Ford-UM Alliance under award N015392; Wolcott was supported by the SMART Scholarship for Service Program of the US Department of Defense.

Comments

kalendjay

Birds have such a poor sense of perspective that they bob their heads when they walk. Yes, it works.
