Audi displays new AI project at NIPS: mono camera with semantic segmentation and depth estimation creates precise 3D model of environment
Audi is exhibiting an innovative pre-development project to support autonomous driving at the NIPS 2017 conference in Long Beach, California, this week. A project team from the Audi subsidiary Audi Electronics Venture (AEV) developed a mono camera that uses artificial intelligence to generate an extremely precise 3D model of the environment. This technology makes it possible to capture the exact surroundings of the car.
A conventional front camera acts as the sensor. It captures the area in front of the car within an angle of about 120 degrees and delivers 15 images per second at a resolution of 1.3 megapixels. These images are then processed in a neural network. This is where semantic segmentation takes place, in which each pixel is assigned to one of 13 object classes. This enables the system to identify and differentiate other cars, trucks, houses, road markings, people and traffic signs.
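The per-pixel classification step can be sketched as follows. This is a minimal illustration, not Audi's implementation: the class names and the score format are assumptions chosen for the example (Audi has only stated that there are 13 classes).

```python
# Hypothetical sketch of per-pixel semantic segmentation: for each pixel the
# network produces a score per object class, and the highest score wins.
# The 13 class names below are illustrative, not Audi's published list.
CLASSES = ["car", "truck", "house", "road_marking", "person", "traffic_sign",
           "road", "sidewalk", "vegetation", "sky", "pole", "fence", "other"]

def segment(score_map):
    """score_map: H x W grid, each cell a list of 13 class scores.
    Returns an H x W grid of class labels (argmax per pixel)."""
    return [[CLASSES[max(range(len(CLASSES)), key=scores.__getitem__)]
             for scores in row]
            for row in score_map]

# Tiny 1x2 "image": the first pixel's scores favour "car", the second "road".
scores = [[[0.9] + [0.0] * 12, [0.0] * 6 + [0.8] + [0.0] * 6]]
print(segment(scores))  # [['car', 'road']]
```

In a real network the scores would come from a convolutional decoder; the argmax step shown here is the same.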
The system also uses neural networks to estimate distance. Depth is visualized via ISO lines, virtual boundaries that mark a constant distance. Combining semantic segmentation with these depth estimates produces a precise 3D model of the actual environment.
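One simple way to derive such constant-distance contours is to bucket each pixel's estimated depth into bands; the boundaries between bands are the ISO lines. The band spacing below is an assumption for illustration, not a value Audi has published.

```python
# Hedged sketch: quantize a per-pixel depth map (metres) into constant-distance
# bands; transitions between band indices trace the "ISO lines" of equal depth.
def depth_bands(depth_map, spacing_m=5.0):
    """depth_map: H x W grid of depth estimates in metres.
    Returns an H x W grid of integer band indices (0 = nearest band)."""
    return [[int(d // spacing_m) for d in row] for row in depth_map]

row = [2.0, 4.9, 5.1, 12.0, 19.9]
print(depth_bands([row]))  # [[0, 0, 1, 2, 3]]
```

With 5 m spacing, the band boundary between the second and third pixels corresponds to the 5 m ISO line.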
Audi engineers had previously trained the neural network with the help of “unsupervised learning.” In contrast to supervised learning, unsupervised learning requires no pre-sorted, hand-labeled data; the network learns from observations of situations and scenarios on its own.
The neural network was shown numerous videos of road situations that had been recorded with a stereo camera. From these, the network independently learned the rules it uses to produce 3D information from the images of the mono camera. AEV's project holds great potential for interpreting traffic situations, Audi suggests.
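The training signal in such stereo-supervised approaches is typically photometric: the network predicts a disparity per pixel, and warping one stereo image by that disparity should reproduce the other. The 1-D toy below sketches that idea; Audi's exact method is not public, so the loss formulation here is an assumption in the spirit of published self-supervised depth work.

```python
# Hedged sketch of self-supervised depth training from stereo pairs: predict a
# per-pixel disparity, reconstruct the left image by sampling the right image
# shifted by that disparity, and penalize the photometric (L1) error.
def photometric_loss(left, right, disparity):
    """1-D toy: left/right are rows of pixel intensities, disparity is a
    per-pixel integer shift. Returns mean absolute reconstruction error."""
    n = len(left)
    recon = [right[min(max(i - d, 0), n - 1)] for i, d in enumerate(disparity)]
    return sum(abs(a - b) for a, b in zip(left, recon)) / n

left  = [5, 5, 10, 10, 5]
right = [5, 10, 10, 5, 5]   # same scene seen shifted by one pixel
print(photometric_loss(left, right, [1] * 5))  # 0.0 - correct disparity
print(photometric_loss(left, right, [0] * 5))  # 2.0 - wrong disparity costs more
```

Minimizing this loss over many stereo frames is what lets the network later infer depth from a single mono image, with no labeled depth data involved.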
Along with the AEV, two partners from the Volkswagen Group are also presenting their own AI topics at the Audi booth for this year’s NIPS:
The Fundamental AI Research department within the Group IT’s Data:Lab focuses on unsupervised learning and optimized control through variational inference, an efficient method for representing probability distributions.
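Variational inference works by maximizing an evidence lower bound (ELBO) over a tractable family of distributions. The toy model below, a conjugate Gaussian chosen purely for illustration (it has nothing to do with the Data:Lab's actual work), shows the key property: the bound is tight exactly when the variational distribution matches the true posterior.

```python
# Hedged sketch of the ELBO in variational inference, on a toy model where
# everything is analytic: prior z ~ N(0,1), likelihood x|z ~ N(z,1),
# variational family q(z) = N(m, s^2).
import math

def elbo(x, m, s):
    """Analytic ELBO = E_q[log p(x|z)] + E_q[log p(z)] + H[q]."""
    e_loglik   = -0.5 * math.log(2 * math.pi) - 0.5 * ((x - m) ** 2 + s * s)
    e_logprior = -0.5 * math.log(2 * math.pi) - 0.5 * (m * m + s * s)
    entropy_q  =  0.5 * math.log(2 * math.pi * math.e * s * s)
    return e_loglik + e_logprior + entropy_q

x = 1.0
log_evidence = -0.5 * math.log(2 * math.pi * 2) - x * x / 4  # marginally x ~ N(0, 2)
best = elbo(x, x / 2, math.sqrt(0.5))          # q set to the exact posterior N(x/2, 1/2)
print(abs(best - log_evidence) < 1e-9)         # True - the bound is tight
print(elbo(x, 0.0, 1.0) < best)                # True - any other q bounds from below
```

In practice the expectations are estimated by sampling and the variational parameters are optimized by gradient ascent; the efficiency of this optimization is what makes the method attractive for control.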
The Audi team from the Electronics Research Laboratory in Belmont, California, is demonstrating a solution for purely AI-based parking and driving in parking lots and on highways. Here, lateral guidance of the car is handled entirely by neural networks. The AI learns to independently generate a model of the environment from camera data and to steer the car. This approach requires neither highly precise localization nor highly precise map data.
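The core of such an approach is learning a steering policy directly from demonstrations. The sketch below illustrates the idea with a trivial linear model standing in for the neural network; the lane-offset feature, training data and expert gain are all invented for the example and are not details of Audi's system.

```python
# Hedged sketch of behaviour-cloning lateral control: fit a tiny linear policy
# steer = w * lane_offset to recorded (feature, steering) pairs by gradient
# descent on the squared error. A real system would use a deep network on
# raw camera input; the training loop has the same shape.
def fit_steering(samples, lr=0.1, epochs=200):
    """samples: list of (lane_offset, steering_angle) demonstrations.
    Returns the learned weight w of the policy steer = w * lane_offset."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            w -= lr * 2 * (w * x - y) * x   # gradient of (w*x - y)^2 w.r.t. w
    return w

# "Demonstrations": the expert steers opposite to the lane offset with gain 0.4.
data = [(x / 10, -0.4 * x / 10) for x in range(-5, 6)]
w = fit_steering(data)
print(round(w, 3))  # -0.4 - the policy recovers the expert's steering gain
```

Because the policy is learned end-to-end from what the camera sees, no centimeter-accurate localization against an HD map is needed at drive time.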
In developing autonomous cars, Audi is benefiting from a large network in the field of artificial intelligence. This network includes companies in the hotspots of Silicon Valley, Europe and Israel.
In 2016, Audi became the first automobile manufacturer to participate at NIPS with its own exhibition booth. The brand appears again this year as a sponsor of NIPS and is seeking to further develop its network in California. AI specialists can also learn about employment opportunities with Audi there.
The new Audi A8 is the first production car developed for conditional automated driving at Level 3 (SAE). The Audi AI traffic jam pilot handles the task of driving in slow-moving traffic up to 60 km/h (37.3 mph), provided that laws in the market allow it and the driver selects it. A requirement for automated driving is a mapped image of the environment that is as precise as possible at all times. Artificial intelligence is a key technology for this.
The 31st Annual Conference on Neural Information Processing Systems (NIPS) takes place December 4 to 9.