NVIDIA showcases DRIVE Localization platform; localizing AVs within centimeters to HD maps worldwide

At CES 2019 last week, NVIDIA showcased DRIVE Localization—an open, scalable platform for vehicles to position themselves on high-definition maps with unprecedented robustness and accuracy, using mass-market sensors.

It’s vital for a self-driving car to be able to pinpoint its location within centimeters so it can understand its surroundings and establish a sense of the road and lane structures. This enables it to detect when a lane is forking or merging, plan lane changes and determine lane paths even when markings aren’t clear.

DRIVE Localization makes that precise positioning possible by matching semantic landmarks in the vehicle’s environment with features from HD maps to determine exactly where it is in real time. By leveraging mass-market sensors, the platform is cost-effective, enabling use in personal cars.
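
As a rough illustration of the matching idea only (not NVIDIA’s implementation), the sketch below associates detected landmarks with HD-map landmarks of the same semantic class using a simple nearest-neighbor match in the vehicle frame; the landmark classes, 2D coordinates and distance threshold are assumptions chosen for clarity.

```python
from dataclasses import dataclass
import math

@dataclass
class Landmark:
    kind: str   # e.g. "lane_boundary", "sign", "pole" (classes assumed for illustration)
    x: float    # position in the vehicle frame, meters
    y: float

def match_landmarks(detected, map_landmarks, max_dist=2.0):
    """Pair each detected landmark with the nearest map landmark of the same class."""
    matches = []
    for d in detected:
        best, best_dist = None, max_dist
        for m in map_landmarks:
            if m.kind != d.kind:
                continue
            dist = math.hypot(d.x - m.x, d.y - m.y)
            if dist < best_dist:
                best, best_dist = m, dist
        if best is not None:
            matches.append((d, best))
    return matches
```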

At the core of DRIVE Localization is the NVIDIA DRIVE Xavier SoC, the world’s first auto-grade processor dedicated to autonomous driving. Architected to satisfy such demanding computational needs, Xavier contains a deep learning accelerator (DLA) and a CUDA engine.

In combination with the NVIDIA software platform, these processors deliver fast inference for deep neural networks and high-performance parallel processing for computer vision algorithms, both of which are leveraged in the DRIVE Localization module.

Rather than relying on expensive lidar technology, DRIVE Localization gathers data from low-cost sensors on the vehicle: a front camera, a GNSS (Global Navigation Satellite System) receiver, an IMU (Inertial Measurement Unit) and the vehicle’s speedometer.
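
To make that sensor set concrete, here is a minimal, hypothetical container for one synchronized batch of inputs; the field names, types and units are assumptions for illustration and not the DRIVE API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LocalizationInput:
    """One synchronized set of low-cost sensor readings (illustrative only)."""
    timestamp_us: int          # capture time in microseconds
    camera_frame: np.ndarray   # front-camera image, HxWx3
    gnss_lat_lon: tuple        # (latitude, longitude) in degrees from the GNSS receiver
    gnss_accuracy_m: float     # reported horizontal accuracy, meters (typically several meters)
    imu_accel: np.ndarray      # 3-axis acceleration, m/s^2
    imu_gyro: np.ndarray       # 3-axis angular rate, rad/s
    wheel_speed_mps: float     # vehicle speed from the speedometer/odometry, m/s
```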

Xavier’s high-bandwidth sensor ingestion and processing pipelines enable NVIDIA’s dedicated deep neural networks to analyze data instantaneously and detect semantic features such as lane boundaries, signs, poles and road edges in a variety of weather and lighting conditions.
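
The structure of the per-frame detection output might look something like the sketch below; the `SemanticFeature` type and the `lane_boundaries` helper are hypothetical stand-ins for the output of NVIDIA’s proprietary networks, not their actual interface.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class SemanticFeature:
    kind: str               # "lane_boundary", "sign", "pole" or "road_edge"
    points_px: np.ndarray   # Nx2 image coordinates outlining the detected feature
    confidence: float       # network confidence in [0, 1]

def lane_boundaries(features: List[SemanticFeature], min_conf: float = 0.5):
    """Keep only confident lane-boundary detections for the map-matching step."""
    return [f for f in features if f.kind == "lane_boundary" and f.confidence >= min_conf]
```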

The DRIVE Localization module then overlays a third-party map with the sensor data, evaluating thousands of viewpoints in parallel. It performs this process faster than real time, scoring a massive array of candidate vantage points from the map against each frame of visual data to find the most precise position and orientation.
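
The many-viewpoints step can be pictured as a search over candidate poses: for each hypothesis (x, y, heading), project the map landmarks into the vehicle frame and measure how well they line up with the detected features. The scoring function, grid of candidates and NumPy vectorization below are assumptions for illustration; the production system runs this kind of workload on Xavier’s parallel hardware.

```python
import numpy as np

def score_candidates(candidates, map_pts, detected_pts, sigma=0.5):
    """Return the best-scoring candidate pose.

    candidates:   (K, 3) array of (x, y, heading) hypotheses in map coordinates
    map_pts:      (M, 2) landmark positions from the HD map
    detected_pts: (N, 2) landmark positions observed by the vehicle, in the vehicle frame
    """
    scores = np.zeros(len(candidates))
    for k, (x, y, h) in enumerate(candidates):
        c, s = np.cos(h), np.sin(h)
        # Transform map landmarks into this candidate's vehicle frame.
        rel = map_pts - np.array([x, y])
        local = rel @ np.array([[c, s], [-s, c]]).T
        # Reward each detection for being close to its nearest transformed map landmark.
        d = np.linalg.norm(detected_pts[:, None, :] - local[None, :, :], axis=2)
        scores[k] = np.exp(-(d.min(axis=1) ** 2) / (2 * sigma ** 2)).sum()
    return candidates[np.argmax(scores)]
```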

This requires an enormous amount of parallel computational horsepower, which the Xavier SoC delivers with efficiency.

While Xavier makes it possible for localization to work in real time, the system must also be able to perform this process wherever the car drives. To do so, NVIDIA turned to partners that have dedicated years and vast resources to building HD maps of the world’s roads—among them, Baidu, HERE, NavInfo, TomTom and Zenrin, which have made their maps compatible with DRIVE Software for global localization.

Additionally, NVIDIA is expanding the mapping ecosystem with its DRIVE Maps format, allowing any map vendor to convert its maps to this format and leverage DRIVE Localization directly, laying the foundation for access to further capabilities of the system.
