NVIDIA introduces DRIVE automotive computers at CES; teraflops of processing for autonomous driving and cockpit visualization

At CES in Las Vegas, NVIDIA introduced its DRIVE line of automotive computers, equipped with powerful capabilities for computer vision, deep learning and advanced cockpit visualization. NVIDIA will offer two car computers: NVIDIA DRIVE PX, for developing auto-pilot capabilities, and NVIDIA DRIVE CX, for creating the most advanced digital cockpit systems.

The NVIDIA DRIVE PX auto-pilot development platform provides the technical foundation for cars with completely new features that draw heavily on recent developments in computer vision and deep learning. DRIVE PX leverages the new NVIDIA Tegra X1 mobile super chip, which is built on NVIDIA’s latest Maxwell GPU architecture and delivers more than one teraflops of processing power, giving it more horsepower than the world’s fastest supercomputer of 15 years ago.

Maxwell is NVIDIA’s next-generation architecture for CUDA (Compute Unified Device Architecture, a parallel computing platform and programming model invented by NVIDIA) compute applications. Maxwell introduces an all-new design for the Streaming Multiprocessor (SM) that dramatically improves energy efficiency. Improvements to control logic partitioning, workload balancing, clock-gating granularity, compiler-based scheduling, number of instructions issued per clock cycle, and many other enhancements allow the Maxwell SM (also called SMM) to far exceed Kepler SMX efficiency.

The 256-core Tegra X1 provides twice the performance of its predecessor, the Tegra K1, which is based on the previous-generation Kepler architecture and debuted at last year’s Consumer Electronics Show.

Tegra X1's specifications include:

  • 256-core Maxwell GPU
  • 8 CPU cores (4x ARM Cortex A57 + 4x ARM Cortex A53)
  • 60 fps 4K video (H.265, H.264, VP9)
  • 1.3 gigapixels/sec of camera throughput
  • 20nm process

Tegra processors are built for embedded products, mobile devices, autonomous machines and automotive applications.

We see a future of autonomous cars, robots and drones that see and learn, with seeming intelligence that is hard to imagine. They will make possible safer driving, more secure cities and great conveniences for all of us. To achieve this dream, enormous advances in visual and parallel computing are required. The Tegra X1 mobile super chip, with its one teraflops of processing power, is a giant step into this revolution.

—Jen-Hsun Huang, CEO and co-founder, NVIDIA

DRIVE PX, featuring two Tegra X1 super chips, has inputs for up to 12 high-resolution cameras, and can process up to 1.3 gigapixels per second.
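A quick back-of-the-envelope check shows what that 1.3 gigapixels-per-second budget means per camera. The 12-camera and 1.3 Gpix/s figures come from the article; the 1080p resolution used for comparison is an illustrative assumption, not a stated DRIVE PX configuration.

```python
# Back-of-the-envelope check of the DRIVE PX camera budget.
# Figures from the article: 12 camera inputs, 1.3 Gpix/s total.
# The 1080p comparison below is an illustrative assumption.

TOTAL_THROUGHPUT = 1.3e9          # pixels per second, per the article
NUM_CAMERAS = 12

per_camera = TOTAL_THROUGHPUT / NUM_CAMERAS     # ~108 Mpix/s each

# At 1080p (1920 x 1080 ~ 2.07 Mpix per frame), that budget allows roughly:
pixels_per_1080p_frame = 1920 * 1080
max_fps_1080p = per_camera / pixels_per_1080p_frame

print(f"Per-camera budget: {per_camera / 1e6:.0f} Mpix/s")
print(f"Max 1080p frame rate per camera: {max_fps_1080p:.0f} fps")
```

In other words, the stated throughput is roughly enough to ingest twelve 1080p streams at around 50 fps each.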

Its computer vision capabilities can enable Auto-Valet, allowing a car to find a parking space and park itself, without human intervention. While current systems offer assisted parallel parking in a specific spot, NVIDIA DRIVE PX can allow a car to discover open spaces in a crowded parking garage, park autonomously and then later return to pick up its driver when summoned from a smartphone.
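At its core, the "discover an open space" part of Auto-Valet is a search problem over a map the vision system builds of the garage. The sketch below is a toy illustration of that search step only, using a hand-written occupancy grid; it is not NVIDIA's algorithm, and a real system would first have to construct the map from camera and sensor data.

```python
from collections import deque

# Toy occupancy-grid search illustrating the kind of problem an
# Auto-Valet feature must solve: from the garage entrance, find the
# nearest open parking space. Illustrative sketch only; the grid,
# symbols and entrance position are made-up assumptions.

def nearest_open_space(grid, start):
    """BFS over a grid: '.' = drivable, 'P' = open space, '#' = occupied."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    seen = {start}
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == "P":
            return (r, c)                 # first 'P' reached is the nearest
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None                           # garage is full

garage = [
    "....#",
    ".##.#",
    ".#P..",
    ".....",
]
print(nearest_open_space(garage, (0, 0)))  # -> (2, 2)
```

Breadth-first search is used here because it naturally returns the closest open space first; a production system would plan over a far richer map with drivable-path constraints.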

The deep learning capabilities of DRIVE PX enable a car to learn to differentiate various types of vehicles—for example, discerning an ambulance from a delivery van, a police car from a regular sedan, or a parked car from one about to pull into traffic. As a result, a self-driving car can detect subtle details and react to the nuances of each situation, like a human driver.
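The final step of a classification network like the ones described above is typically a softmax over per-class scores. The toy sketch below shows only that last step; the "logits" are made-up numbers standing in for the output of a trained network's final layer, and the class list simply reuses the article's examples.

```python
import math

# Toy illustration of the classification step at the end of a vision
# network. The logits are hypothetical stand-ins for a trained
# network's output; real systems compute them from camera frames.

VEHICLE_CLASSES = ["ambulance", "delivery van", "police car", "sedan"]

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.4, 0.2, 3.1, 0.9]             # hypothetical network output
probs = softmax(logits)
best = VEHICLE_CLASSES[probs.index(max(probs))]
print(best)  # -> police car
```

The probabilities, rather than a bare label, are what lets downstream driving logic weigh how confident the network is before reacting.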

The NVIDIA DRIVE CX cockpit computer is a complete solution with hardware and software to enable advanced graphics and computer vision for navigation, infotainment, digital instrument clusters and driver monitoring. It also enables Surround-Vision, which provides an undistorted top-down, 360-degree view of the car in real time—solving the problem of blind spots—and can completely replace a physical mirror with a digital smart mirror.

Available with either Tegra X1 or Tegra K1 processors, and complete road-tested software, the DRIVE CX can power up to 16.8 million pixels on multiple displays—more than 10 times that of current model cars.
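The 16.8-million-pixel figure is easy to put in context with standard display resolutions. The figure itself is from the article; which displays a DRIVE CX system would actually drive is not stated, so the pairing below is purely illustrative arithmetic.

```python
# Sanity check of the 16.8-million-pixel budget against standard
# display resolutions. The budget is from the article; the display
# pairings are illustrative assumptions.

BUDGET = 16_800_000                       # pixels, per the article

full_hd = 1920 * 1080                     # ~2.07 Mpix
uhd_4k = 3840 * 2160                      # ~8.29 Mpix

print(f"1080p displays covered: {BUDGET // full_hd}")   # -> 8
print(f"4K displays covered:    {BUDGET // uhd_4k}")    # -> 2
```

So the budget corresponds to roughly eight 1080p panels or two 4K panels, which makes the "more than 10 times current model cars" comparison concrete.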

Audi and NVIDIA share a common belief that machine learning is a powerful enhancement to our zFAS Piloted Driving technology [earlier post]. Thus, Audi sees DRIVE PX as a crucial tool for further research and development.

—Ricky Hudi, executive vice president of Electrical/Electronics Development at Audi AG

Both NVIDIA DRIVE PX and DRIVE CX platforms include a range of software application modules from NVIDIA or third-party solutions providers. The DRIVE PX auto-pilot development platform and DRIVE CX cockpit computer will be available in the second quarter of 2015.



Amazing development: a rather low-cost, very high-performance, complex CPU. If EV batteries had kept pace, the world would already be using 1,000 to 2,000 Wh/kg units, and many extended-range BEVs would sell for around $20K.


This is information processing and thus is subject to Moore's law. In fact, it goes well beyond Moore's law because there are many parallelizable processes in machine vision.

Batteries are not information processing and so are subject to the laws of mass production (which can still be impressive - see solar panels for an incredible example).

Thomas Pedersen

How nice that this processor can distinguish a police car from a regular sedan so you can make sure to get out of its way if needed... ;-)


I don't need a car that can spot police, I want one that runs dependably and won't cost a fortune.


One day (not that far away) SJC's wish will become reality.

Will driverless vehicles be able to avoid reckless high speed young drivers?

Or, could vehicles with drivers be restricted to specific lanes or roads?
