At CES, NVIDIA outlined the functional safety architecture of NVIDIA DRIVE, its AI autonomous vehicle platform, which uses redundant and diverse functions to keep vehicles operating safely even in the event of faults in the operator, the environment or the vehicle's systems.
The NVIDIA DRIVE architecture enables automakers to build and deploy self-driving cars and trucks that are functionally safe and can be certified to international safety standards such as ISO 26262. NVIDIA DRIVE provides a holistic safety platform spanning process, technologies and simulation systems:
Process: Sets out the steps for establishing a pervasive safety methodology for the design, management and documentation of the self-driving system.
Processor Design and Hardware Functionality: Incorporates a diversity of processors to achieve fail-operational capability. These include NVIDIA-designed IP in the NVIDIA Xavier SoC: CPU and GPU cores, a deep learning accelerator (DLA), an image signal processor (ISP), a programmable vision accelerator (PVA) for computer vision, and video processors, all built to the highest quality and safety standards. Included are lockstep processing and error-correcting code on memory and buses, along with built-in test capabilities. The ASIL-C-rated NVIDIA DRIVE Xavier processor, paired with an ASIL-D-rated safety microcontroller and appropriate safety logic, can achieve the highest system rating, ASIL-D.
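The lockstep concept above can be sketched in a few lines. This is an illustrative model only, not NVIDIA hardware behavior: two redundant channels compute the same function on identical inputs, and a comparator flags any divergence so the system can transition to a safe state.

```python
# Minimal sketch of dual-channel lockstep execution (illustrative only).
# Two redundant channels run the same computation; a comparator detects
# any mismatch, which would indicate a hardware or logic fault.

def lockstep(channel_a, channel_b, inputs):
    """Run both channels on identical inputs and compare their outputs."""
    out_a = channel_a(inputs)
    out_b = channel_b(inputs)
    if out_a != out_b:
        # In a real system this would trigger a transition to a safe state.
        raise RuntimeError("lockstep mismatch: fault detected")
    return out_a

# Hypothetical example function: a clamped brake-torque calculation.
def torque(request):
    return max(0, min(100, request * 2))

result = lockstep(torque, torque, 30)  # both channels agree -> 60
```

In real lockstep hardware the comparison happens cycle-by-cycle in silicon rather than on final outputs, but the principle, detecting faults through agreement between redundant channels, is the same.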
Software: Integrates safety technology from key partners. NVIDIA DRIVE OS system software integrates BlackBerry QNX's 64-bit real-time operating system, which is ASIL-D safety certified, along with TTTech's MotionWise safety application framework, which encapsulates applications and isolates them from one another while providing real-time computing capability. NVIDIA DRIVE OS offers full support for Adaptive AUTOSAR, the open-standard automotive system architecture and application framework. The NVIDIA toolchain, including the CUDA compiler and TensorRT, uses ISO 26262 Tool Classification Levels to ensure a safe and robust development environment.
Algorithms: The NVIDIA DRIVE AV autonomous vehicle software stack performs functions such as ego-motion, perception, localization and path planning. To realize fail-operational capability, each function employs a redundancy and diversity strategy. For example, perception redundancy is achieved by fusing LiDAR, camera and radar data. Deep learning and computer vision algorithms running on the CPU, CUDA GPU, DLA and PVA further enhance redundancy and diversity. The NVIDIA DRIVE AV stack serves as a full backup to the self-driving stack developed by the automaker, enabling Level 5 autonomous vehicles to achieve the highest level of functional safety.
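The value of diverse-sensor redundancy can be shown with a toy fusion rule. This sketch is not the DRIVE AV implementation; it simply illustrates how combining independent LiDAR, camera and radar estimates keeps a single faulty sensor from corrupting the result:

```python
# Illustrative sketch of diverse-sensor redundancy (not NVIDIA's fusion
# algorithm). Each sensor independently estimates the distance to an
# obstacle; a median vote rejects a single outlying (faulty) reading.

import statistics

def fuse_distance(lidar_m, camera_m, radar_m):
    """Median vote across three diverse distance estimates, in metres."""
    return statistics.median([lidar_m, camera_m, radar_m])

# The radar reading is wildly wrong here, yet the fused estimate
# stays with the two agreeing sensors.
fused = fuse_distance(25.1, 24.8, 3.0)  # -> 24.8
```

Production perception stacks use far richer probabilistic fusion, but the underlying safety argument is this one: independent, diverse measurement paths make a single-point sensor failure survivable.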
Virtual Reality Simulation: A self-driving car is an extremely complex system built on state-of-the-art technologies. Proving that the system does what it is designed to do, captured by the term SOTIF (safety of the intended functionality), is a great challenge, and it must do so across a wide range of situations and weather conditions. Road testing is not sufficiently controllable, repeatable, exhaustive or fast, so a realistic simulation environment is essential. NVIDIA has created a virtual reality simulator, called NVIDIA AutoSIM, to test the DRIVE platform and simulate against rare conditions. Running on NVIDIA DGX supercomputers, NVIDIA AutoSIM is repeatable for regression testing and will eventually simulate billions of miles.