Next-gen Audi A8 to feature MIB2+, series debut of zFAS domain controller, Mobileye image recognition with deep learning; Traffic Jam Pilot

Audi’s next-generation A8, premiering this year, will feature the first implementation of the MIB2+, the latest version of the Modular Infotainment Platform (MIB). The key element in this new implementation of the MIB is NVIDIA’s Tegra K1 processor (earlier post), which makes new functions possible and has the computing power needed to support several high-resolution displays—including the second-generation Audi virtual cockpit. Onboard and online information will merge, making the car part of the cloud to a greater degree than ever.

The A8 also marks the series debut of the central driver assistance controller (zFAS), which also features the K1; in the future, the X1 processor (earlier post) will be applied in this domain controller. The zFAS, developed in collaboration with TTTech, Mobileye, NVIDIA and Delphi, also integrates a Mobileye image processing chip. (Earlier post.)


Top: First-generation zFAS shown at CES 2014. Bottom: Two views of the second-generation zFAS board—smaller than a tablet computer—shown at CES 2016.

In the new Audi A8, Audi and Mobileye are demonstrating their next level of development, with image recognition that uses deep learning methods for the first time. This significantly reduces the need for manual training methods during the development phase. Deep neural networks enable the system to be self-learning when determining which characteristics are appropriate and relevant for identifying the various objects. With this methodology the car can even recognize empty driving spaces, an important prerequisite for safe, piloted driving.

With this supporting technology, Audi will offer for the first time in a series production model the new Traffic Jam Pilot function. This is the first piloted driving function in Audi series production that will enable the driver to let the vehicle take over full control at times. With this step the stage is set to begin the next decade with higher levels of automation in a growing number of driving situations.

MIB2+. The scalable concept of the modular infotainment platform (MIB) makes it possible to update the hardware at short intervals. It lets Audi react quickly and flexibly to the fast pace of innovation in consumer electronics and optimally exploit the potential of new generations of chips. The domain architecture that Audi uses in the MIB is a promising approach for the overall electrical/electronics architecture in the car. In the medium term, a few intelligently networked domain computers will replace the countless controllers to form a central computing unit.

The Modular Infotainment Platform (MIB), which was introduced in 2013, featured the Tegra 2 processor from NVIDIA. The MIB2 followed in the Audi Q7 in 2015 and is the foundation for infotainment in current Audi production models. It uses an NVIDIA T30 processor, a quad-core chip from the Tegra 3 series. With a clock speed of more than 1 GHz and a fast graphics unit, it can drive two displays. The T30 processor works together with a 3D graphics program from the specialist company Rightware to display spectacular three-dimensional images.

The current generation Audi virtual cockpit in the TT Coupé based on MIB2.

MIB2+ offers significantly more computing performance to support multiple high-resolution displays. It also merges onboard and online information, making the car more a part of the cloud than ever before. The integration of wireless communication into the car continues to play a decisive role here.

With the MIB2+, wireless communication is based on the new LTE Advanced standard. This allows for improved convenience functions, such as faster transfer of online content and better call quality. It is also a prerequisite for the implementation of Car-to-X (C2X) services and, in the longer term, for the realization of swarm intelligence and automated driving.

With LTE Advanced, MIB2+ achieves maximum transmission rates of 300 Mbit/s for download and 50 Mbit/s for upload, making it around three times faster than the previous MIB2. Upgrading of the cellular phone network has already begun in many countries.

Another strength of the MIB2+ is mobile telephony using VoLTE (Voice over LTE), in which data packets are transported via the IP protocol. This new technology improves voice quality, accelerates phone connections and enables simultaneous use of high-resolution, online voice telephony and high-speed data transmission. If network conditions are poor, the Audi wireless communications module can use multiple frequency blocks in the LTE Advanced network simultaneously (carrier aggregation) to establish a fast data connection.
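The throughput figures above follow from how carrier aggregation works: the modem bonds several frequency blocks (component carriers), and their peak rates add up. A minimal back-of-the-envelope sketch, using the nominal LTE peak of roughly 150 Mbit/s for one 20 MHz downlink carrier (a standard figure, not a measured Audi value):

```python
# Illustrative model of LTE Advanced carrier aggregation: peak downlink
# scales with the number of bonded 20 MHz component carriers.
# The per-carrier figure is the nominal LTE peak, not an Audi measurement.
PEAK_DL_PER_20MHZ_CARRIER = 150  # Mbit/s for a single 20 MHz downlink carrier

def aggregated_downlink(n_carriers):
    """Nominal peak downlink rate when n_carriers blocks are aggregated."""
    return n_carriers * PEAK_DL_PER_20MHZ_CARRIER

print(aggregated_downlink(2))  # 300 -> matches the MIB2+ figure quoted above
```

Aggregating two 20 MHz blocks thus yields the 300 Mbit/s peak cited for the MIB2+, double the rate of a single-carrier connection.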

The LTE standard—and in the future LTE Advanced—also plays an important role for Audi’s C2X services. In the medium-term, it will be used to transfer most of the information being sent to the car via wireless networks, such as information about construction zones. This information flows into the new HERE HD Live Map, a digital map serving as the basis for the piloted driving of the future.

With C2X, there are strong market-specific preferences with respect to the base technology. The American market, for example, uses the 802.11p standard, which has already been specified and which Audi has already successfully tested. In other markets such as China, the 5G standard will likely establish itself, Audi said.

Audi is also entering a new dimension in voice control, known as the SDS (speech dialog system). MIB2+ expands the system into a hybrid solution that incorporates, and if need be compares, both on-board and online recognition. Online and offline voice control thus augment one another seamlessly.

In online recognition, the driver’s speech input is sent as a data packet to voice recognition software in the cloud over the cellular phone network. If the on-board and the online recognition systems both provide a response, the dialog manager compares them. In choosing the more plausible response, it uses such criteria as the car’s location and previous user queries.
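The arbitration step described above—scoring two candidate responses against context signals such as location and previous queries—can be sketched roughly as follows. All names and the scoring scheme are illustrative assumptions, not Audi's actual software:

```python
# Hypothetical sketch of hybrid speech-dialog arbitration: when both the
# on-board and the online recognizer answer, pick the more plausible result.
from dataclasses import dataclass

@dataclass
class Recognition:
    text: str          # recognized request
    confidence: float  # engine confidence, 0.0 - 1.0
    source: str        # "onboard" or "online"

def pick_response(onboard, online, context_terms=()):
    """Return the more plausible of the two results.

    Context signals (e.g. nearby POI names, terms from previous user
    queries) add a small bonus, mirroring the criteria the article names.
    """
    candidates = [r for r in (onboard, online) if r is not None]
    if not candidates:
        return None  # neither engine answered

    def score(r):
        bonus = 0.1 if any(t in r.text for t in context_terms) else 0.0
        return r.confidence + bonus

    return max(candidates, key=score)

onboard = Recognition("navigate to Ingolstadt", 0.80, "onboard")
online = Recognition("navigate to Audi Forum Ingolstadt", 0.75, "online")
best = pick_response(onboard, online, context_terms=["Audi Forum"])
print(best.source)  # online: context bonus (0.75 + 0.1) beats 0.80
```

If connectivity drops and the online engine returns nothing, the on-board result is used directly—the fallback behavior that lets online and offline recognition augment one another.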

The new voice control system understands many expressions from everyday speech, thus extending the voice control spectrum. Along with point of interest (POI) searches, it includes additional functions such as weather, news and online radio.



Account Deleted

The Tegra K1 is from 2013. Why not use the new PX2 from NVIDIA that Tesla uses and that is 40 times more powerful? Audi's newest ADAS is obsolete on arrival. Nor can its software be updated over the air like it can in Tesla's cars. Audi has ten times more resources than Tesla and still they can't get it right with this all-important tech. They need to start firing the executives in charge of Audi's autopilot program because they are not up to the task.
