New QNX software platform enables ADAS and automated driving

QNX Software Systems Limited, a subsidiary of BlackBerry Limited, earlier this month introduced the QNX Platform for ADAS (advanced driver assistance systems), expanding its portfolio of automotive software products. The QNX Platform for ADAS is scheduled for general release in Q2 2016.

Designed for scalability, the platform will enable automotive companies to build a full range of automated driving systems, from informational ADAS modules that provide a 360° surround view of the vehicle, to sensor fusion systems that combine data from multiple sources such as cameras and radar, to high-performance processors that make control decisions in fully autonomous vehicles.
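
At its simplest, sensor fusion of this kind means associating detections from independent sensors and reconciling them into a single object list. The C++ sketch below is a purely illustrative, hypothetical example of that idea (it is not QNX's framework or any partner's API): camera and radar detections are matched by nearest bearing within a gate, and the radar range, usually the more accurate of the two, is weighted more heavily.

```cpp
// Hypothetical sensor-fusion sketch: associate camera and radar detections
// by nearest-neighbor gating on bearing, then blend their range estimates.
// All structures and thresholds are illustrative, not a real ADAS API.
#include <cmath>
#include <iostream>
#include <vector>

struct Detection {
    double bearing_deg;  // angle to target, degrees
    double range_m;      // distance to target, metres
};

struct FusedObject {
    double bearing_deg;
    double range_m;
};

// Pair each camera detection with the closest radar detection within a gate.
std::vector<FusedObject> fuse(const std::vector<Detection>& camera,
                              const std::vector<Detection>& radar,
                              double gate_deg = 3.0) {
    std::vector<FusedObject> fused;
    for (const auto& c : camera) {
        const Detection* best = nullptr;
        double best_diff = gate_deg;
        for (const auto& r : radar) {
            double diff = std::fabs(c.bearing_deg - r.bearing_deg);
            if (diff < best_diff) { best_diff = diff; best = &r; }
        }
        if (best) {
            // Radar range is usually more accurate; weight it more heavily.
            fused.push_back({c.bearing_deg, 0.3 * c.range_m + 0.7 * best->range_m});
        }
    }
    return fused;
}

int main() {
    std::vector<Detection> camera = {{-2.0, 38.0}, {10.5, 62.0}};
    std::vector<Detection> radar  = {{-1.6, 35.2}, {25.0, 80.0}};
    for (const auto& obj : fuse(camera, radar))
        std::cout << "object at " << obj.bearing_deg << " deg, "
                  << obj.range_m << " m\n";
    return 0;
}
```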


Built on operating system (OS) technology proven in safety-critical control systems, the QNX Platform for ADAS can help customers minimize system-level safety certification efforts, while reusing software assets across projects for greater return-on-investment. The platform also provides access to an ecosystem of ADAS technology suppliers that offer complementary hardware and software solutions, enabling faster time-to-market.

Apple moving next door?
The Ottawa Business Journal reported that Apple has leased office space, suggested to be devoted to R&D, in the Kanata Research Park in Ottawa—home to QNX Software Systems.

The core of the new platform is the QNX OS for Safety, an ISO 26262-certified OS that supports all of the automotive safety integrity levels (ASIL), from ASIL A to ASIL D, required by automated driving systems.

The OS is built on a modular microkernel architecture that simplifies the integration of new sensor technologies and purpose-built ADAS processors, while providing fault resilience.
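
QNX's native message-passing IPC is not reproduced here, but the isolation idea behind a microkernel design can be sketched with portable POSIX primitives, which the OS also supports. In this hypothetical example, a "sensor driver" child process publishes range samples to a POSIX message queue and a consumer process reads them; because each runs in its own address space, a fault in the driver cannot corrupt the consumer. The queue name and values are invented for the demo.

```cpp
// Illustrative only: two cooperating processes exchanging sensor samples
// through a POSIX message queue. QNX Neutrino uses its own message-passing
// IPC; this portable sketch just shows the process-isolation idea.
#include <cstdio>
#include <fcntl.h>
#include <mqueue.h>
#include <sys/wait.h>
#include <unistd.h>

static const char* kQueue = "/demo_radar";   // hypothetical queue name

int main() {
    mq_attr attr{};
    attr.mq_maxmsg  = 8;
    attr.mq_msgsize = sizeof(double);
    mqd_t mq = mq_open(kQueue, O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {
        // Child: "sensor driver" process publishing range readings.
        for (double range = 50.0; range > 47.0; range -= 1.0)
            mq_send(mq, reinterpret_cast<const char*>(&range), sizeof(range), 0);
        return 0;                       // a crash here would not touch the parent
    }

    // Parent: "fusion" process consuming readings in its own address space.
    for (int i = 0; i < 3; ++i) {
        double range = 0.0;
        if (mq_receive(mq, reinterpret_cast<char*>(&range), sizeof(range), nullptr) > 0)
            std::printf("range sample: %.1f m\n", range);
    }
    waitpid(pid, nullptr, 0);
    mq_close(mq);
    mq_unlink(kQueue);
    return 0;
}
```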

QNX Platform for ADAS 1.0. The QNX Platform for ADAS has been built as a foundation to support a variety of ADAS and active safety applications.

The QNX Platform for ADAS includes pre-integrated reference implementations for building multi-camera vision systems and V2X applications (vehicle-to-vehicle and vehicle-to-infrastructure communications).

  • The vision system implementation is based on the platform’s multi-camera framework, which simplifies system design by managing the complexities of camera control on behalf of applications.

  • The V2X implementation uses the platform’s remote V2X interface, which handles V2X notifications from surrounding vehicles and infrastructure elements such as traffic signals.
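
The platform's actual remote V2X interface is not documented in this article, so the following C++ sketch is only a conceptual illustration of handling basic-safety-message style notifications: a listener dispatches incoming notifications to subscribed handlers, and one handler raises a warning when another road user is close and closing quickly. The structure, names, and thresholds are all assumptions made for the example.

```cpp
// Hypothetical V2X notification handling: not the QNX remote V2X interface,
// just an illustration of reacting to basic-safety-message style data.
#include <cmath>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct V2XNotification {
    std::string sender_id;   // e.g. another vehicle or a traffic signal
    double distance_m;       // distance from our vehicle
    double closing_speed_ms; // positive means the gap is shrinking
};

class V2XListener {
public:
    using Handler = std::function<void(const V2XNotification&)>;
    void subscribe(Handler h) { handlers_.push_back(std::move(h)); }
    void dispatch(const V2XNotification& n) {
        for (auto& h : handlers_) h(n);
    }
private:
    std::vector<Handler> handlers_;
};

int main() {
    V2XListener listener;
    listener.subscribe([](const V2XNotification& n) {
        // Warn when another road user is close and closing fast.
        double time_to_contact = n.closing_speed_ms > 0
                                     ? n.distance_m / n.closing_speed_ms
                                     : INFINITY;
        if (time_to_contact < 3.0)
            std::cout << "warning: " << n.sender_id << " closing, "
                      << time_to_contact << " s to contact\n";
    });

    listener.dispatch({"vehicle-42", 25.0, 12.0});  // simulated incoming message
    listener.dispatch({"signal-7", 120.0, 0.0});
    return 0;
}
```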

The QNX Platform for ADAS includes a high availability manager to perform rapid recovery from software faults, POSIX APIs to simplify migration of existing ADAS applications, fast boot times to enable “instant on” systems, time partitioning to guarantee CPU cycles for critical software processes, programming interfaces to simplify integration of third-party AUTOSAR environments, as well as secure networking protocols and file systems.
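
QNX does not publish its high availability manager API in this announcement, but the general pattern it implements, a supervisor that detects a failed process and restarts it quickly, can be sketched with the plain POSIX calls the platform also exposes. The "service" below is a stand-in that deliberately crashes on its first attempts.

```cpp
// Generic supervisor/respawn pattern in plain POSIX, illustrating the kind of
// rapid fault recovery a high availability manager provides. Not QNX's API.
#include <cstdio>
#include <cstdlib>
#include <sys/wait.h>
#include <unistd.h>

// Stand-in for an ADAS service; here it simply aborts to simulate a fault.
static void run_service(int attempt) {
    std::printf("service attempt %d started (pid %d)\n", attempt, getpid());
    if (attempt < 3) std::abort();      // simulated crash on early attempts
    std::printf("service attempt %d running normally\n", attempt);
    _exit(0);
}

int main() {
    for (int attempt = 1; attempt <= 3; ++attempt) {
        pid_t pid = fork();
        if (pid == 0) run_service(attempt);       // child: the monitored service

        int status = 0;
        waitpid(pid, &status, 0);                  // supervisor: wait for exit
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0) {
            std::printf("service completed cleanly\n");
            break;
        }
        std::printf("service failed, restarting\n"); // rapid recovery step
    }
    return 0;
}
```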

The QNX Platform for ADAS combines our experience in building software for more than 60 million cars with our 30-year history in safety-critical systems and our deep expertise in safety standards like ISO 26262 and IEC 61508. Through this unique pedigree, QNX is ideally positioned to help automakers and tier one suppliers move from research to production in the still-uncharted market of automated driving.

—John Wall, Senior Vice President and Head of QNX Software Systems

Complementary partner technologies. Key hardware and software suppliers are supporting the QNX Platform for ADAS with systems-on-chip (SoCs), V2X modules, vision algorithms, communications middleware, and other complementary technologies. Pre-integrated partner technologies scheduled for the first release of the platform include:

  • SoCs: Intel Atom Processor C2000 product family, Texas Instruments (TI) TDA2x processor family.

  • ADAS platforms: NVIDIA DRIVE Automotive Platform for computer vision, deep learning, and sensor fusion.

  • ADAS vision: Itseez ADAS algorithms for pedestrian detection, forward collision warnings, traffic sign recognition, and lane detection (a rough illustrative sketch follows this list); and TI’s vision libraries for front cameras, surround-view systems, sensor fusion, and smart backup cameras.

  • V2X: Cohda Wireless’s widely deployed V2X MK5 On Board Units, software stacks, and applications, with remote interface for data analysis and simulated GPS for performing multi-vehicle simulation.
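
Itseez is best known as the maintainer of OpenCV, so a rough feel for the pedestrian-detection building block listed above can be had from OpenCV's stock HOG people detector. This is not the Itseez ADAS library itself, and a production front-camera system would involve far more than a single-frame detector; the sketch simply shows the kind of primitive such algorithms build on.

```cpp
// Minimal pedestrian detection with OpenCV's built-in HOG people detector.
// Illustrative only; not the Itseez ADAS algorithms referenced in the article.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: detect <image>\n"; return 1; }
    cv::Mat frame = cv::imread(argv[1]);
    if (frame.empty()) { std::cerr << "could not read image\n"; return 1; }

    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    std::vector<cv::Rect> people;
    hog.detectMultiScale(frame, people);        // scan the frame at several scales

    for (const auto& box : people)
        cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
    std::cout << "detected " << people.size() << " pedestrian candidate(s)\n";
    cv::imwrite("detections.png", frame);
    return 0;
}
```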

Also, Luxoft Holding, a provider of software development services and IT solutions, announced a partnership with QNX focused on ADAS systems used by automotive original equipment manufacturers (OEMs) and Tier 1 suppliers.

As part of the initiative, Luxoft will leverage its expertise in road-model based computer vision algorithms for tracking of objects such as vehicles, pedestrians, building facades, and road signs to bring a robust road scene reconstruction engine to the QNX OS platform. In working on the engine, Luxoft will utilize its proprietary CVNAR software framework, which incorporates solutions for augmented reality such as augmented guidance, navigation, points of interest (POIs), and destination highlighting.
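
Luxoft has not published CVNAR's internals, but object tracking of the kind described, maintaining a smooth estimate of a detected vehicle's position from frame to frame, is commonly built on a Kalman filter. The sketch below uses OpenCV's cv::KalmanFilter with a constant-velocity model; the matrices, noise values, and simulated detections are all illustrative assumptions rather than anything from CVNAR.

```cpp
// Constant-velocity Kalman filter tracking a single detected object across
// frames, as an illustration of the tracking layer in a road-scene engine.
// Uses OpenCV's cv::KalmanFilter; all tuning values are made up for the demo.
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    // State: [x, y, vx, vy]; measurement: [x, y] (e.g. a detector's box centre).
    cv::KalmanFilter kf(4, 2, 0, CV_32F);
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, 1, 0,
        0, 1, 0, 1,
        0, 0, 1, 0,
        0, 0, 0, 1);
    cv::setIdentity(kf.measurementMatrix);
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));
    kf.statePost = (cv::Mat_<float>(4, 1) << 0, 0, 0, 0);

    // Simulated detector output: the object drifts right and slightly down.
    float detections[][2] = {{1.0f, 0.1f}, {2.1f, 0.2f}, {2.9f, 0.35f}, {4.2f, 0.4f}};
    for (const auto& d : detections) {
        kf.predict();                                        // project state forward
        cv::Mat meas = (cv::Mat_<float>(2, 1) << d[0], d[1]);
        cv::Mat est = kf.correct(meas);                      // blend with measurement
        std::printf("estimate: x=%.2f y=%.2f vx=%.2f\n",
                    est.at<float>(0), est.at<float>(1), est.at<float>(2));
    }
    return 0;
}
```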

Luxoft’s CVNAR™, a computer vision and augmented reality software framework, brings together a unique set of features and capabilities and can be used in two different ways: it can be adapted to a particular OEM’s requirements and delivered as a ready-to-use solution on already supported systems-on-chip (SoCs), or it can be provided as a hardware-independent embedded solution that utilizes both the CPU and the GPU.

The CVNAR™ framework can be easily integrated with head-up and LCD displays, and supports smart glasses. Using eye-tracking algorithms and the vehicle’s sensor data, the augmented navigation provides navigational hints, highlights road obstacles and destinations, and more.

QNX Software Systems offers a portfolio of infotainment, telematics, safety, and acoustics solutions deployed in more than 60 million vehicles worldwide. More than 40 automotive OEMs use QNX software, including the QNX Neutrino OS; the QNX OS for Safety (compliant to ISO 26262, up to ASIL D); the QNX CAR Platform for Infotainment; and QNX acoustics middleware, for their head units, digital instrument clusters, advanced driver information systems, hands-free systems, and connectivity modules.

Comments

mahonj

It will be interesting to see how these systems pan out.
It should be easy enough to do: all you have to control are the engine, braking, and steering (and indicators, of course).
You will have to hook up forward-facing lidar, radar, and stereo (at least) cameras, plus some rearward-facing radars and GPS.
They should be able to specify a minimum set of sensors and their performance, and have a unified interface to the car controls.
The scene analysis and understanding is probably the hardest part, but this could be used for many car/sensor situations.

Thus, Google could do an "android" on it and provide a platform (like QNX) that any car company could use. Car companies would be well advised to use someone like Google's autonomous driving system because the cost of developing the s/w would be enormous (in the 10's of billions, imo).

Account Deleted

I think the stuff that took decades of development and tens of billions of R&D dollars was sensors and microprocessors. Now they have finally reached a level where they are good enough to make self-driving everything, from cars to rockets to vacuum cleaners. The final step of adjusting these sensors and chips for autonomous cars is not that hard or costly. Currently only 50 software engineers and 150 hardware engineers work on Tesla's autopilot. They expect to be done in 2018, by which time the team may have grown to 600 people. Tesla will spend a few hundred million USD to finish development of the fully autonomous autopilot. Tesla, as I understand it, only buys sensors and CPUs from other companies. They make the circuits, software, and data center for the autopilot internally.

Arnold

@Henrik,
Your comments re hardware sound close to the mark.
Once the tooling is brought on line, the rate of output can see the per-unit cost drop to very little.

When the need for those widgets decreases, often owing to being overtaken by improved versions or methods, or even market saturation, profit margins can be hard to sustain.

That means that producers are always looking to refine and bring invention to market.

It does not mean that the old widgets are useless, but they become a cheap source of quality components waiting for engineers and innovators to discover new applications. That can go on for decades, in fact as long as people keep seeking solutions.

So cost should not be of particular ongoing concern.

The software team at the level of system integration needs to be kept small enough to maintain proper communication and understanding of all the arms being developed.

A larger number of small teams are better placed to develop alternative models or approaches to the same problem.

Much of the hardware exists in some form; innovation builds on prior knowledge and experience. It can come from almost anywhere, as we see: there are many bright tertiary-educated students and their mentors, secondary students producing 'out of the box' solutions, as well as shed tinkerers with determination and vision finding new ways of doing things.

Then there is the R&D establishment at its 9 to 5.

Managing the whole boggle is a bit harder to fathom.

A bit like electricity: we feel it, know it's there, and find very quickly we can't live without it.

To understand it, we need to take it on evidence-based trust.

mahonj

@Henrik,
The sensors will be cheap(ish) once they ramp up (lidar is no joke, but once they get the volume going, the price will fall); radar and vision are already cheap, as is raw processor power.

However, I would not underestimate the image understanding and road reading task ahead of those guys. Getting a system to work on Motorways and clear roads is pretty easy - getting it to work on crowded city and suburban streets will not be easy.
Even Google, the leader of the group, is still crawling around at 25 mph.

I certainly think it will happen, but I do not think they will have anything like free driving by 2018. (But you never know, someone might make a breakthrough).

HarveyD

Human drivers take a minimum of 250 milliseconds to react. Early electronic systems can already react twice as fast, and they could become much faster soon, while humans will not get much faster.

Automated driving will soon get better than the average driver, and soon thereafter better than the best human drivers.

Resistance is futile.

Account Deleted

The 2018 timeline is when Musk thinks Tesla will be ready with an autonomous autopilot for their cars. He may be too optimistic; that has been seen before. However, it fits with the 2018 timeline the CTO of Mobileye has for launching a suite of sensors for fully autonomous driving. Tesla uses sensors from Mobileye. I think Tesla and Mobileye are working very closely together, so that Tesla’s engineers are already using prototypes of that 2018 sensor suite. Musk has also taken the unusual step of personally supervising Tesla’s development of the autopilot program. Programmers and engineers in that program report directly to Musk. The autonomous autopilot is Tesla’s highest priority, above everything else including the Model 3 launch and the gigafactory.

I know everybody says that Google is the leader in autonomous driving. I also used to think like that, but in the past 2 months or so I have grown convinced that Tesla is the leader, because they are the only carmaker in the world with over 50k cars on the roads (and rapidly growing) that have a partially autonomous autopilot, can receive software updates OTA, and are linked to a back-end data center that monitors the fleet and can use that data to improve the reliability of Tesla’s autopilot. Nobody else has that. Google only has 60 cars (others have fewer), but they are not in the hands of real consumers, so it does not really count. Tesla’s current autopilot can drive on most roads in most weather conditions. What Tesla lacks is an autopilot that can drive on all roads in nearly all weather conditions and do it significantly more safely than the average human driver. I think Tesla can do it by 2018. After that it will (as Harvey says) be a matter of making the system better and better, to the point where it practically never fails and can beat even the best human drivers by a very large margin, and also do things that humans can’t do, like driving in total darkness or extreme fog or rain.

Arnold

Question whether they will have competitions like man vs machine chess? I'll hedge my bet.

Aviation autopilots aren't so good as to land a jet on the Hudson - yet they are very capable of learning but thinking?

JMartin

Arnold: Every technology has had man vs. machine at some point, back to John Henry. Actually back to the horse, but that may not be called technology. At first man wins, but over time, technology dominates. It will again.

SJC

Tech is used as a tool not a replacement. We could dig ditches with 100 people and shovels or use two people and a trencher. People that say "save jobs" don't understand progress. We design, engineer, manufacture, maintain, repair and operate the more efficient machinery instead of drudgery.

HarveyD

Autopilots on large commercial planes have been able to go (safely) from parking bay to parking bay, from any properly equipped airport to another, avoiding others in the air and during ground manoeuvres, for the last 20 years or so.

Normal human resistance to change, IATA, ICAO, and pilots' unions are the major resistance points.

Local regulations and current human drivers will resist the early use of autonomous drive vehicles, even if they are 4 to 8 times safer. The world may have to live with (human driver made) 1.5 million road fatalities/year for a while.
