
Cambridge Consultants introduces EnfuseNet AI system for autonomous vehicles; leverages low-cost sensors and cameras

Cambridge Consultants, part of the Capgemini Group, has introduced EnfuseNet, an Artificial Intelligence (AI) system for autonomous vehicles. EnfuseNet fuses data from low-cost sensors and cameras—hardware costing just tens of dollars—to generate high-resolution depth data, the ideal reference point for autonomous systems.

The result is very low-cost, high-resolution depth data that enables vehicle manufacturers and automotive suppliers to rewrite the economics of vehicle autonomy.

The global automotive market is expected to decline by more than 3% during 2020 as a result of the COVID-19 outbreak, and may take years to recover. High technology costs mean the automotive industry has struggled to introduce advanced driver-assistance systems (ADAS) beyond luxury vehicles and into the mass market. Meanwhile, the ‘arms race’ to rack up millions of driven miles to capture real-world training data favors a small group of early leaders, blocking new entrants.

Against this background, Cambridge Consultants developed EnfuseNet, a low-cost, high-resolution vehicle perception technology. EnfuseNet will help vehicle manufacturers and mobility technology providers to realize a critical element of a self-driving system at a much lower cost, and to deliver autonomy to new and larger segments of the automotive industry.

Building an accurate and detailed depth point cloud—a 3D view around the vehicle—is critical for autonomous decision making. Today’s autonomous vehicles resolve depth data using two-dimensional camera inputs combined with LiDAR or radar. LiDAR remains the most accurate approach, but with unit costs for mechanical spinning LiDAR devices in the thousands of dollars, the technology is prohibitively expensive beyond the luxury market. Radar is lower cost but does not provide enough depth points to build a high-resolution image.

EnfuseNet takes data from a standard RGB camera and low-resolution depth sensors, which cost in the tens of dollars per device, and applies a neural network to predict depth at a vastly greater resolution than the original input. Uniquely, this depth information is per image pixel, enabling the system to provide depth data and a confidence prediction for every single object in an image.
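EnfuseNet's internals are proprietary, but the input/output contract described above can be illustrated with a minimal sketch: an RGB frame plus a sparse low-resolution depth map go in, and a dense per-pixel depth map with a per-pixel confidence map come out. All module names, layer sizes, and shapes below are assumptions for illustration, not the actual EnfuseNet architecture.

```python
# Minimal sketch of RGB + sparse-depth fusion (hypothetical, not EnfuseNet).
import torch
import torch.nn as nn

class DepthFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder ingests 3 RGB channels plus 1 sparse-depth channel.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Two fully convolutional heads: dense depth and per-pixel confidence.
        self.depth_head = nn.Conv2d(64, 1, kernel_size=1)
        self.conf_head = nn.Sequential(nn.Conv2d(64, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, rgb, sparse_depth):
        # rgb: (B, 3, H, W); sparse_depth: (B, 1, H, W), zeros where the
        # low-resolution sensor provides no reading.
        features = self.encoder(torch.cat([rgb, sparse_depth], dim=1))
        return self.depth_head(features), self.conf_head(features)

# Usage: one 480x640 frame with a sparse depth channel.
net = DepthFusionNet()
depth, confidence = net(torch.rand(1, 3, 480, 640), torch.rand(1, 1, 480, 640))
print(depth.shape, confidence.shape)  # both (1, 1, 480, 640): one value per pixel
```

The key point of the sketch is that both outputs have the same spatial resolution as the input image, so every pixel carries both a depth estimate and a confidence value, as the article describes.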

EnfuseNet was trained with synthetic data in a virtual learning environment, performing impressively when tested with real-world data. This enables OEMs and automotive suppliers to overcome the time, complexity and cost constraints of collecting real-world data to train their ADAS perception algorithms. Generating high-quality depth point clouds, with confidence down to the pixel level, means that EnfuseNet improves explainability and traceability, reducing the risk of ‘black box’ decision making in a safety-critical application.
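As a rough illustration of training on synthetic data (continuing the hypothetical `DepthFusionNet` sketch above), the loop below stands in random tensors for simulator-rendered RGB frames and ground-truth depth; a real pipeline would pull these from a virtual environment, and the masking rate and optimizer settings are assumptions.

```python
# Sketch of a synthetic-data training step (placeholder data, not a real pipeline).
import torch

net = DepthFusionNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

for step in range(100):
    rgb = torch.rand(4, 3, 240, 320)        # stand-in for rendered camera frames
    gt_depth = torch.rand(4, 1, 240, 320)   # stand-in for dense renderer ground truth
    mask = (torch.rand_like(gt_depth) < 0.02).float()
    sparse_depth = gt_depth * mask          # simulate a sparse, low-cost depth sensor
    pred_depth, confidence = net(rgb, sparse_depth)
    loss = torch.abs(pred_depth - gt_depth).mean()  # simple L1; see the loss sketch below
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the ground truth comes from the renderer, dense per-pixel supervision is free, which is exactly the cost advantage over collecting and annotating real-world driving data.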

The underlying model is based on a completely novel architecture that fuses Convolutional Neural Networks (CNNs), Fully Convolutional Neural Networks (FCNs), pretrained elements, transfer and multi-objective learning and other approaches to optimize depth prediction performance.
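The article does not disclose the loss functions, so the following is a hedged sketch of the multi-objective idea only: one term for depth accuracy and one that trains the confidence head to track the depth error. The L1 depth loss, the `exp(-error)` confidence target, and the weighting are all assumptions; the pretrained and transfer-learning elements mentioned above are not shown.

```python
# Hypothetical multi-objective loss: depth regression plus confidence calibration.
import torch
import torch.nn.functional as F

def multi_objective_loss(pred_depth, confidence, gt_depth, conf_weight=0.1):
    # Depth objective: per-pixel L1 error against (synthetic) ground truth.
    depth_err = torch.abs(pred_depth - gt_depth)
    # Confidence objective: high confidence should coincide with low error.
    # The confidence head learns to approximate exp(-error), a common proxy;
    # detach() keeps this term from back-propagating into the depth head.
    conf_target = torch.exp(-depth_err).detach()
    conf_loss = F.mse_loss(confidence, conf_target)
    return depth_err.mean() + conf_weight * conf_loss
```

Training the confidence head against the observed error is one plausible way to obtain the pixel-level confidence predictions that the article credits with improving explainability.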

Comments

Jason Burr

Why not take this a step further? Why not take a cue from how smart devices handle voice controls and offload the actual computational work to a server farm? Siri and Alexa don't work without internet because the actual work is not done on your phone or Echo.

If we are to go to automated driving, then why not have the computational load handled on a server farm, which would also serve to connect the cars to each other for better reaction times and integrated traffic flows? The cars have enough processing power to gather data and organize it. Then A.I. can coordinate with the data coming from all the other sources and return a proper course of action.

Initial rollout would be to larger metro markets and major arterial roadways. As infrastructure improved and expanded, auto drive could be used in more places. In the meantime, the limited "auto pilot" functions currently available would handle driving outside the built-up areas.

I think if we require helmets for all bicycle riders (a lot of US cities do), then maybe a chip that broadcasts position and size over short-range RF would help autonomous cars detect them. Like a Citizen watch that uses motion to keep itself charged, a small device could transmit only size (e.g. a kid's helmet) and whether the rider is moving or stopped, shutting off after a period of no motion. A similar system would work for smartphones, though even more info could be available if the owner wishes to provide it.

SJC_1

Cast should be "cost" in the title.

Dave Gladwin

We're reaching a period where using AI to fuse data from multiple sensors is a practical reality. The human brain does this all the time and we don't even realise it. For example, our brain fuses motion sensing with visual information, and we only realise it when they are out of sync, e.g. on a VR roller coaster ride.
Sensor fusion will provide downstream systems with a much richer data set upon which to work.

Dave Gladwin

To respond to Jason's question: at the moment the consensus in the automotive industry is to conduct all of the ADAS perception, prediction and planning decisions locally, that is, in the ADAS ECU. The use of cloud computing today is fraught with technical difficulties; a safety system such as ADAS cannot rely on the cellular network, with its uncertain latencies for off-board communications, to make critical real-time decisions. In terms of another layer of safety for self-driving vehicles, there is certainly a place for vehicle-to-vehicle communications to exchange important safety information such as accidents or road works on a journey, or indeed when a vehicle is experiencing technical issues. V2X and C-V2X standards are being rolled out to provide this capability to new vehicles.

More broadly, in our view the automotive market will need to move beyond cellular technology as the means of providing this kind of mission-critical communications to ensure safer driving journeys in future. As new satellite constellations are launched, driving down the costs of satellite communications, we expect that satellite links, along with cellular and other land-based communications infrastructure, will serve as the means to provide truly mission-critical communication for autonomous vehicles, not just in cities, but in all locations, on and off road. When that day arrives, cloud computing could play an important role in meeting the computing requirements of a vehicle.
