Continental acquires ASL Vision for 360-degree surround view technology

360-degree surround detection optimally detects the vehicle's entire surroundings and helps drivers safely master complex traffic situations. Source: Continental.

International automotive supplier Continental has acquired ASL Vision, a provider of embedded video image processing and transmission solutions based in Lewes, England. ASL has developed the ASL360 Surround View camera system—a multi-camera system that processes video from multiple ultra-wide-angle cameras in a single, high-performance Electronic Control Unit (ECU). The companies agreed not to disclose the price of the acquisition.

With the acquisition, Continental is adding “a strategically important building block”, 360-degree surround detection, while at the same time further developing its competence in the camera sector in a targeted way, said Friedrich Angerbauer, Head of the Advanced Driver Assistance Systems (ADAS) Business Unit in the Continental Chassis & Safety Division. Continental is a leading supplier of advanced driver assistance systems; the development of products and systems for automated driving is one of the central themes of its long-term technology strategy. (Earlier post.)

“Surround view” systems optimally detect the vehicle's entire surroundings and will serve as an addition to the ContiGuard safety concept. ContiGuard integrates active and passive safety systems, which are becoming more effective and comprehensive through surround sensors and their coordinated interactions.

In particular, the fusion of radar and camera will allow new and expanded features, and it is an important prerequisite for implementing automated driving.

By acquiring ASL Vision, we are accelerating the existing growth activities in the area of advanced driver assistance systems. “Surround view” will also allow us to safely master complex traffic situations such as lane changes, passing other cars and challenging parking circumstances.

—Dr. Ralf Cramer, Member of the Board of Continental AG and President of the Continental Chassis & Safety Division

As a general rule, a surround-view system consists of three to five cameras as well as an electronic processing unit. ASL Vision offers a range of technological solutions.

One is a camera system like those already available in the passenger-vehicle sector: the camera images are shown on the dashboard display and overlaid with intelligent graphics, such as the projected vehicle trajectory when reversing. ASL Vision also has a successful system that was developed especially for the aftermarket and is already in use in the industrial sector and in mining, for instance.

In the industrial market, the ASL360 offers the operator a bird's-eye real-time view of the vehicle and its surroundings. The ASL360 surround view system synthesizes a bird's-eye image of the vehicle using multiple ultra-wide-angle cameras mounted on the front, sides and rear of the vehicle. Ordinarily, the fisheye distortion renders the views from such cameras unusable, but ASL360 deploys advanced signal processing to produce usable geometry.
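ASL's actual algorithm is proprietary and not described in the article, but the general technique behind such systems—inverse perspective mapping with a fisheye lens model—can be sketched. The following is a minimal, hypothetical numpy illustration assuming a single downward-facing camera at known height and the common equidistant fisheye model (r = f·θ); a production system would calibrate each camera, correct lens distortion precisely, and blend several views into one composite.

```python
import numpy as np

def equidistant_project(X, Y, h, f, cx, cy):
    """Project ground point(s) (X, Y), in metres relative to the point directly
    below a downward-facing camera at height h, into pixel coordinates using
    the equidistant fisheye model r = f * theta."""
    theta = np.arctan2(np.hypot(X, Y), h)   # angle off the optical axis
    phi = np.arctan2(Y, X)                  # azimuth around the optical axis
    r = f * theta                           # radial distance in the image
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def birds_eye_view(img, h, f, scale, out_size):
    """Synthesize a top-down view by inverse perspective mapping: for each
    output pixel, compute the ground point it represents, project that point
    into the fisheye image, and sample there (nearest neighbour)."""
    H, W = out_size
    cy_img = (img.shape[0] - 1) / 2.0       # assume principal point at centre
    cx_img = (img.shape[1] - 1) / 2.0
    # Ground-plane grid centred under the camera, `scale` metres per pixel.
    ys, xs = np.mgrid[0:H, 0:W]
    X = (xs - (W - 1) / 2.0) * scale
    Y = (ys - (H - 1) / 2.0) * scale
    u, v = equidistant_project(X, Y, h, f, cx_img, cy_img)
    ui = np.clip(np.rint(u).astype(int), 0, img.shape[1] - 1)
    vi = np.clip(np.rint(v).astype(int), 0, img.shape[0] - 1)
    return img[vi, ui]
```

Resampling the output grid through the lens model, rather than trying to "flatten" the fisheye image directly, is what makes the otherwise unusable wide-angle geometry usable: every output pixel corresponds to a defined point on the ground plane.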

In the future, Continental will further develop intelligent surround-view systems with features such as free-space and object recognition, as well as curb-edge detection for highly precise parking.

All 53 ASL Vision employees from the headquarters in Lewes, England, as well as at its German site in Kronach, Bavaria, will join Continental, and the company is planning to add further engineers in both England and Germany for these technology areas.

The Lewes site will consolidate the competences in systems, software and algorithms for surround view. At the same time, the research and development center in Ulm will become an additional site for 360-degree camera systems, working on issues such as hardware, cameras and application projects.



Those cameras will have a much better view than human drivers ever had. When properly coupled with the vehicle controls, such systems will soon outdo human drivers and increase road safety.


Transparent 4K displays, glued to the windshield, could keep the human driver better informed of what is happening around the vehicle?


The human eye sees only about 2 degrees with clear detail so the mind builds a dynamic model of our surroundings.

The world in front of the car is constantly and instinctively scanned to keep this model as up to date as instinct thinks we need it to be.

This model actually covers the full sphere, front, back, sides, above and below - but except for where we are scanning (typically in front) the model may be wildly out of date, inaccurate or simply "assumed".

When we are fully and consciously aware of poor model accuracy, the bird's-eye-view is a good answer.

But we often assume there is no shelf above us when we stand up or that the garage door is fully up when we walk under it.

We assume there is no car passing us when we change lanes.

We assume our children are not behind us when we step back onto them (but not if they do something, however subtle, to make us aware of their presence).

Glancing frequently in the rear view mirror and to each side is an imperfect solution and a poorly learned, non-intuitive habit.

If we plan to change lanes in the next mile, we are usually OK. When we suddenly decide the car ahead is too slow and we must pass, instinct is not a reliable judge of model accuracy.

An available clear 360 degree view is a good start but the info must be integrated into our mental model. A 360 degree bird's eye view does not sound very effective; about like looking at your GPS and wondering if North is up or direction of travel.

A binocular helmet that presented full 180 degree pictures to both eyes (and tracked head movement) would allow the driver to scan with his eyes and turn his head to view his 360 degree surroundings.

If the driver constantly rotated his head 120 degrees to each side his mental model could be exact and accurate – but instinct does not demand we make this effort – and it would be tiresome if we tried to.

Maybe the 360-degree camera system would synthesize noises or tactile input for objects to the side and rear. Our model readily accepts those inputs.

The sudden cry of “mommy” or “daddy”, apparently from behind the car, would instantly stop us from backing into that truck hitch.

Maybe we would make some pre-lane-change tug on the steering wheel and the system would only then make a car noise or truck noise or bite us in the leg if there was a car or truck there - or a scraping noise if a low wall was there - and we would cruise in relative quiet.

But the model must be INSTINCTIVELY updated at whatever rate and complexity is required.


Who developed the 360 deg system that Nissan is using in the new Leaf?


Multiple human limitations are responsible for about 80% of the 36,000 road fatalities in the USA and 200,000+ worldwide, plus 10 times as many serious injuries, every year. Only major world wars and plagues did better.

Better road safety will be achieved with more driver assistance and eventually with autonomous driverless vehicles.

Specialized 'out of town' race tracks for die-hard 'fun' drivers will become a paying business, much as golf courses have.
