
New ultrafast camera for self-driving vehicles and drones

Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed an ultrafast high-contrast camera that could help self-driving cars and drones see better in extreme road conditions and in bad weather.

Unlike typical optical cameras, which can be blinded by bright light and are unable to make out details in the dark, NTU’s new smart camera can record the slightest movements and objects in real time. The new camera records changes in light intensity between scenes at nanosecond intervals, much faster than conventional video, and it stores the images in a data format that is many times smaller as well.

With a unique built-in circuit, the camera can perform an instant analysis of the captured scenes, highlighting important objects and details. Developed by Assistant Professor Chen Shoushun from NTU’s School of Electrical and Electronic Engineering, the new camera, named CeleX, is now in its final prototype phase.

Our new camera can be a great safety tool for autonomous vehicles, since it can see very far ahead like optical cameras but without the time lag needed to analyse and process the video feed. With its continuous tracking feature and instant analysis of a scene, it complements existing optical and laser cameras and can help self-driving vehicles and drones avoid unexpected collisions that usually happen within seconds.

—Asst. Prof. Chen

Chen unveiled the prototype of CeleX last month at the 2017 IS&T International Symposium on Electronic Imaging (EI 2017) in the US.

A typical digital camera sensor has several million pixels: sensor sites that record light information and are used to form the resulting picture. High-speed video cameras that record up to 120 frames per second generate gigabytes of video data, which must then be processed by a computer for a self-driving vehicle to “see” and analyze its environment.

The more complex the environment, the slower the processing of the video data, leading to lag times between “seeing” the environment and the corresponding actions that the self-driving vehicle has to take.
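
To put rough numbers on this (the resolution and bit depth below are illustrative assumptions; the article does not specify a reference camera), the raw data rate of a conventional frame-based feed adds up quickly:

```python
# Back-of-the-envelope data rate for a conventional frame-based camera.
# Resolution and bit depth are illustrative assumptions, not from the article.
width, height = 1920, 1080     # ~2-megapixel sensor
bytes_per_pixel = 1            # 8-bit monochrome
fps = 120                      # the high-speed video rate cited above

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * fps

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")    # ~2.1 MB
print(f"{bytes_per_second / 1e9:.2f} GB per second")  # ~0.25 GB/s
# Gigabytes accumulate within seconds, all of which must be processed
# before the vehicle can react.
```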

The CeleX sensor allows pixel-parallel image processing at the focal plane and event-driven readout. Each pixel in the sensor individually monitors the slope of change in light intensity and reports an event when a threshold is reached. Row and column arbitration circuits process the pixel events; when multiple requests arrive simultaneously, they grant access to the output port to one pixel at a time in a fair order. The response time to a pixel event is on the nanosecond scale.
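
A minimal software sketch of that pixel model (a simulation for illustration only; the CeleX hardware implements this logic in circuitry at the focal plane, and the threshold value here is an assumption):

```python
import numpy as np

def events_from_frames(frames, timestamps, threshold=0.15):
    """Emulate event-driven pixels: a pixel reports an event whenever its
    log-intensity has changed by more than `threshold` since the last
    event it fired. Returns (t, x, y, polarity) tuples; static pixels
    emit nothing, which is where the data savings come from."""
    log_ref = np.log1p(frames[0].astype(np.float64))  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log1p(frame.astype(np.float64))
        delta = log_now - log_ref
        ys, xs = np.nonzero(np.abs(delta) >= threshold)
        for y, x in zip(ys, xs):  # hardware arbiters serialize these fairly
            events.append((t, x, y, 1 if delta[y, x] > 0 else -1))
            log_ref[y, x] = log_now[y, x]  # reset reference after firing
    return events
```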

As such, the sensor can be tuned to capture moving objects above a chosen speed threshold. The speed of the sensor is not limited by traditional concepts such as exposure time or frame rate. It can detect fast motion that traditionally requires expensive high-speed cameras running at tens of thousands of frames per second, while producing roughly 1,000 times less data.
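
Continuing the back-of-the-envelope figures from above (the event rate and event size are assumptions for illustration), the claimed reduction is plausible against a camera running at tens of thousands of frames per second:

```python
# Assumed figures for illustration only.
events_per_second = 1_000_000          # a fairly busy scene
bytes_per_event = 8                    # packed (x, y, timestamp, polarity)
event_bytes = events_per_second * bytes_per_event  # 8 MB/s

fps_highspeed = 20_000                 # "tens of thousands of frames per second"
frame_bytes = 1920 * 1080 * 1 * fps_highspeed      # ~41 GB/s

print(f"{frame_bytes / event_bytes:,.0f}x less data")  # ~5,000x, on the 1,000x scale cited
```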

The CeleX chipset is a hardware-implemented video analytics system that receives the stream of pixel events from the sensor and performs value-added signal processing on it. The resulting system is a software-hardware co-processing platform, enabling high-speed implementation of video analytics tasks such as optical flow and convolution. The platform features a standard interface to existing vision systems, and it avoids the demanding computing-power requirements of existing vision-based systems, which are difficult to realize on mobile computing platforms.
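
As an illustration of why event-driven analytics suits such a platform (a sketch of the general technique, not the chipset's actual implementation), a convolution response map can be maintained incrementally, touching only a kernel-sized patch per incoming event rather than re-filtering whole frames:

```python
import numpy as np

def event_driven_convolution(events, kernel, shape):
    """Update a convolution response map incrementally: each event stamps
    the kernel into the map around its location, weighted by polarity,
    so only a kernel-sized patch changes per event."""
    out = np.zeros(shape, dtype=np.float64)
    k = kernel.shape[0] // 2            # kernel assumed square, odd-sized
    h, w = shape
    for t, x, y, pol in events:         # events as in the sketch above
        y0, y1 = max(y - k, 0), min(y + k + 1, h)
        x0, x1 = max(x - k, 0), min(x + k + 1, w)
        out[y0:y1, x0:x1] += pol * kernel[y0 - (y - k):y1 - (y - k),
                                          x0 - (x - k):x1 - (x - k)]
    return out
```

The per-event cost is constant, which is what makes tasks like this tractable in dedicated hardware at the sensor's event rate.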

Research into the sensor technology started in 2009 and has received $500,000 in funding from a Ministry of Education Tier 1 research grant and a Singapore-MIT Alliance for Research and Technology (SMART) Proof-of-Concept grant.

Chen and his researchers have spun off a start-up company named Hillhouse Tech to commercialize the new camera technology. The start-up is incubated by NTUitive, NTU’s innovation and enterprise company. Chen expects that the new camera will be commercially ready by the end of this year; Hillhouse is already in talks with global electronic manufacturers.

Comments

Davemart

Yet another technology clearly showing that Tesla is completely bonkers and spinning a yarn with their notion that the hardware they are putting in is all that is needed for level 5.

Over promising and under delivering as usual.

It is completely nuts not to develop the software suite first, co-ordinate it at that point with the best hardware practicably available, and release cars with actual installed and working capability.

Anyone who falls for the claptrap Tesla is peddling deserves what they will get.

HarveyD

Hardware and software needed for future practical all weather ADVs will be developed in the next 10 years or so.

All weather ADVs may not become commonplace much before 2030. All major vehicle manufacturers will offer ADVs in their BEVs and FCEVs.

Account Deleted

LOL

mahonj

Most cameras run at about 25 frames / second.
This is the approx update rate of the human eye.
So you do not need to go at nanosecond update rates, unless you are monitoring atomic bombs or stuff like that.
At 100 kph, or 27 m/sec, you move 27 mm every millisecond.
That should be good enough for anyone.
Even at 25 Hz, you are only moving 1 m / frame.
If you up that to 100 Hz, you have 27 cm / frame; that should be enough for anyone to track typical road behavior (both good and bad).
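
(A quick check of the figures in the comment above, using the stated 100 kph:)

```python
# Distance travelled per update interval at 100 kph.
speed_ms = 100 * 1000 / 3600     # 27.8 m/s
for hz in (1000, 25, 100):       # per millisecond, 25 Hz, 100 Hz
    print(f"{hz} Hz: {speed_ms / hz * 100:.1f} cm per interval")
# 1000 Hz -> ~2.8 cm (28 mm per ms), 25 Hz -> ~111 cm, 100 Hz -> ~28 cm
```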

SJC

24 FPS is fine, with good real time processing you can get what you need. It looks like LIDAR will be necessary for autonomy so more than a software upgrade will be needed.

Davemart

Get the software somewhere near working, then finally specify the sensors and hardware from those available at the time.

Specifying and installing hardware when they have not even got basic functions working, and claiming it will be 'good enough', is absurd.

Arnold

How can this not be better? Even at the refresh rates quoted above, there will be circumstances where a degraded field of view or software load gives raw speed the edge.
High speed is one thing, but combined with the compression algorithm it would imply higher reliability and lower power consumption.
Some of the comments seem a bit defensive.
