
TU Wien researchers develop neural hardware for image recognition in nanoseconds

Researchers at TU Wien (Vienna) have developed an ultra-fast image sensor with a built-in neural network; the sensor can be trained to recognize certain objects. They describe their work on ultrafast machine vision in a paper in Nature.

Machine vision technology has taken huge leaps in recent years, and is now becoming an integral part of various intelligent systems, including autonomous vehicles and robotics. Usually, visual information is captured by a frame-based camera, converted into a digital format and processed afterwards using a machine-learning algorithm such as an artificial neural network (ANN). The large amount of (mostly redundant) data passed through the entire signal chain, however, results in low frame rates and high power consumption. Various visual data preprocessing techniques have thus been developed to increase the efficiency of the subsequent signal processing in an ANN.

Here we demonstrate that an image sensor can itself constitute an ANN that can simultaneously sense and process optical images without latency. Our device is based on a reconfigurable two-dimensional (2D) semiconductor photodiode array, and the synaptic weights of the network are stored in a continuously tunable photoresponsivity matrix. We demonstrate both supervised and unsupervised learning and train the sensor to classify and encode images that are optically projected onto the chip with a throughput of 20 million bins per second.

—Mennel et al.

Neural networks are artificial systems modeled on the brain: nerve cells connect to many other nerve cells, and when one cell is active, it can influence the activity of neighboring cells. Artificial learning on a computer works on the same principle: a network of neurons is simulated digitally, and the strength with which one node of the network influences the others is adjusted until the network shows the desired behavior.
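The weight-adjustment principle described above can be illustrated with a minimal single-neuron sketch (purely illustrative; the AND-gate task, learning rate, and epoch count are assumptions, not from the paper):

```python
# Minimal sketch: one artificial neuron whose connection strengths are
# nudged until the network shows the desired behavior (here: an AND gate).
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]

w = [0.0, 0.0]   # connection strengths ("synaptic weights")
b = 0.0          # bias
lr = 0.1         # learning rate (assumed)

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Perceptron rule: strengthen or weaken each connection in proportion
# to the error it contributed to.
for _ in range(20):
    for x, t in zip(inputs, targets):
        err = t - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x in inputs])  # -> [0, 0, 0, 1]
```

The same idea — iteratively tuning connection strengths toward a target response — is what the TU Wien chip does physically, with photoresponsivities in place of software weights.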

Typically, the image data is first read out pixel by pixel and then processed on the computer. We, on the other hand, integrate the neural network with its artificial intelligence directly into the hardware of the image sensor. This makes object recognition many orders of magnitude faster.

—Thomas Mueller, corresponding author

The chip was developed and manufactured at TU Wien. It is based on photodetectors made of tungsten diselenide—an ultra-thin material consisting of only three atomic layers. The individual photodetectors, the “pixels” of the camera system, are all connected to a small number of output elements that provide the result of object recognition.


Illustration of the ANN photodiode array. All subpixels with the same color are connected in parallel to generate M output currents. Mennel et al.

In our chip, we can specifically adjust the sensitivity of each individual detector element—in other words, we can control the way the signal picked up by a particular detector affects the output signal. All we have to do is simply adjust a local electric field directly at the photodetector.

—Lukas Mennel, first author

This adaptation is done externally, with the help of a computer program. One can, for example, use the sensor to record different letters and change the sensitivities of the individual pixels step by step until a certain letter always leads exactly to a corresponding output signal. This is how the neural network in the chip is configured—making some connections in the network stronger and others weaker.
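The training loop described above can be sketched as a software analogue (this is not the authors' code; the pattern sizes, the delta-rule update, and all names are assumptions). Each output element sums the photocurrents of its parallel subpixels, and the per-pixel responsivities play the role of synaptic weights:

```python
# Illustrative software analogue of the sensor-ANN: output currents are
# weighted sums of pixel illumination, with the responsivity matrix R
# acting as the trainable weights.
import random

N_PIXELS, N_OUTPUTS = 9, 2          # 3x3 image, two classes (assumed sizes)

# Two toy 3x3 light patterns standing in for projected letters.
letters = {
    0: [1, 0, 1, 0, 1, 0, 1, 0, 1],  # "X"-like pattern
    1: [1, 1, 1, 1, 0, 0, 1, 1, 1],  # "C"-like pattern
}

random.seed(0)
R = [[random.uniform(-0.1, 0.1) for _ in range(N_OUTPUTS)]
     for _ in range(N_PIXELS)]       # tunable "responsivity" matrix

def output_currents(image):
    # Each output element sums the photocurrents R * P of its subpixels.
    return [sum(R[p][o] * image[p] for p in range(N_PIXELS))
            for o in range(N_OUTPUTS)]

# Delta-rule training: raise or lower individual pixel sensitivities,
# step by step, until each letter drives its own output line hardest.
lr = 0.05
for _ in range(100):
    for label, image in letters.items():
        out = output_currents(image)
        for o in range(N_OUTPUTS):
            target = 1.0 if o == label else 0.0
            err = target - out[o]
            for p in range(N_PIXELS):
                R[p][o] += lr * err * image[p]

for label, image in letters.items():
    print(label, output_currents(image).index(max(output_currents(image))))
```

On the real chip, the update step is applied not to numbers in memory but to local electric fields at each photodetector; once trained, the matrix-vector product happens in the analog domain, which is why no computer is needed afterwards.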

Once this learning process is complete, the computer is no longer needed. The neural network can now work alone. If a certain letter is presented to the sensor, it generates the trained output signal within 50 nanoseconds—for example, a numerical code representing the letter that the chip has just recognized.

Our test chip is still small at the moment, but you can easily scale up the technology depending on the task you want to solve. In principle, the chip could also be trained to distinguish apples from bananas, but we see its use more in scientific experiments or other specialized applications.

From fracture mechanics to particle detection—in many research areas, short events are investigated. Often it is not necessary to keep all the data about this event, but rather to answer a very specific question: Does a crack propagate from left to right? Which of several possible particles has just passed by? This is exactly what our technology is good for.

—Thomas Mueller


  • Mennel, L., Symonowicz, J., Wachter, S. et al. (2020) “Ultrafast machine vision with 2D material neural network image sensors.” Nature 579, 62–66 doi: 10.1038/s41586-020-2038-x


