
Hesai launches new ultra-wide FOV, long-range ATX lidar designed for ADAS series production vehicles

Hesai Technology released its new ultra-wide field of view (FOV), long-range lidar: the ATX. With a longer detection range, higher resolution, and wider FOV, the ATX empowers intelligent vehicles with excellent 3D perception. The new lidar is built on Hesai’s 4th-generation technology platform, with comprehensive upgrades to its laser transceiver module and a reduced size.

The ATX incorporates the market-validated transceiver architecture of Hesai’s AT series, significantly increasing module integration and simplifying the core optical scanning structure, all while maintaining a compact and lightweight form.

Hesai has shipped more than 300,000 units of the AT128 to date. Compared to the AT128, the ATX is 60% smaller by volume and almost half the weight, has a minimum surface window only 25 mm tall, and consumes just 8 W.

With its compact size and low power consumption, the ATX can be flexibly integrated at various positions on a vehicle, including on the roof, behind the windshield, or inside the front headlights.

The ATX has a maximum detection range of 300 meters and a horizontal FOV of 140°, providing expansive visibility of complex road conditions such as surrounding vehicles and pedestrians. Its ultra-wide FOV also supplies vehicle systems with comprehensive and precise perception information. The sensor can identify conditions such as rain, fog, exhaust fumes, and water droplets, mark them in real time at the pixel level, and filter out over 99% of environmental noise.
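The per-pixel noise marking described above amounts to tagging each return and discarding the tagged ones downstream. A minimal sketch of that consumer-side step, assuming a hypothetical point format with an `is_noise` flag (this is illustrative, not Hesai's actual data structure or SDK):

```python
# Hypothetical sketch: downstream filtering of lidar returns that the sensor
# has flagged as environmental noise (rain, fog, exhaust, spray).
# The dict field names ("x", "y", "z", "is_noise") are assumptions for
# illustration, not Hesai's actual point-cloud format.

def filter_noise(points):
    """Keep only points not flagged as environmental noise."""
    return [p for p in points if not p["is_noise"]]

cloud = [
    {"x": 1.0, "y": 0.2, "z": 0.1, "is_noise": False},  # solid return (vehicle)
    {"x": 5.0, "y": 1.1, "z": 0.4, "is_noise": True},   # rain droplet
    {"x": 9.3, "y": 0.0, "z": 0.2, "is_noise": False},  # road surface
]
clean = filter_noise(cloud)  # the flagged droplet return is dropped
```

In practice this filtering can happen inside the sensor's firmware or in the perception stack; either way the pixel-level flags let downstream software treat weather clutter and real obstacles differently.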

The ATX has already received design wins and nominations from leading global OEMs. Large-scale mass production is expected to begin in the first quarter of 2025.

Comments

Variant003

I think having computers try to focus on everything at the same time is overloading the computing power we have in cars. I would think a better approach would be to mimic mammalian vision - overall scan of area, focus on "points of interest".

New lidar and radar systems don't need to focus on everything all the time; rather, they need to learn to overview-scan the entire area and only focus a detailed scan on important points of interest. This is where AI may help, training these systems to scan the road ahead more efficiently.

I wonder if this would require a discrete lower-rate/resolution scanner reading the entire field of view, with a separate high-rate/resolution scanner to focus in detail? Or can the rate/resolution be scaled on the fly?
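The commenter's two-tier idea can be sketched as a coarse full-FOV sweep followed by a dense re-scan of one region of interest. Everything here is an illustrative assumption (function names, the 5° and 0.2° step sizes, and the single ROI); it is only meant to show the scan-angle arithmetic, not any real lidar control API:

```python
# Sketch of foveated ("two-tier") scanning: one sparse pass over the whole
# field of view, then a dense pass over a region of interest (ROI).
# All names and step sizes are illustrative assumptions.

def coarse_scan(fov_deg, step_deg):
    """Azimuth angles for a sparse sweep of the whole FOV, centered on 0."""
    n = round(fov_deg / step_deg)
    return [i * step_deg - fov_deg / 2 for i in range(n + 1)]

def dense_rescan(roi_center_deg, roi_width_deg, step_deg):
    """A fine grid of azimuth angles around one region of interest."""
    n = round(roi_width_deg / step_deg)
    return [roi_center_deg - roi_width_deg / 2 + i * step_deg for i in range(n + 1)]

sparse = coarse_scan(140, 5)       # full 140-degree FOV at 5-degree steps: 29 angles
fine = dense_rescan(10, 4, 0.2)    # 4-degree ROI near azimuth 10 at 0.2-degree steps
```

The coarse pass costs far fewer samples per frame than scanning everything at full resolution, which is the computing-budget point the comment is making; the open question of whether one scanner can switch modes on the fly is a hardware question this sketch doesn't answer.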

Bonus if you get better vision than humans "seeing" through fog and smoke.

I think this is only one tool that needs to work together with other tools, such as stereo vision cameras, to provide enough confidence for L3/L4 driver aids. I also think a simple "confidence" meter would greatly improve customer acceptance. My idea is a simple light that changes from red to yellow to green, maybe even blue, depending on the computer's confidence level for self-driving.
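The confidence-light idea boils down to mapping a confidence score to a color. A toy version, assuming a score in [0, 1] and arbitrary thresholds chosen purely for illustration:

```python
# Toy version of the commenter's "confidence meter": map a self-driving
# confidence score in [0, 1] to an indicator color.
# The threshold values are arbitrary assumptions, not from any real system.

def confidence_color(score):
    if score >= 0.95:
        return "blue"    # very high confidence
    if score >= 0.80:
        return "green"   # normal operation
    if score >= 0.50:
        return "yellow"  # degraded confidence, be ready to take over
    return "red"         # low confidence, driver should take control
```

A real system would also need hysteresis so the light doesn't flicker between colors as the score hovers near a threshold.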

Sorry, I'm going to stop rambling now...

Jason

GdB

C-V2X connected-car tech is a superpower sensor that can effectively see through anything and around corners. C-V2X plus cameras is all that is needed if universally adopted, including pedestrians carrying C-V2X-enabled phones.

The comments to this entry are closed.