Light detection and ranging (lidar) technology provides a three-dimensional map of all objects around the vehicle regardless of external lighting conditions, making it a key element in autonomous vehicle (AV) and advanced driver assistance systems (ADAS), with Tesla being a notable holdout so far. This map, updated tens of times per second, can be used to estimate the position of the vehicle relative to its surroundings in real time.
Despite their crucial role in both AVs and ADAS, however, lidars currently lack a standardized measure for describing their performance. In other words, there is no widely accepted protocol for comparing one lidar with another. Although one could arguably compare lidars based solely on their manufacturers’ specifications, such comparisons are not very useful. This is because the performance metrics used by the manufacturers vary and are typically confidential.
Moreover, unlike lidars used for science, surveying, or defense applications, automotive-grade lidars are optimized for manufacturability, cost, and size. This is likely to lead to marked variations in performance that would be impossible to quantify without standardized tests.
To address this problem, Dr. Paul McManamon of Exciting Technology formed a national group in conjunction with SPIE, the international society for optics and photonics, launching a three-year effort to develop tests and performance standards for lidars used in AVs and ADAS.
The tests during the first year were led by Dr. Jeremy P. Bos, an associate professor at Michigan Technological University (MTU), with assistance from his PhD student, Zach Jeffries. Other authors included Charles Kershner from the National Geospatial-Intelligence Agency, who set up a ground-truth Riegl lidar for the test, and Akhil Kurup, also of MTU.
Overview of the test area as an RGB point cloud assembled from multiple positions across the test range, indicated by black circles. Black pixels indicate no return and are obstructed or shadowed due to observation geometry.—Jeffries et al.
In an open-access paper published recently in Optical Engineering, the team reports the findings of the first-year tests and a brief outline of the larger three-year plan.
Lidar engineers make design trade-offs to gain competitive advantages in performance and cost in what is a rapidly growing, highly competitive market. Some of these trade-offs include operating wavelength (typically between 850 and 1600 nm), range measurement based on either direct detection/Time-of-Flight (ToF) or coherent techniques, beam steering solutions (mechanically rotating components, MEMS mirrors, microlenses), and laser source type [vertical cavity surface-emitting lasers (VCSELs), edge-emitting diodes]. These design choices have trade-offs of their own, with differences in scan patterns, sampling frequency, achievable ranges, susceptibility to interference from other lidars, etc.—Jeffries et al.
The objective of these tests was to evaluate the range, accuracy, and precision of eight automotive-grade lidars using a survey-grade lidar as a reference. Bos, Jeffries, and the team set up various targets along a 200-meter path in an open field in Kissimmee, Florida. One key aspect of these targets that made the tests stand out from previous studies was that they were near-perfect matte surfaces with a calibrated 10% reflectivity across a wide spectrum. The researchers also measured the ability of the lidars to detect the target among highly reflective road signs.
Reflective road signs were placed near the targets to measure the precision and range of automotive lidars under more challenging conditions.—Jeffries et al.
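As a rough sketch of how the two headline metrics relate to such measurements (the target distance and range readings below are invented for illustration, not data from the paper): accuracy is how far the average reported range sits from the surveyed ground truth, while precision is the spread of the reports about their own mean.

```python
# Hypothetical example: one automotive lidar's range reports for a single
# target, compared against a survey-grade ground-truth distance.
# All numbers here are invented for illustration.
ground_truth_m = 50.00
measured_m = [50.03, 49.98, 50.05, 49.97, 50.02, 50.01, 49.99, 50.04]

n = len(measured_m)
mean_range = sum(measured_m) / n

# Accuracy: offset of the average measurement from the true range
accuracy_m = mean_range - ground_truth_m

# Precision: sample standard deviation of the measurements
variance = sum((r - mean_range) ** 2 for r in measured_m) / (n - 1)
precision_m = variance ** 0.5

print(f"accuracy:  {accuracy_m * 100:+.2f} cm")
print(f"precision: {precision_m * 100:.2f} cm")
```

A lidar can be precise but inaccurate (tightly clustered reports with a constant bias) or accurate but imprecise (reports scattered widely around the correct mean), which is why the study tracks both quantities separately.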
The test results were, in general, consistent with the values advertised in the manufacturers' datasheets. However, despite recording a mean precision of 2.9 cm across all the tested devices, the distribution of the measured values was not Gaussian.
Simply put, there was a non-negligible probability for these devices to report very imprecise values (error greater than 10 cm). In fact, in some cases, the measured range deviated from the real value by as much as 20 cm. Another important result was that the reflective road signs impaired the target detection performance of the lidars.
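To see why the non-Gaussian shape matters, a toy simulation can compare how often gross errors actually appear against what a pure Gaussian with the same 2.9 cm precision would predict. The 2% outlier rate and the uniform outlier model below are assumptions chosen purely for illustration, not the distribution measured in the paper:

```python
import math
import random

random.seed(0)

# Hypothetical error model (not from the paper): most returns are
# well-behaved, but a small fraction are gross outliers up to 20 cm off.
def simulated_error_m():
    if random.random() < 0.02:           # assumed occasional gross error
        return random.uniform(-0.20, 0.20)
    return random.gauss(0.0, 0.029)      # nominal 2.9 cm precision

errors = [simulated_error_m() for _ in range(100_000)]
observed_tail = sum(abs(e) > 0.10 for e in errors) / len(errors)

# Tail probability P(|error| > 10 cm) for a pure Gaussian, sigma = 2.9 cm
sigma = 0.029
z = 0.10 / (sigma * math.sqrt(2))
gaussian_tail = 2 * (1 - 0.5 * (1 + math.erf(z)))

print(f"observed P(|error| > 10 cm): {observed_tail:.4f}")
print(f"Gaussian prediction:         {gaussian_tail:.6f}")
```

Under a Gaussian assumption, a 10 cm error at 2.9 cm precision is a greater-than-3-sigma event and should be vanishingly rare; a heavy-tailed mixture like the one above produces such errors orders of magnitude more often, which is the practical concern the authors raise.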
The advertised range performance of lidars pertains to very specific conditions, and performance degrades significantly in the presence of a highly reflective adjacent object.—Jeremy Bos
Overall, the first round of tests provided important insights into the performance differences between different lidars, suggesting that the metrics reported by their respective manufacturers cannot be taken at face value. Still, Bos emphasizes that this is only the beginning.
The first-year tests were the simplest of them. In the second year, we will duplicate these tests for the characterized lidars while introducing confusion resulting from other automotive lidars approaching from the opposite direction. Additionally, we will measure the eye safety of the lidars. Finally, in the third year, we will include weather effects as a culmination of the complexity build-up.—Jeremy Bos
Zach Jeffries, Jeremy P. Bos, Paul McManamon, Charles Kershner, and Akhil Kurup (2023) "Towards open benchmark tests for automotive lidars, year 1: static range error, accuracy, and precision," Opt. Eng. 62(3), doi: 10.1117/1.OE.62.3.031211