ORNL researchers use stop-light cameras to reduce fuel consumption of less-efficient vehicles via traffic management
Approximately 6 billion gallons of fuel are wasted in the US each year as vehicles wait at stop lights or sit in dense traffic with engines idling, according to US Department of Energy estimates. The least efficient of these vehicles are the large, heavy trucks used for hauling goods, which burn far more fuel while idling than passenger cars do.
Now, researchers at Oak Ridge National Laboratory (ORNL) have designed a computer vision system—using the preexisting stop-light cameras of GRIDSMART, a Tennessee-based company that specializes in traffic-management services—that can visually identify vehicles at intersections, determine their gas mileage estimates, and then direct traffic lights to keep less-efficient vehicles moving to reduce their fuel consumption.
Examples from the ORNL Overhead Vehicle Dataset, generated with images captured by GRIDSMART cameras. Image: Thomas Karnowski/ORNL
The ORNL project is a first-year seed project funded by HPC4Mobility, the DOE Vehicle Technologies Office’s program for exploring energy efficiency increases in mobility systems.
Proving such a system could work with current technology was a complicated puzzle that required fitting together a lot of different pieces: high-tech cameras, vehicle datasets, artificial neural networks, and computerized traffic simulations.
To make such a camera-based control system work in the first place requires smart cameras placed at high-traffic intersections, able to capture images of vehicles and equipped to transmit the data. Such camera systems do exist—including one produced by GRIDSMART, a company located just a few miles from the ORNL campus in East Tennessee.
GRIDSMART’s camera systems are installed in 1,200 cities globally, replacing traditional ground sensors with overhead fisheye cameras that provide horizon-to-horizon vision tracking for optimal traffic-light actuation. The bell-shaped cameras connect to processor units running GRIDSMART client software that provides municipal traffic engineers with very detailed information, from traffic metrics to unobstructed views of accidents.
In addition to detecting vehicles, bicycles, and pedestrians for intersection actuation, the GRIDSMART processor counts vehicles and bicycles moving underneath the camera. For each vehicle count, we determine a length-based classification and what type of turn the vehicle made as it went through the intersection.—Tim Gee, principal computer vision engineer at GRIDSMART
This data can be used to adjust intersection timings to improve the flow of traffic. Additionally, the vehicle counts can be taken into consideration when planning for construction or lane changes, as well as helping measure the effects of traffic-control changes.
The team’s first step in February 2018 was to use GRIDSMART cameras to create an image dataset of vehicle classes. With GRIDSMART cameras conveniently installed on the ORNL campus, the team also employed a ground-based roadside sensor system being developed at ORNL, allowing them to combine the overhead images with high-resolution ground-level views.
Once vehicle-classification labels were applied using commercial software, and DOE fuel-economy estimates added, the team had a unique dataset to train a convolutional neural network for vehicle identification.
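In outline, each entry in such a dataset pairs an overhead image with its labels. The sketch below is illustrative only; the field names, paths, and figures are hypothetical, not the team's actual schema:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class VehicleRecord:
    """One labeled example: an overhead image plus its ground-truth labels."""
    overhead_image: str   # path to the GRIDSMART fisheye crop
    ground_image: str     # path to the roadside ground-level view
    vehicle_class: str    # make/model label applied by commercial software
    mpg_combined: float   # DOE combined fuel-economy estimate

# A couple of made-up records standing in for the real dataset
dataset = [
    VehicleRecord("ovd/0001_over.jpg", "ovd/0001_ground.jpg", "ford_f150", 20.0),
    VehicleRecord("ovd/0002_over.jpg", "ovd/0002_ground.jpg", "toyota_camry", 32.0),
]

# Examples per class matter: a deep network needs many images of each class,
# which is why class coverage becomes an issue later in the project.
per_class = Counter(r.vehicle_class for r in dataset)
```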
The resulting ORNL Overhead Vehicle Dataset showed that GRIDSMART cameras could indeed successfully capture useful vehicle data, gathering images of approximately 12,600 vehicles by the end of September 2018, with “ground truth” labels (makes, models, and MPG estimates) spanning 474 classifications. However, R&D staff member Thomas Karnowski of ORNL’s Imaging, Signals, and Machine Learning Group determined that these classifications weren’t numerous enough to effectively train a deep learning network—and the team didn’t have sufficient time left in their year-long project to gather more. So, where to find a larger, fine-grained vehicle dataset?
Karnowski recalled a vehicle-image project by Stanford University researcher Timnit Gebru that identified 22 million cars from Google Street View images, classifying them into more than 2,600 categories (such as make and model) and then correlating them with demographic data. With Gebru’s permission, Karnowski downloaded the dataset, and the team was ready to create a neural network as the second step in the project.
Gebru had used the AlexNet convolutional neural network for her project, so the team decided to try adapting it, too.
We got the same neural network and retrained it on her data and got very similar results to what she got—the difference is that we then used it to estimate fuel consumption by substituting vehicle types with their average fuel consumption, using DOE’s tables. That was a bit of an effort, too, but that’s what it’s all about.—Thomas Karnowski
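The substitution Karnowski describes can be pictured as a lookup from predicted class to average fuel economy. A minimal sketch (the class names and MPG figures are illustrative, not DOE's actual tables) converts MPG to gallons per mile and takes an expectation over the classifier's output probabilities:

```python
# Illustrative fuel-economy table (not DOE's actual figures)
MPG_BY_CLASS = {"sedan": 30.0, "pickup": 18.0, "semi_truck": 6.0}

def expected_gallons_per_mile(class_probs):
    """Expected fuel consumption given classifier output probabilities.

    class_probs: dict mapping class name -> probability (sums to 1).
    Working in gallons per mile (1/MPG) lets consumption average linearly.
    """
    return sum(p / MPG_BY_CLASS[c] for c, p in class_probs.items())

# A classifier fairly confident it saw a sedan, with some residual mass:
gpm = expected_gallons_per_mile({"sedan": 0.7, "pickup": 0.2, "semi_truck": 0.1})
```

Averaging in gallons per mile rather than MPG is the natural choice here, since fuel consumed, not miles per gallon, is what adds up across vehicles.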
The team produced another neural network for comparison using the Multinode Evolutionary Neural Networks for Deep Learning (MENNDL), a high-performance computing software stack developed by ORNL’s Computational Data Analytics Group. A 2018 finalist for the Association for Computing Machinery’s Gordon Bell Prize and a 2018 R&D 100 Award winner, MENNDL uses an evolutionary algorithm that not only creates deep learning networks but also evolves network design on the fly. By automatically combining and testing millions of “parent” networks to produce higher-performing “children,” MENNDL breeds optimized neural networks.
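MENNDL itself is a large HPC code, but the evolutionary idea can be shown in miniature. In the toy below, the genome fields and the fitness function are stand-ins (a surrogate score rather than actual network training); candidate designs are mutated and selected with elitism, so the best design never gets worse across generations:

```python
import random

random.seed(0)

# A "genome" is a candidate network design: here just depth and filter count.
def random_genome():
    return {"layers": random.randint(1, 8),
            "filters": random.choice([16, 32, 64, 128])}

def fitness(g):
    # Surrogate standing in for validation accuracy after training;
    # pretend the sweet spot is about 5 layers and 64 filters.
    return -((g["layers"] - 5) ** 2) - abs(g["filters"] - 64) / 16

def mutate(g):
    child = dict(g)
    if random.random() < 0.5:
        child["layers"] = max(1, min(8, child["layers"] + random.choice([-1, 1])))
    else:
        child["filters"] = random.choice([16, 32, 64, 128])
    return child

population = [random_genome() for _ in range(12)]
initial_best = max(fitness(g) for g in population)
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                              # elitist selection
    population = parents + [mutate(random.choice(parents)) for _ in range(8)]
best = max(population, key=fitness)
```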
Using Gebru’s training dataset, Karnowski’s team ran MENNDL on the now-decommissioned Cray XK7 Titan—once rated as the most powerful supercomputer in the world at 27 petaflops—at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL. Karnowski said that while MENNDL produced some novel architectures, their classification accuracy did not surpass that of the team’s AlexNet-derived network. With additional time and image data for training, Karnowski believes MENNDL could have produced a better-performing network, but the team was nearing its deadline.
Lacking an available city-wide grid of intersections equipped with GRIDSMART traffic lights, Karnowski’s team instead turned to computer simulations to test their system. Simulation of Urban MObility (SUMO) is an open-source simulation suite that enables researchers to model traffic systems, including vehicles, public transportation, and even pedestrians. SUMO allows for custom models, so Karnowski’s team was able to adapt it to their project. Adding a “visual sensor model” to the SUMO simulation environment, the team used reinforcement learning to guide a grid of traffic-light controllers to reduce wait times for larger vehicles.
In a real GRIDSMART system, they just send vehicle data to a controller, and it says, ‘I’ve got cars waiting, so it’s time to change the light.’ In our proof-of-concept system, that information would then be fed to a controller that can look at multiple intersections and try to say, ‘We’ve got high-consumption vehicles coming in this direction, and lower-consumption vehicles in this other direction—let’s change the light timing so we favor the direction where there’s more fuel consumption.’—Thomas Karnowski
The method was tested under a variety of traffic scenarios designed to evaluate the potential for fuel savings with visual sensing. In particular, some scenarios with heavy truck usage suggested savings of up to 25% in fuel consumption with minimal impact on wait times. In other scenarios, the system was trained on heavy-truck traffic but evaluated under more balanced conditions; although the fuel savings there were not quantified, the trained reinforcement-learning controller adapted readily to the new conditions.
All these test cases were limited in scope, intended only to establish proof of concept, and more work is needed to accurately assess the impact of this approach. Karnowski hopes to continue developing the system with larger datasets, improved classifiers, and more expansive simulations.
GRIDSMART sees the project’s results as a preview of promising new services for its customers.
Work was funded by the Vehicle Technologies Office’s HPC4Mobility seed project program of the US Department of Energy’s Office of Energy Efficiency and Renewable Energy.
T. Karnowski, R. Tokola, S. Oesch, M. Eicholtz, J. Price, and T. Gee, “Estimating Vehicle Fuel Economy from Overhead Camera Imagery and Application for Traffic Control,” paper presented at the IS&T International Symposium on Electronic Imaging Science and Technology, Burlingame, CA, 26–30 January 2020. doi: 10.2352/ISSN.2470-1173.2020.6.IRIACV-070 (open access)