Toyota Research Institute outlines progress in automated driving; Platform 2.1; new Luminar LiDAR
27 September 2017
Toyota Research Institute (TRI) outlined some of its progress in the development of automated driving technology and other project work, including the development of robots for in-home support of people.
Since unveiling its Platform 2.0 research vehicle in March 2017 (earlier post), TRI has quickly updated its automated driving technology to Platform 2.1. In parallel with the creation of this test platform, TRI said it has made strong advances in deep learning computer perception models that allow the automated vehicle system to more accurately understand the vehicle's surroundings, detect objects and roadways, and better predict a safe driving route.
Platform 2.1 vehicle.
These new architectures are faster, more efficient and more accurate. In addition to object detection, the models’ prediction capabilities can also provide data about road elements, such as road signs and lane markings, to support the development of maps, which are a key component of automated driving functionality.
Platform 2.1 also expands TRI’s portfolio of suppliers, incorporating a new high-fidelity LiDAR system provided by Luminar. This new LiDAR provides a longer sensing range, a much denser point cloud to better detect positions of three-dimensional objects, and a field of view that is the first to be dynamically configurable, which means that measurement points can be concentrated where sensing is needed most.
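Luminar has not published how the configurable field of view is programmed; the Python sketch below only illustrates the general idea of concentrating a fixed scan-line budget in a region of interest, and every name, type, and threshold in it is a hypothetical assumption rather than part of the actual sensor API.

```python
# Hypothetical sketch: concentrating LiDAR scan lines in a region of interest.
# All names and parameters are illustrative assumptions, not Luminar's API.
from dataclasses import dataclass

@dataclass
class ScanRegion:
    min_elevation_deg: float   # lower edge of this slice of the vertical FOV
    max_elevation_deg: float   # upper edge of this slice of the vertical FOV
    lines: int                 # scan lines allocated to this slice

def allocate_scan_lines(total_lines: int, roi: tuple, full_fov: tuple,
                        roi_fraction: float = 0.7) -> list:
    """Split a fixed scan-line budget so most lines land inside the ROI
    (e.g. the road ahead) and the remainder covers the rest of the FOV."""
    roi_lines = int(total_lines * roi_fraction)
    rest_lines = total_lines - roi_lines
    return [
        ScanRegion(full_fov[0], roi[0], rest_lines // 2),
        ScanRegion(roi[0], roi[1], roi_lines),
        ScanRegion(roi[1], full_fov[1], rest_lines - rest_lines // 2),
    ]

if __name__ == "__main__":
    # Concentrate 70% of 64 scan lines between -5 and +5 degrees elevation.
    for region in allocate_scan_lines(64, roi=(-5.0, 5.0), full_fov=(-15.0, 15.0)):
        print(region)
```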
Luminar’s LiDAR delivers more than an order of magnitude greater resolution than current sensors and can see dark objects, such as a tire (10% reflectivity), at more than 200 meters, compared to less than 40 meters for current sensors. The sensor is also the first to allow resolution to be concentrated where it is needed most, in real time, enabling the car to clearly see and recognize cars, people, and objects, even at a distance.
Luminar was founded five years ago and began development of a new LiDAR architecture. Luminar took a new approach by building all major components of its system from the ground up: lasers, receivers, scanning mechanisms and processing electronics. The radical architecture requires only a single laser, a single receiver, and an ultra-fast scanner to collect millions of points of information about the environment, using just a fraction of the components found in today’s LiDAR systems.
The new LiDAR is married to the existing sensing system for 360-degree coverage. TRI expects to source additional suppliers as disruptive technology becomes available in the future.
On Platform 2.1, TRI created a second vehicle control cockpit on the front passenger side with a fully operational drive-by-wire steering wheel and pedals for acceleration and braking. This setup allows the research team to probe effective methods of transferring vehicle control between the human driver and the autonomous system in a range of challenging scenarios. It also helps with development of machine learning algorithms that can learn from expert human drivers and provide coaching to novice drivers.
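TRI does not describe its handover logic, but the transfer of control the dual-cockpit setup is meant to probe can be pictured as a small state machine. The sketch below is a rough illustration only; the state names, signals, and rules are assumptions, not TRI's actual implementation.

```python
# Hypothetical sketch of control handover between the human driver and the
# automated system in a dual-cockpit test vehicle. Illustrative only.
from enum import Enum, auto

class Authority(Enum):
    HUMAN = auto()       # the human driver has control authority
    AUTOMATED = auto()   # the automated driving system has control authority

def next_authority(current: Authority, driver_requests_control: bool,
                   driver_confirms_handover: bool, system_healthy: bool) -> Authority:
    """Decide who controls the vehicle on the next control cycle."""
    # The human can always reclaim control, and any system fault returns
    # authority to the human immediately.
    if driver_requests_control or not system_healthy:
        return Authority.HUMAN
    # Control transfers to the automated system only on explicit confirmation.
    if current is Authority.HUMAN and driver_confirms_handover:
        return Authority.AUTOMATED
    return current
```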
TRI has also designed a unified approach to showing the various states of autonomy in the vehicle, using a consistent UI across screens, colored lights and a tonal language that is tied into Guardian and Chauffeur. The institute is also experimenting with increasing a driver’s situational awareness by showing a point cloud representation of everything the car “sees” on the multi-media screen in the center stack.
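TRI has not published the details of that design language; as a purely illustrative assumption, a unified presentation could amount to a single lookup from autonomy state to the same set of cues on every surface, along the lines of this sketch (labels, colors, and tones here are invented, not TRI's).

```python
# Hypothetical mapping of autonomy states to consistent UI cues (screen label,
# light color, audio tone). The actual TRI design language is not detailed here.
AUTONOMY_UI = {
    "manual":    {"label": "Driver in control",   "light": "white", "tone": "none"},
    "guardian":  {"label": "Guardian monitoring", "light": "amber", "tone": "soft chime"},
    "chauffeur": {"label": "Chauffeur driving",   "light": "teal",  "tone": "confirmation"},
}

def ui_cues(state: str) -> dict:
    """Return the same cues for a given state regardless of display surface."""
    return AUTONOMY_UI[state]
```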
With its broad-based advances in hardware and software, Platform 2.1 is a research tool for concurrent testing of TRI’s dual approaches to vehicle autonomy―Guardian and Chauffeur―using a single technology stack.
Under Guardian, the human driver maintains vehicle control and the automated driving system operates in parallel, monitoring for potential crash situations and intervening to protect vehicle occupants when needed.
Chauffeur is Toyota’s version of SAE Level 4/5 autonomy where all vehicle occupants are passengers. Both approaches use the same technology stack of sensors and cameras.
The platform includes the ability of the Guardian system to detect distracted or drowsy driving in certain situations, and to take action if the driver does not react to turns in the road. In such a situation, the system first warns and then will intervene with braking and steering to safely follow the road’s curvature. Chauffeur test scenarios demonstrate the vehicle’s ability to drive itself on a closed course, navigate around road obstacles, and make a safe lane change around an impediment in its path with another vehicle traveling at the same speed in the lane next to it.
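The escalation from warning to intervention described above can be sketched as a simple per-cycle decision. The attention signal and deviation thresholds below are illustrative assumptions, not TRI values.

```python
# Hypothetical sketch of the Guardian warn-then-intervene escalation.
WARN_THRESHOLD_M = 0.3        # assumed lane deviation that triggers a warning
INTERVENE_THRESHOLD_M = 0.6   # assumed deviation that triggers intervention

def guardian_step(driver_attentive: bool, lane_deviation_m: float) -> str:
    """Return the Guardian action for one control cycle."""
    if driver_attentive and lane_deviation_m < WARN_THRESHOLD_M:
        return "monitor"
    if lane_deviation_m >= INTERVENE_THRESHOLD_M:
        # The driver has not reacted; brake and steer to follow the road.
        return "intervene"
    return "warn"
```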
In addition to real-world testing, TRI is using simulation to accurately and safely test engineering assumptions.
Robotics and artificial intelligence. TRI is also making advancements in robotics and artificial intelligence. As part of its research into human support robots that can assist with tasks in the home, such as item retrieval, TRI has pioneered new tools to give future robots enhanced, human-like dexterity in order to grasp and manipulate objects so that they are not dropped or damaged.
TRI is also applying computer vision and artificial intelligence to robot development, allowing robots to detect the physical presence of humans and objects, note their locations and retrieve objects for humans when prompted. The robots can detect when objects have been relocated, updating the item’s location in the robot’s database, and even detect faces of known people and differentiate individuals.
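A minimal sketch of the bookkeeping side of that behavior, assuming an upstream perception system supplies named detections with positions, might look like the following; the class and field names are hypothetical, not TRI's.

```python
# Hypothetical sketch: keeping an object database current as a home robot
# re-detects items. Structures and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ObjectRecord:
    name: str
    location: tuple       # (x, y, z) in the home map frame
    last_seen: float      # timestamp of the latest detection

class ObjectDatabase:
    def __init__(self):
        self._records = {}

    def observe(self, name: str, location: tuple, timestamp: float) -> None:
        """Insert a new object, or update its location if it has been moved."""
        record = self._records.get(name)
        if record is None or record.location != location:
            self._records[name] = ObjectRecord(name, location, timestamp)
        else:
            record.last_seen = timestamp

    def locate(self, name: str):
        """Return the last known location, or None if the object was never seen."""
        record = self._records.get(name)
        return record.location if record else None
```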
TRI’s progress in robotics has been made possible by its ability to increase the value and accuracy of simulation to augment physical testing. Since it is impossible to physically test the wide variety of situations robots may encounter in the real world, the institute uses simulated environments, constantly adapting them with data collected in real-world testing for greater precision.
Additionally, TRI is pursuing new concepts for applying artificial intelligence inside a vehicle cabin to keep occupants comfortable, safe and satisfied. The institute has created a simulator showing an in-car AI agent that can detect a driver’s skeletal pose, head and gaze position and emotion to anticipate needs or potential driving impairments. For example, when the system detects the driver taking a drink and facial expressions which might indicate discomfort, the agent hypothesizes that the driver might be feeling warm and can adjust the air conditioning or roll down the windows. If the agent detects drowsiness, it might provide a verbal prompt in the cabin suggesting that the driver pull over for coffee or navigate the car to a coffee shop.
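The examples TRI gives read like simple condition-to-action rules sitting on top of the perception models; the sketch below is only a hypothetical rendering of those two examples, with all inputs assumed to come from upstream driver-monitoring models.

```python
# Hypothetical rule-based sketch of the in-cabin agent behavior described
# above. Inputs and actions are illustrative assumptions, not TRI's system.

def cabin_agent(drinking: bool, discomfort: bool, drowsy: bool) -> list:
    """Map a perceived driver state to cabin actions (illustrative only)."""
    actions = []
    if drinking and discomfort:
        # Hypothesis: the driver may be feeling warm.
        actions.append("lower cabin temperature or open the windows")
    if drowsy:
        actions.append("suggest a coffee stop and offer to navigate to one")
    return actions or ["no action"]
```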
White paper. Toyota also released a comprehensive overview of its work on automated driving. The white paper includes the philosophy that guides its approach to the technology, its ongoing research programs, and its near-term product plans.
The white paper summarizes the dual concepts of Guardian and Chauffeur that guide Toyota’s automated driving research, and the Mobility Teammate Concept, which represents Toyota’s belief that interactions between drivers and cars should mirror those between close friends who share a common purpose, watch over each other, and, when in need, help each other out.
Excellent ideas towards ADVs.
All those sensors will have to be miniaturized and fitted into head/tail lights?
Posted by: HarveyD | 27 September 2017 at 01:36 PM