
Mitsubishi Electric showcases EMIRAI 3 xDAS assisted-driving concept car at CES

Mitsubishi Electric Automotive America introduced its EMIRAI 3 xDAS concept car at CES 2016 this week. Building on the EMIRAI 2 xDAS, which was introduced at the 2013 Tokyo Motor Show, Mitsubishi Electric developed the EMIRAI 3 xDAS with evolved technologies for human-machine interface (HMI), driver sensing, telematics, and light control.

The LCD panels on the dashboard and center console are laminated with an optical bonding process for high visibility and operability, as well as aesthetic harmony with vehicle interiors. The HMI center console features two separate tablet-size LCD panels mounted in such a way as to give the appearance of a larger, curved panel.


The high-visibility panels reduce reflections with optical-bonding and optical-design technologies.

Display items can be changed according to user preferences. Cloud content synchronization and selectable content layouts enable drivers to create highly personalized interiors.

Mitsubishi is experimenting with a “three-finger twist” gesture on the center console panel that pulls up controls for simplified adjustment of air temperature and music volume. The control can be slid between the top and bottom screens, changing function as it moves. The result is reduced need for eye movement away from the road.
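Mitsubishi has not published how the gesture is recognized, but the core of any twist detector is estimating how far a set of touch points has rotated about its centroid between frames. A minimal sketch of that calculation (the function names and frame format are assumptions, not Mitsubishi's API):

```python
import math

def centroid(points):
    """Mean of a list of (x, y) touch points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def twist_angle(prev_points, curr_points):
    """Average rotation (radians) of the touch points about their centroid
    between two frames. Positive means counter-clockwise."""
    cx, cy = centroid(prev_points)
    total = 0.0
    for (px, py), (qx, qy) in zip(prev_points, curr_points):
        a0 = math.atan2(py - cy, px - cx)
        a1 = math.atan2(qy - cy, qx - cx)
        # Wrap the delta into (-pi, pi] so small twists are measured correctly.
        delta = (a1 - a0 + math.pi) % (2 * math.pi) - math.pi
        total += delta
    return total / len(prev_points)

# Example: three fingers (an equilateral triangle around the centroid)
# rotated 90 degrees counter-clockwise.
prev = [(1.0, 0.0), (-0.5, math.sqrt(3) / 2), (-0.5, -math.sqrt(3) / 2)]
curr = [(0.0, 1.0), (-math.sqrt(3) / 2, -0.5), (math.sqrt(3) / 2, -0.5)]
angle = twist_angle(prev, curr)  # ~pi/2
```

A production gesture recognizer would also require exactly three contacts, debounce noise, and map the accumulated angle to a volume or temperature step.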


A 3D HUD, which appears in the combiner, provides three-dimensional images of objects 10 meters or more ahead of the driver so that the driver can keep his or her eyes on the road ahead.

The 3D imaging, based on binocular disparity, adjusts the position of imagery in the combiner according to specific situations, such as turning or driving on an expressway, for safer, easier driving.

The driver’s operating condition is sensed with a camera and a non-contact cardiograph co-developed with the National University Corporation Kyushu Institute of Technology. The driver’s face direction and line of sight are sensed via the camera.

The system also proactively analyzes map data to identify intersections with poor visibility, and then displays side-camera views looking up and down the cross street. Further, the system learns to react automatically whenever the same location and situation is encountered again.
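The article does not describe how the system remembers such locations, but the learning step can be sketched as a lookup keyed on position: once a spot has required the side-camera view, revisiting it triggers the view automatically. A toy illustration under that assumption (class and method names are hypothetical):

```python
class IntersectionMemory:
    """Remembers locations where the side-camera view was triggered, so the
    same assistance can be offered automatically on a revisit.

    A toy sketch: a real system would match against map data, vehicle
    heading, and GPS uncertainty rather than a rounded-coordinate lookup.
    """

    def __init__(self, precision=4):
        # 4 decimal places of latitude/longitude is roughly an 11 m grid.
        self.precision = precision
        self.known = set()

    def _key(self, lat, lon):
        return (round(lat, self.precision), round(lon, self.precision))

    def record(self, lat, lon):
        """Store a location where poor visibility required the camera view."""
        self.known.add(self._key(lat, lon))

    def should_show_cameras(self, lat, lon):
        """True if this location was previously flagged."""
        return self._key(lat, lon) in self.known

mem = IntersectionMemory()
mem.record(35.6895, 139.6917)                            # first encounter
revisit = mem.should_show_cameras(35.68951, 139.69171)   # same grid cell
```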

A cloud-based application analyzes the driver’s physical condition by comparing current behavior with past behavioral data stored in the cloud. If fatigue is detected, suitable rest stops are recommended.
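One simple way to frame "comparing current behavior with past behavioral data" is as a deviation score against the driver's own baseline. The sketch below illustrates that idea only; the metric (steering corrections per minute) and the threshold are assumptions, not Mitsubishi's published method:

```python
from statistics import mean, stdev

def fatigue_score(current, history):
    """Z-score of a current behavior metric against the driver's
    historical baseline stored in the cloud (toy illustration)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return 0.0
    return (current - mu) / sigma

# Hypothetical metric: steering corrections per minute on past trips.
history = [12, 11, 13, 12, 12, 11, 13]
score = fatigue_score(18, history)

recommendation = None
if score > 2.0:  # well above this driver's normal range
    recommendation = "suggest a rest stop"
```

The point of using the driver's own history, rather than a fixed limit, is that normal behavior varies between drivers; a deviation from one's personal baseline is a more meaningful fatigue signal.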

