Toyota to invest additional $95M in Preferred Networks; joint R&D on deep learning AI tech in mobility-related fields
With the aim of accelerating joint research and development of artificial intelligence technologies in mobility-related fields, such as automated driving technology, Toyota Motor Corporation will invest an additional ¥10.5 billion (US $95 million) in Preferred Networks (PFN). (Earlier post.) Toyota will acquire stock in PFN through the allocation of new shares to a third party.
Toyota and PFN have been working on joint research and development since October 2014; to strengthen the relationship, Toyota invested approximately ¥1 billion (US $9.02 million) in PFN in December 2015. To date, the joint work has focused on object-recognition technology and vehicle-data analysis technology. This additional investment will further deepen the relationship between Toyota and PFN and spur joint research and development.
Toyota is actively researching and developing a wide range of technologies, including automated driving technologies that are expected to significantly affect the nature of mobility in the future.
PFN’s intelligence-related technologies (including machine learning, deep learning, and big data processing) are essential to Toyota. Toyota says that the overall goal of these initiatives is to foster the creation of a society in which mobility means safety and freedom.
PFN was founded in March 2014 to commercialize deep learning (DL) technology, with a focus on IoT. PFN advocates “Edge Heavy Computing,” in which the enormous amount of data generated by devices is handled in a distributed, collaborative manner at the edge of the network, and is pursuing innovation in three priority business areas: transportation systems, manufacturing, and bio/healthcare.
PFN collaborates with a range of organizations and promotes advanced initiatives through the development of its open-source deep learning framework, Chainer, and an integrated solution that includes applications, Deep Intelligence in Motion (DIMo).
DL is expected to revolutionize data analytics, which is currently based on traditional statistical modeling or conventional machine learning techniques, by virtue of two distinctive features.
First, DL models can easily handle extremely high-dimensional data. In traditional statistical modeling, the number of independent (input) variables is relatively small, which forces data scientists to disregard many potentially significant but seemingly irrelevant input variables. One important example of high-dimensional data is time-series data, which is prevalent in sensor streams from industrial devices. DL can capture the interactions among thousands or even millions of input variables and exploit every piece of information in the complex interactions that contribute to the output, resulting in significantly higher accuracy than conventional methods.
Second, DL is model-free: it does not assume a priori knowledge of the class of probability distribution, since any probability distribution can be approximated by a sufficiently complex neural network. This frees data scientists from making too many assumptions in advance (assumptions that might be incorrect or over-simplistic) and from exploring the enormous space of possible statistical models. Together, these two characteristics allow DL to be applied across a very wide range of application areas and to scale to large volumes of data.
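As a minimal sketch of the model-free point above (plain NumPy, illustrative only; the data, network size, and learning rate are arbitrary choices, not from PFN), a tiny two-layer network is fit to a nonlinear target by gradient descent without assuming any parametric form for the underlying function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = sin(3x) + noise; no distributional model is assumed.
x = rng.uniform(-1, 1, size=(256, 1))
y = np.sin(3 * x) + 0.05 * rng.normal(size=x.shape)

# Two-layer network with tanh hidden units (arbitrary small sizes).
n_hidden = 32
W1 = rng.normal(0, 0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
initial_loss = float(np.mean((pred0 - y) ** 2))

lr = 0.05
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                      # gradient of squared error w.r.t. pred
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # backpropagate through tanh
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

_, pred1 = forward(x)
final_loss = float(np.mean((pred1 - y) ** 2))
```

The network never encodes that the target is sinusoidal; with enough hidden units, the same code fits other smooth targets unchanged, which is the practical meaning of "model-free" here.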
PFN focuses on sectors such as automotive and industrial robotics, where enormous amounts of sensor data are generated but only traditional statistical modeling is commonly used. In the course of collaborating with industry leaders such as Toyota and Fanuc, PFN has become convinced that DL technologies can truly revolutionize data analytics in this domain; it has built up experience and knowledge of how DL can be applied in a variety of settings, and has come up with a number of innovative ideas, especially in the areas of recognition, prediction, and control.
In June, PFN released a major update of its open-source deep learning framework Chainer, called Chainer v2, with three major enhancements.
Improved memory efficiency during training. Chainer v2 significantly reduces memory usage without sacrificing training speed; PFN has confirmed that memory usage can be cut by 33% or more when training ResNet-50, a network widely used in image recognition. This makes it easier to design larger networks and allows typical networks to be trained with larger batch sizes.
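As a rough illustration of what a 33% memory reduction can mean in practice (the memory budget and per-sample figures below are hypothetical, not from PFN), a fixed GPU memory budget accommodates roughly 1.5× the batch size:

```python
# Back-of-the-envelope arithmetic with hypothetical numbers: a 33% cut in
# per-sample training memory lets a fixed GPU budget hold ~1.5x the batch.
budget_mb = 12000.0         # hypothetical GPU memory budget (MB)
per_sample_mb_v1 = 150.0    # hypothetical per-sample usage before the cut
per_sample_mb_v2 = per_sample_mb_v1 * (1 - 0.33)  # after a 33% reduction

batch_v1 = int(budget_mb // per_sample_mb_v1)  # largest batch that fits before
batch_v2 = int(budget_mb // per_sample_mb_v2)  # largest batch that fits after
```

Under these assumed numbers the feasible batch size grows from 80 to 119, a factor of about 1/(1 − 0.33) ≈ 1.49.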
Chainer’s accompanying array library, CuPy, has been split out into an independent project, allowing a broader range of HPC applications to be easily accelerated on GPUs. CuPy, a general-purpose array computation library, is highly compatible with NumPy, which is very popular in scientific computing, making it possible to run code written for NumPy faster on a GPU without altering it. By developing CuPy as a separate library, PFN aims to grow its user base and expand its applications beyond deep learning into other research and development fields.
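The NumPy compatibility described above is commonly exploited with a drop-in import pattern: the same array code runs on the GPU when CuPy is available and falls back to NumPy on the CPU otherwise. A minimal sketch (the specific computations are arbitrary examples):

```python
# Drop-in replacement pattern: bind either CuPy (GPU) or NumPy (CPU) to one
# name and write the array code once against that shared API.
try:
    import cupy as xp       # GPU-backed arrays (requires CUDA)
except ImportError:
    import numpy as xp      # same API, CPU fallback

a = xp.arange(12, dtype=xp.float32).reshape(3, 4)
row_means = a.mean(axis=1)      # identical call in NumPy and CuPy
frob_norm = xp.linalg.norm(a)   # likewise for linear-algebra routines
```

Because both libraries expose the same functions and `ndarray` methods for a large common subset of the API, porting existing NumPy code to the GPU often reduces to changing the import.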
A cleaner, more intuitive API. One of the major features of Chainer is its ability to describe a complex neural network intuitively as a program. Taking into account the community’s various use cases and needs, PFN removed unnecessary options and reorganized the interfaces to provide a more refined API. The more intuitive descriptions also make unintentional bugs less likely.