Summary of the latest technology in global autonomous vehicle hardware and software (2020)

Entering 2020, autonomous driving technology has reached the point where large-scale commercialization is required to prove its technical value.

Whether in closed or semi-closed scenarios such as mining areas, ports, and industrial parks, or with RoboTaxis and RoboTrucks on open roads, technology is the foundation for commercializing autonomous driving in each scenario.

This report covers perception, mapping and localization, sensor fusion, machine learning methods, data collection and processing, path planning, autonomous driving architecture, passenger experience, and vehicle interaction with the outside world. It also discusses the challenges of autonomous driving components (such as power consumption, size, and weight) and of communication and connectivity (vehicle-road collaboration, cloud management platforms), and provides corresponding implementation cases from various autonomous driving companies.

This report is a comprehensive account of autonomous driving hardware and software technology by experts from countries and regions around the world, including the United States, China, Israel, Canada, and the United Kingdom, so that readers can follow the latest technical developments and gain a comprehensive understanding of autonomous vehicles.

Most of the cases in this report come from the automotive field, currently the hottest application scenario in the autonomous driving industry. However, cars serving personal travel are not the only industry on which autonomous driving technology has far-reaching influence; other fields such as public transportation, freight, agriculture, and mining also make wide use of it.

01 Various types of sensors

Self-driving cars use various types of sensors to perceive their environment, much as humans use their eyes; sensors are among the basic components of a self-driving car. There are five main types: 1. Long-range RADAR; 2. Camera; 3. LIDAR; 4. Short/medium-range RADAR; 5. Ultrasonic sensors.

These different sensors are used to perceive objects of different types at different distances, providing the most important source of information for an autonomous vehicle to judge its surroundings. There is one further source of environmental information, vehicle-road coordination, which is also discussed later in this report.

Sensor selection is judged mainly according to the following technical factors (a sketch of how such a trade-off might be scored follows the list):

1. Scanning range, which determines how much time is available to react to a sensed object;

2. Resolution, which determines how much environmental detail the sensor can provide to the autonomous vehicle;

3. Field of view or angular resolution, which determines how many sensors are needed to cover the perceived area;

4. Refresh rate, which determines how often the information from the sensor is updated;

5. Number of objects perceived: the ability to distinguish static and dynamic objects in 3D, which determines how many objects must be tracked;

6. Reliability and accuracy: the overall reliability and accuracy of the sensor in different environments;

7. Cost, size, and software compatibility, which are among the technical preconditions for mass production;

8. The amount of data generated, which determines the computational load on the on-board computing unit. Sensors now trend toward smart sensors that not only perceive but also filter information, transmitting only the data most relevant to driving to the on-board computing unit, thereby reducing its computational load.
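To make these trade-offs concrete, here is a minimal, hypothetical Python sketch that scores candidate sensors against weighted criteria from the list above. The `SensorSpec` fields, example numbers, normalization constants, and weights are all illustrative assumptions, not vendor data.

```python
from dataclasses import dataclass

@dataclass
class SensorSpec:
    name: str
    range_m: float        # scanning range in meters
    resolution: float     # relative resolution score, 0..1
    fov_deg: float        # horizontal field of view in degrees
    refresh_hz: float     # refresh rate in Hz
    reliability: float    # relative reliability score, 0..1
    cost_usd: float       # unit cost in USD

# Illustrative weights per criterion (assumptions, not industry figures).
WEIGHTS = {"range": 0.2, "resolution": 0.2, "fov": 0.15,
           "refresh": 0.15, "reliability": 0.2, "cost": 0.1}

def score(sensor: SensorSpec) -> float:
    """Weighted score; each term is normalized to a rough 0..1 scale."""
    return (WEIGHTS["range"] * min(sensor.range_m / 250.0, 1.0)
            + WEIGHTS["resolution"] * sensor.resolution
            + WEIGHTS["fov"] * min(sensor.fov_deg / 360.0, 1.0)
            + WEIGHTS["refresh"] * min(sensor.refresh_hz / 30.0, 1.0)
            + WEIGHTS["reliability"] * sensor.reliability
            - WEIGHTS["cost"] * min(sensor.cost_usd / 10_000.0, 1.0))

candidates = [
    SensorSpec("long-range radar", 250, 0.3, 20, 20, 0.9, 150),
    SensorSpec("lidar", 120, 0.9, 360, 10, 0.7, 4000),
    SensorSpec("camera", 80, 0.8, 120, 30, 0.6, 50),
]
for s in sorted(candidates, key=score, reverse=True):
    print(f"{s.name}: {score(s):.2f}")
```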

[Figure: schematic of the sensor solutions of Waymo, Volvo-Uber, and Tesla]

Because sensors are exposed to the environment, they are easily contaminated, which degrades their performance; they therefore need to be cleaned.

1. Tesla’s sensors have a heating function that resists frost and fog;

2. Volvo’s sensors are equipped with a water-spray cleaning system to remove dust;

3. The sensors of the Chrysler Pacifica used by Waymo have a water spray system and wipers.

02 SLAM and sensor fusion

SLAM (Simultaneous Localization and Mapping) is a complex process because localization requires a map, while mapping requires a good estimate of the vehicle's location. This fundamental “chicken-or-egg” problem for autonomous robots was studied for a long time before breakthrough research in the 1980s and mid-1990s solved SLAM conceptually and theoretically. Since then, multiple SLAM methods have been developed, most of which use probabilistic concepts.
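As a hedged illustration of the probabilistic idea, the following sketch runs one-dimensional, single-landmark SLAM as a plain Kalman filter: the robot's position and the landmark's position are estimated jointly, which is precisely the coupling described above. All noise values, motions, and the measurement model are made-up assumptions for the example.

```python
import numpy as np

# State: [robot position, landmark position]; estimated jointly, which is
# exactly the "chicken-or-egg" coupling SLAM resolves probabilistically.
x = np.array([0.0, 0.0])            # initial mean
P = np.diag([0.01, 1e6])            # robot well known, landmark unknown
F = np.eye(2)                       # state transition (landmark is static)
B = np.array([1.0, 0.0])            # control moves only the robot
H = np.array([[-1.0, 1.0]])         # measurement: landmark_x - robot_x
Q = np.diag([0.1, 0.0])             # motion noise (assumed)
R = np.array([[0.5]])               # range-sensor noise (assumed)

def step(x, P, u, z):
    # Predict: move the robot by commanded distance u.
    x = F @ x + B * u
    P = F @ P @ F.T + Q
    # Update: fuse the measured distance z to the landmark.
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

true_robot, true_landmark = 0.0, 10.0
rng = np.random.default_rng(0)
for _ in range(20):
    u = 0.5
    true_robot += u
    z = true_landmark - true_robot + rng.normal(0, 0.5)
    x, P = step(x, P, u, np.array([z]))
print("robot ~ %.2f, landmark ~ %.2f" % (x[0], x[1]))
```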

To perform SLAM more accurately, sensor fusion comes into play. Sensor fusion is the process of combining data from multiple sensors and databases to obtain improved information. It is a multi-stage process that handles the association, correlation, and combination of data to achieve cheaper, higher-quality, or more relevant information than any single data source could provide alone.
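A minimal example of the fusion principle, assuming two independent range readings of the same object with known variances (the numbers are invented): inverse-variance weighting yields an estimate with lower uncertainty than either sensor alone.

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent estimates.

    `estimates` is a list of (value, variance) pairs, e.g. distances to
    the same object reported by radar and lidar. The fused variance is
    never larger than the best individual one.
    """
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Hypothetical readings: radar says 24.8 m (var 0.9), lidar 25.3 m (var 0.1).
print(fuse([(24.8, 0.9), (25.3, 0.1)]))  # closer to lidar, tighter variance
```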

For all the processing and decision-making required to get from sensor data to motion, two different AI approaches are commonly used:

1. A sequential approach that decomposes driving into a hierarchical pipeline, where each step (sensing, localization, path planning, motion control) is handled by a specific software element and each component feeds its data to the next;

2. An end-to-end solution based on deep learning that handles all of these functions in a single model.

Which approach is best for autonomous vehicles is an area of constant debate. The traditional and most common approach decomposes the autonomous driving problem into multiple sub-problems and solves each in turn with specialized techniques spanning computer vision, sensor fusion, localization, control theory, and path planning.

End-to-end (e2e) learning, which applies iterative learning to the entire complex system and has gained popularity in the context of deep learning, is attracting increasing attention as a solution to the challenges faced by complex AI systems in autonomous vehicles.
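The structural difference between the two approaches can be sketched as follows. Every function and class name here is hypothetical and the bodies are stubs; the point is only the shape of the two architectures: a chain of specialized modules versus a single learned mapping from raw sensor data to actuation.

```python
import numpy as np

# --- Approach 1: hierarchical pipeline, one module per stage ---
def perceive(raw_sensor_data):            # detect objects, lanes, ...
    return {"obstacles": [], "lane_center": 0.0}

def localize(perception, hd_map=None):    # estimate pose on the map
    return {"pose": (0.0, 0.0, 0.0), **perception}

def plan(world_state):                    # choose a target trajectory
    return {"target_offset": world_state["lane_center"]}

def control(plan_out):                    # turn the plan into actuation
    return {"steering": -0.1 * plan_out["target_offset"], "throttle": 0.3}

def pipeline_drive(raw_sensor_data):
    # Each component feeds its output to the next.
    return control(plan(localize(perceive(raw_sensor_data))))

# --- Approach 2: end-to-end, a single learned mapping ---
class EndToEndPolicy:
    """Stand-in for a trained network mapping pixels to actuation."""
    def __call__(self, raw_sensor_data):
        # A real system would run a deep network here; this stub
        # returns a constant action purely for illustration.
        return {"steering": 0.0, "throttle": 0.3}

frame = np.zeros((66, 200, 3))            # dummy camera frame
print(pipeline_drive(frame))
print(EndToEndPolicy()(frame))
```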

03 Three deep learning methods

Currently, different types of machine learning algorithms are used for different applications in autonomous vehicles. Essentially, machine learning maps a set of inputs to a set of outputs based on a provided set of training data. The three deep learning methods most commonly applied to autonomous driving are: 1. Convolutional Neural Networks (CNN); 2. Recurrent Neural Networks (RNN); 3. Deep Reinforcement Learning (DRL).

CNN – Mainly used to process images and spatial information to extract features of interest and identify objects in the environment. These networks consist of convolutional layers: collections of convolutional filters that try to distinguish elements of an image or of the input data in order to label them. The output of the convolutional layers is fed into an algorithm that combines them to predict the best description of the image. The final software component is often called an object classifier, since it can classify objects in an image, such as street signs or other cars.
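A minimal sketch of such an object classifier, written with PyTorch (an assumption; the report does not name a framework): stacked convolution/pooling layers extract features, and a linear head labels the image crop. Layer sizes and the class count are arbitrary.

```python
import torch
import torch.nn as nn

class TinyObjectClassifier(nn.Module):
    """Convolutional layers extract features; a linear head classifies."""
    def __init__(self, num_classes: int = 3):  # e.g. sign / car / pedestrian
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyObjectClassifier()
logits = model(torch.randn(1, 3, 64, 64))       # one dummy 64x64 RGB crop
print(logits.softmax(dim=-1))                   # class probabilities
```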

RNN – RNNs are powerful tools when dealing with temporal information such as videos. In these networks, the outputs of previous steps are fed into the network as inputs, enabling information and knowledge to persist and be contextualized in the network.
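A minimal PyTorch sketch of the recurrent idea: an LSTM consumes a sequence of per-frame feature vectors (such as CNN outputs over video frames), and its hidden state carries context forward, so a prediction at the last step depends on earlier frames. The feature sizes and the steering-prediction head are illustrative assumptions.

```python
import torch
import torch.nn as nn

# An LSTM over a sequence of per-frame feature vectors; the hidden
# state propagates information from earlier frames to later ones.
rnn = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
head = nn.Linear(64, 1)                  # e.g. predict a steering value

frames = torch.randn(1, 10, 128)         # batch of 1 clip, 10 frames
outputs, (h_n, c_n) = rnn(frames)
steering = head(outputs[:, -1])          # decision from the last time step
print(steering.shape)                    # torch.Size([1, 1])
```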

DRL – Combines Deep Learning (DL) and Reinforcement Learning. The DRL approach lets software-defined “agents” use reward functions to learn optimal actions in a virtual environment in pursuit of their goals. These goal-oriented algorithms learn how to reach a goal, or how to maximize some quantity over many steps. Although promising, the challenge for DRL is designing the right reward function for driving a vehicle; in self-driving cars, deep reinforcement learning is still considered to be in its early stages.
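To illustrate why reward design is the hard part, here is a toy reward function for lane-keeping at a target speed. Every term and coefficient is invented for the example; a real reward would need to balance many more objectives (comfort, traffic rules, and so on).

```python
def driving_reward(speed_mps, target_speed_mps, lane_offset_m, collided):
    """Toy reward: track target speed, stay centered, never collide.

    Terms and coefficients are invented for illustration; designing
    this function well is exactly the difficulty the text mentions.
    """
    if collided:
        return -100.0                                 # collision dominates everything
    speed_term = -abs(speed_mps - target_speed_mps)   # speed tracking penalty
    lane_term = -2.0 * abs(lane_offset_m)             # lane-keeping penalty
    return 1.0 + speed_term + lane_term               # +1 bonus per surviving step

print(driving_reward(12.0, 13.9, 0.2, False))  # small shaped reward
print(driving_reward(12.0, 13.9, 0.2, True))   # collision penalty dominates
```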

These methods do not necessarily exist in isolation. For example, companies such as Tesla rely on hybrid forms that use multiple methods together to improve accuracy and reduce computational requirements.

Training a network on multiple tasks at once is common practice in deep learning, often referred to as multi-task training or auxiliary-task training. One motivation is avoiding overfitting, a common problem with neural networks: when a machine learning algorithm is trained for a single specific task, it can become so focused on imitating its training data that its output becomes unrealistic when it must interpolate or extrapolate.

By training a machine learning algorithm on multiple tasks, the core of the network focuses on discovering general features useful for all the tasks rather than specializing in just one. This can make the output more realistic and useful to the application.
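A minimal PyTorch sketch of multi-task training: one shared backbone feeds two task heads (the tasks, sizes, and the 0.5 loss weight are arbitrary assumptions), and a single combined loss pushes the shared core to learn features useful for both.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared backbone with two heads: the shared core must learn
    features useful for both tasks, the regularizing effect above."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        self.cls_head = nn.Linear(64, 10)     # e.g. 10 object classes
        self.depth_head = nn.Linear(64, 1)    # e.g. scalar depth per sample

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h), self.depth_head(h)

model = MultiTaskNet()
x = torch.randn(8, 128)                       # dummy feature batch
cls_target = torch.randint(0, 10, (8,))
depth_target = torch.randn(8, 1)

cls_logits, depth_pred = model(x)
loss = (nn.functional.cross_entropy(cls_logits, cls_target)
        + 0.5 * nn.functional.mse_loss(depth_pred, depth_target))  # weighted sum
loss.backward()                               # gradients flow into the shared core
print(float(loss))
```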
