Is autonomous driving safe? A recent study by a research team concluded that it is not.
A team of researchers at the University of Illinois at Urbana-Champaign analyzed all safety reports submitted by self-driving companies from 2014 to 2017 (covering 144 self-driving cars with 1,116,605 cumulative miles) and reached a sobering conclusion: for the same mileage, a self-driving car is 4,000 times more likely than a human-driven car to be involved in an accident. A fault-assessment technology the team developed for autonomous driving found 561 critical safety faults in just four hours of testing Baidu Apollo 3.0 and NVIDIA's proprietary autonomous driving system DriveAV!
The research team is committed to using artificial intelligence and machine learning to improve the safety of autonomous driving technology through software and hardware improvements.
“Using AI to improve autonomous vehicles is very difficult due to the complexity of the vehicle’s electrical and mechanical components, as well as changes in external conditions such as weather, road conditions, terrain, traffic patterns, and lighting,” said Ravishankar K. Iyer, a professor at the University of Illinois’ Coordinated Science Laboratory (CSL). “Right now we are making progress, but safety remains a major issue.”
The research team is currently developing techniques and tools to identify the driving conditions and faults that affect the safety of autonomous vehicles. Their technology can uncover large numbers of safety-critical scenarios in which a small error could lead to disaster, saving enormous amounts of testing time and money.
In tests of Baidu Apollo 3.0 and NVIDIA DriveAV, DriveFI, the fault-injection engine developed by the team, found more than 500 such software problems within four hours.
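The idea behind a fault-injection campaign of this kind can be sketched in a few lines. Everything below is a hypothetical illustration — DriveFI targets real AV software, and its fault model and interfaces are not public; here a toy controller stands in for the driving stack, and single bit-flips in a sensor reading stand in for faults:

```python
import random

def controller(distance_m, speed_mps):
    """Toy AV controller: brake if time-to-collision drops below 2 s."""
    ttc = distance_m / max(speed_mps, 0.1)
    return "BRAKE" if ttc < 2.0 else "CRUISE"

def inject_bit_flip(value, bit):
    """Flip one bit in the integer encoding of a sensor reading."""
    return value ^ (1 << bit)

def run_campaign(trials=1000, seed=42):
    """Count injected faults that suppress a needed braking action."""
    rng = random.Random(seed)
    critical = 0
    for _ in range(trials):
        distance = rng.randint(5, 200)        # true obstacle distance, metres
        speed = rng.uniform(10, 30)           # vehicle speed, m/s
        golden = controller(distance, speed)  # fault-free decision
        faulty_distance = inject_bit_flip(distance, rng.randrange(16))
        faulty = controller(faulty_distance, speed)
        # Safety-critical outcome: the fault masked a required brake.
        if golden == "BRAKE" and faulty != "BRAKE":
            critical += 1
    return critical
```

A real engine like DriveFI goes far beyond brute-force flipping: it uses machine learning to steer injections toward the faults most likely to be safety-critical, which is how it can surface hundreds of them in hours rather than years.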
Findings like these have brought the team’s work to the attention of the industry. The team is patenting its test technology and plans to deploy it soon. Ideally, the researchers want companies to use the new technology to simulate the identified problems and fix them before their cars are deployed.

Self-driving accidents are 4,000 times more likely than with human drivers, and safety assessments face challenges.
“Our team is tackling some of these challenges,” said Saurabh Jha, a PhD student in computer science who leads the project. “Solving them requires a multidisciplinary effort spanning science, technology, and manufacturing.”
Why is this work so challenging? Because autonomous driving is a complex system that uses AI and machine learning to integrate mechanical, electronic, and computational technologies to make real-time driving decisions. A typical self-driving system is like a small supercomputer on wheels, with more than 50 processors and accelerators running more than 100 million lines of code to support computer vision, planning, and other machine-learning tasks.
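The pipeline described above — sensors feeding computer vision, which feeds planning, which feeds real-time control — can be sketched as a simple loop. This is a deliberately toy version (the stage names and the two-line “planner” are invented for illustration; a production stack runs dozens of such stages across many processors):

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    camera: bytes  # raw image data
    lidar: list    # point cloud of (x, y, z) tuples

def perceive(frame):
    """Stand-in for the computer-vision stage: pick out obstacle points."""
    return [(x, y) for (x, y, z) in frame.lidar if z < 2.0]

def plan(obstacles, speed_mps):
    """Stand-in for the planner: cap speed as obstacles get closer."""
    nearest = min((x for x, _ in obstacles), default=float("inf"))
    return min(speed_mps, nearest / 2.0)

def control_step(frame, speed_mps):
    """One iteration of the perception -> planning -> actuation loop."""
    obstacles = perceive(frame)
    return plan(obstacles, speed_mps)
```

The point of the sketch is the coupling: an error anywhere in the chain — a corrupted sensor frame, a missed detection, a bad plan — propagates directly into the commanded vehicle behavior.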
Both the sensors and the autonomous driving stack (the computing software and hardware) in these vehicles can fail. When a car is traveling on the highway at 70 miles per hour, such a failure can be a significant safety hazard.
“If the driver of a conventional car feels a problem such as vehicle drift or pull, he or she can adjust his or her behavior and steer the vehicle to a safe stop,” explains Jha. “In contrast, it is unpredictable how a self-driving car will respond unless it has been trained on these problems — and in the real world, there are countless examples of them.”
When most people encounter a software problem on their computer or smartphone, the most common response is to shut down and restart. That approach is not an option for a self-driving car: every millisecond of delay affects the outcome, and a slightly slower response can be fatal. Safety concerns about such AI-based systems have grown over the past few years as accidents involving autonomous driving have accumulated.
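One common alternative to restarting is a real-time watchdog: give each software stage a strict time budget and, when it misses its deadline, fall back to a predefined safe action instead of rebooting. A minimal sketch, assuming illustrative names and a 50 ms budget (not any particular AV system's design):

```python
import concurrent.futures
import time

SAFE_ACTION = "CONTROLLED_STOP"
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def with_deadline(stage, frame, budget_s=0.05):
    """Run one pipeline stage under a hard deadline.

    If the stage overruns its budget, return a safe fallback action
    immediately rather than stalling (or restarting) the vehicle.
    """
    future = _pool.submit(stage, frame)
    try:
        return future.result(timeout=budget_s)
    except concurrent.futures.TimeoutError:
        return SAFE_ACTION

def healthy_stage(frame):
    return "CRUISE"

def hung_stage(frame):
    time.sleep(1.0)  # simulates a stalled software component
    return "CRUISE"
```

For example, `with_deadline(hung_stage, frame)` returns the fallback action after 50 ms instead of waiting for the stalled stage; the healthy stage's result passes through untouched.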
“Current regulations require companies like Uber and Waymo that test self-driving cars on public roads to report annually to the California Department of Motor Vehicles (DMV) on the safety of their vehicles,” said Subho Banerjee, a graduate student in CSL and computer science. “We want to understand what the common safety issues are, how well the cars perform, and what the ideal safety standard is, in order to judge whether a self-driving system is designed well enough.”