Widespread adoption of autonomous vehicles has seemingly been just around the corner for years. But a jittery public, already hesitant to give up control of their vehicles, must be reassured of the technology’s safety after every high-profile autonomous vehicle accident and fatality. New research at George Mason University takes a “holistic” look at autonomous vehicle technology, aiming to fix the vulnerabilities that cause those wrong turns.
Lishan Yang, an assistant professor in the Department of Computer Science, received a three-year, $338,000 grant from the National Science Foundation to study autonomous vehicle safety through what’s called an end-to-end view.
“We are far from bringing autonomous vehicles to everyday life,” she said. “There are too many cases they haven’t mastered. Any component may suffer from faults and failures. What if it misrecognizes a truck as a bird, causing the wrong control decisions?”
Yang’s end-to-end approach examines how the system learns to drive: processing raw input data, such as camera images, all the way through to generating control outputs like steering or acceleration, in one integrated process. This contrasts with traditional systems that break driving down into separate components for perception, planning, and control. End-to-end systems often rely on deep learning models that mimic human driving by directly mapping sensor data to driving actions.
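As a rough illustration of what that mapping looks like in code, the sketch below defines a tiny network that takes a raw camera frame and emits two control values. The architecture, layer sizes, and input resolution are arbitrary assumptions for the example, not details of Yang’s work or of any production system.

```python
import torch
import torch.nn as nn

# A minimal sketch of an end-to-end driving model: raw pixels in, control
# commands out, with no hand-built perception or planning stage in between.
# Every size and layer here is illustrative, not taken from a real system.
class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(           # learned "perception"
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
        )
        self.controller = nn.Sequential(         # learned "planning/control"
            nn.Linear(48 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 2),                    # [steering, acceleration]
        )

    def forward(self, camera_frame):
        return self.controller(self.features(camera_frame))

# One RGB camera frame (batch of 1, 66x200 pixels) maps directly to controls.
frame = torch.rand(1, 3, 66, 200)
steering, acceleration = EndToEndDriver()(frame).squeeze().tolist()
```

The appeal of this design is also its weakness: because the whole chain is one learned model, a fault anywhere inside it may surface only as a wrong steering or acceleration value at the very end.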
“We put the car in different simulation scenarios under control and see if we have a certain failure or error and if that will lead to an accident,” she said. “The camera is capturing an image and the car has to decide, for example, if it can move forward or if it has to stop, and the response time has to be very quick. We don’t want to protect the system with too much ‘overhead,’ because that slows down the response time, even if just a little bit.”
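In code, that kind of experiment looks roughly like the toy campaign below: one braking scenario is replayed many times, a transient fault is injected into the perception stage at a small rate, and each run is checked for an accident. The two-stage pipeline, the labels, and the fault rate are assumptions made for illustration, not details of Yang’s simulations.

```python
import random

# Toy fault-injection campaign: replay one braking scenario many times,
# corrupt the perception stage at a small rate, and count how often the
# corrupted output propagates into an accident. All details are illustrative.

def perceive(true_obstacle, fault_rate):
    """Detector stage: occasionally misrecognizes the obstacle under a fault."""
    if random.random() < fault_rate:
        return "bird"                     # e.g., a truck misrecognized as a bird
    return true_obstacle

def decide(detected):
    """Control stage: brake only for obstacles it believes are dangerous."""
    return "brake" if detected == "truck" else "continue"

def run_scenario(fault_rate):
    detected = perceive("truck", fault_rate)
    return decide(detected) != "brake"    # True: the fault caused an accident

runs = 10_000
faulty = sum(run_scenario(fault_rate=0.01) for _ in range(runs))
clean = sum(run_scenario(fault_rate=0.0) for _ in range(runs))
print(f"accidents: {faulty}/{runs} with faults, {clean}/{runs} without")
```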
Numerous things can cause an autonomous vehicle to react inappropriately, including even sunshine; a single cosmic ray particle can mess with the vehicle’s computers, so designers include protection mechanisms to address that.
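To make that concrete, the snippet below flips one bit in a 32-bit floating-point value, the kind of silent corruption a particle strike can cause in memory. The starting value and the chosen bit are arbitrary; the point is how large the resulting error can be.

```python
import struct

# Flip a single bit in a 32-bit float, mimicking a cosmic-ray bit flip
# in memory. Flipping an exponent bit turns a small, sensible value into
# an astronomically large one.
def flip_bit(value, bit):
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

weight = 0.75
print(flip_bit(weight, 30))   # exponent bit flipped: 0.75 becomes ~2.6e+38
```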
Autonomous vehicles use machine-learning models, systems that can find patterns or make decisions from previously unseen data. But if the model itself goes wrong, it is difficult to pinpoint where an undesired result comes from. Yang said, “We break these complex systems into pieces to study reliability and find vulnerabilities in each component. We connect the components to evaluate the error propagation through the components, related to the environment. And we consider the hardware and software, so it’s a holistic view of the system.”
Once Yang determines when and where autonomous driving systems are most vulnerable, she will explore ways to improve their resilience, focusing on selecting protection strategies that can work within the real-time demands and limited computing resources of the vehicles.
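The tension Yang describes between protection and response time shows up in even the simplest defenses. The sketch below contrasts two hypothetical ones: running the controller twice and comparing the results, which catches transient faults but roughly doubles latency, and a nearly free plausibility check on the outputs, which catches far less. Both the controller and the strategies are illustrative stand-ins, not methods from Yang’s project.

```python
import time

# Two illustrative protection strategies with very different overheads.
def controller(frame):
    """Stand-in for a neural controller returning (steering, acceleration)."""
    time.sleep(0.005)                    # pretend inference takes 5 ms
    return (0.1, 0.4)

def duplicated(frame):
    """Run twice and compare: detects transient faults, doubles latency."""
    a, b = controller(frame), controller(frame)
    if a != b:
        raise RuntimeError("transient fault detected")
    return a

def range_checked(frame):
    """Cheap plausibility check: near-zero overhead, catches fewer faults."""
    steering, accel = controller(frame)
    if not (-1.0 <= steering <= 1.0 and -1.0 <= accel <= 1.0):
        raise RuntimeError("implausible control output")
    return steering, accel

for name, guard in [("duplication", duplicated), ("range check", range_checked)]:
    start = time.perf_counter()
    guard(frame=None)
    print(f"{name}: {(time.perf_counter() - start) * 1000:.1f} ms")
```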
She stressed that even in these early days of autonomous vehicles, accidents are rare. “But given how many autonomous vehicles are going to be around in the future,” she added, “you don’t want a failure to happen in your car.”