VR for self-driving cars makes training safer, more efficient

The vehicles think they’re on public roads, when they’re really in parking lots.

Software that acts like a virtual reality (VR) headset for self-driving cars could make training autonomous vehicles faster and safer.

The challenge: Aside from freeing us from tedious commutes, self-driving cars could make our roads much safer by eliminating human error as a cause of accidents.

Before that can happen, though, the AIs trained in computer simulations need to be able to safely operate actual vehicles, adapting to countless variables, including construction, pedestrians, erratic human drivers, inclement weather, and other road hazards.

Typically, developers do this by testing their self-driving cars on public roads, with safety backup drivers behind the wheel. But they need to log a lot of miles to expose the vehicles to enough “edge cases” — the kinds of rare situations that can throw the software off — before they can be sure the cars are safe enough to be fully deployed.

More importantly, if self-driving cars make mistakes during on-road training, they can potentially destroy property or even take lives.

What’s new? Researchers at the Ohio State University (OSU) have now unveiled a new method for training self-driving cars that works like virtual reality for autonomous vehicles (AVs), making the AIs “think” the car is in one place when it’s actually in another.

A developer could use the tech to make the AV believe it’s approaching a busy intersection, for example, when it’s really just driving around an empty lot. The key feature here, which makes it different from a pure simulation, is that the system is operating a real, physical car, while virtual obstacles can be safely thrown its way.

“The [Vehicle-in-Virtual-Environment (VVE)] method can work with any AV simulator and virtual environment rendering software as long as these can be run in real time and can generate the raw sensor data required by the actual AV computing system,” they write in a study, published in Sensors.

Looking ahead: The OSU researchers used a real self-driving car to demonstrate the viability of the VVE method for the Sensors study. They’ve now filed a patent for the tech behind it, which they believe could become a “staple” of the AV industry in the next 5 to 10 years. 

“With our software, we’re able to make the vehicle think that it’s driving on actual roads while actually operating on a large open, safe test area,” said study co-author Bilin Aksun-Guvenc. “This ability saves time, money, and there is no risk of fatal traffic accidents.”
