Self-driving Uber fatality: Video shows tech failure & human error

This week, a self-driving car killed a pedestrian for the first time, when an Uber test vehicle in Arizona struck a woman crossing a highway at night. Many commenters rushed to blame Uber, the human backup driver, or the pedestrian, or dismissed the whole incident as a non-story. Now, video released by local police suggests that both the human driver and Uber’s self-driving system failed that night.

The Video: The Uber had two cameras: one outside, facing forward, and one inside, recording the driver. On Monday, after reviewing the footage, the police chief told the media that the pedestrian had come suddenly “from the shadows right into the roadway,” so that “it would have been difficult to avoid.” She said that while “the Uber would likely not be at fault,” she “won’t rule out” charging the driver.

Tech Failure: The actual video doesn’t look good for Uber. The pedestrian does seem to appear suddenly in the darkness: less than two seconds pass between the headlights first picking up the pedestrian’s legs (already in the vehicle’s lane) and the collision itself. But that shouldn’t matter to a driverless car. Self-driving vehicles use radar and lidar (laser) sensors, which don’t depend on visible light, to “see” what’s in front of them. The Uber car should have easily detected the pedestrian and her bike, even before they were directly in its path, and reacted to avoid them.
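
To make that concrete, here is a minimal Python sketch of the kind of check a perception stack runs: take range detections (from lidar or radar, neither of which needs visible light) and brake if anything is in the lane inside stopping distance. Everything here, the names, thresholds, and numbers, is invented for illustration; it is not Uber’s actual code.

```python
# Illustrative sketch only: a toy obstacle check of the kind a
# self-driving perception stack performs. All names and thresholds
# are hypothetical, not Uber's actual system.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "lidar" or "radar"; neither depends on visible light
    distance_m: float  # range to the object, in meters
    lateral_m: float   # offset from the vehicle's centerline, in meters

def must_brake(detections, speed_mps, lane_half_width_m=1.8,
               reaction_s=0.5, decel_mps2=6.0):
    """Return True if any detection is in-lane and inside stopping distance."""
    # Stopping distance = reaction distance + braking distance (v^2 / 2a).
    stopping_m = speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)
    for d in detections:
        in_lane = abs(d.lateral_m) <= lane_half_width_m
        if in_lane and d.distance_m <= stopping_m:
            return True
    return False

# A pedestrian 25 m ahead, centered in the lane, seen by lidar at night:
frame = [Detection("lidar", distance_m=25.0, lateral_m=0.2)]
print(must_brake(frame, speed_mps=17.0))  # ~38 mph -> True: brake now
```

The point of the sketch is that darkness never enters the calculation: lidar and radar returns arrive the same way at noon or at midnight, which is why the failure is so troubling.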

Detecting a pedestrian directly ahead is one of the most basic problems in self-driving technology, but, somehow, the system failed. It’s not clear whether the failure was in the hardware, the software, or just a one-off glitch, but it’s a huge black mark for Uber. Under Arizona law, Uber could face criminal charges if the investigation determines the company acted negligently.

Human Error: The backup driver also failed to prevent the accident. The interior video shows her repeatedly getting distracted, scanning the road for a second and then looking down again, and for several seconds before the collision she was looking down at the car’s center console. The police chief might be right that even an alert driver could not have stopped in time, but Uber’s backup driver wasn’t in a position to see the pedestrian coming anyway.

Several YouTubers have since filmed the same stretch of road at night with different cameras, and their videos show the road as much brighter than it appears in Uber’s footage, suggesting that an attentive driver would have had more time to react.

Either way, loss of focus is a documented problem for drivers of autonomous and semi-autonomous vehicles; after all, the long-term goal is to eliminate the need to pay attention at all. But a backup driver doesn’t do much good if they aren’t engaged with their surroundings.

Solutions: If tech failure is to blame for this accident, that will be bad for Uber as a company, and it might even turn people off from driverless cars in general for a while. But it could turn out to be a good thing for driverless cars as a whole, because technical problems can be solved: software can be updated, redundant sensors can be installed, and diagnostic programs can be run to make sure that the car only drives when everything is operating correctly.
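
As a sketch of that last idea, the “only drive when everything checks out” gate, here is a hypothetical pre-drive diagnostic in Python. The subsystem names and pass criteria are made up for illustration.

```python
# Hypothetical pre-drive diagnostic gate: the vehicle refuses to enter
# autonomous mode unless every critical subsystem reports healthy.
# Component names and checks are invented for illustration.

def run_diagnostics(status: dict) -> list:
    """Return the list of failed checks; an empty list means cleared to drive."""
    required = ["lidar", "radar", "cameras", "braking", "software_version"]
    return [name for name in required if not status.get(name, False)]

status = {"lidar": True, "radar": False, "cameras": True,
          "braking": True, "software_version": True}

failures = run_diagnostics(status)
if failures:
    print("Autonomous mode disabled, failed checks:", failures)
else:
    print("All systems healthy, autonomous mode permitted")
```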

Human error is a more challenging design problem; it’s tough to integrate people into a system that is designed to make them obsolete. But many new cars already come with camera systems that can warn drivers if their eyes close or look away from the road for too long, and augmented-reality or heads-up displays could also be used to keep drivers awake and engaged when the car is in self-driving mode.
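
The monitoring idea is simple enough to sketch. Real systems estimate gaze from an in-cabin camera; this toy Python version takes gaze states as given and alerts once the eyes have been off the road past a threshold (the two-second limit is an invented example value).

```python
# Toy driver-attention monitor of the kind described above. Real systems
# infer gaze from camera imagery; here gaze states are supplied directly,
# and the 2-second threshold is a hypothetical example value.

ALERT_AFTER_S = 2.0  # invented limit on eyes-off-road time

def check_attention(gaze_samples, dt_s=0.5):
    """gaze_samples: sequence of booleans, True = eyes on road."""
    off_road = 0.0
    for on_road in gaze_samples:
        off_road = 0.0 if on_road else off_road + dt_s
        if off_road >= ALERT_AFTER_S:
            return "ALERT: eyes off road too long"
    return "ok"

# Driver glances at the road, then looks down at the console for 2+ seconds:
print(check_attention([True, False, False, False, False]))  # triggers alert
```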

Someday, everything on a driverless car will work exactly as intended, and someone will be hurt anyway. How we handle that will be a much bigger and more difficult social challenge, one that can’t be addressed with a software patch. If we’re not ready to deal with it yet (and it’s not clear we are), perhaps it’s a good thing that we can still focus on basic testing, technical adjustments, and regulations.

The Bottom Line: It’s hard to interpret a single data point. It could be that Uber is unlucky, or it could be that driverless cars aren’t actually safer than human drivers yet. Either way, the technology can’t really improve without a lot more testing in real-world conditions, and that means tolerating some level of risk, even if it turns out to be higher than that posed by the average driver today. The industry will have to figure out how to earn back the public’s trust, or we will continue to suffer tens of thousands of avoidable human-caused accidents while the technology to prevent them languishes in the lab.
