Self-driving Uber fatality: Video shows tech failure & human error

A self-driving car killed a pedestrian for the first time this week, when an Uber test vehicle in Arizona struck a woman crossing a highway at night. Many commenters rushed to blame Uber, the human backup driver, or the pedestrian, or dismissed the whole incident as a non-story. Now, video released by local police suggests that both the human driver and Uber’s self-driving system failed that night.

The Video: The Uber had two cameras: one outside, facing forward, and one inside, recording the driver. On Monday, after reviewing the footage, the police chief told the media that the pedestrian had come suddenly “from the shadows right into the roadway,” so that “it would have been difficult to avoid,” and that, although “the Uber would likely not be at fault,” she “won’t rule out” charging the driver.

Tech Failure: The actual video doesn’t look good for Uber. The pedestrian does seem to appear suddenly in the darkness: less than two seconds pass between the headlights first picking up the pedestrian’s legs (already in the vehicle’s lane) and the collision itself. But that shouldn’t matter to a driverless car. Self-driving vehicles use radar and lidar (laser) sensors, which don’t depend on visible light, to “see” what’s in front of them. The Uber car should have easily detected the pedestrian and her bike, even before they were directly in its path, and reacted to avoid them.
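
To make that concrete, here’s a minimal sketch, in Python with invented numbers and names (this is not Uber’s actual code), of the kind of check a lidar-equipped car runs continuously: is anything inside the lane corridor ahead closer than the distance the car needs to stop?

```python
# Toy sketch of lidar-based obstacle detection; all values are illustrative.
from dataclasses import dataclass

@dataclass
class LidarPoint:
    x: float  # meters ahead of the vehicle
    y: float  # meters left (+) or right (-) of the vehicle's centerline

def stopping_distance(speed_mps: float, decel_mps2: float = 7.0,
                      reaction_s: float = 0.5) -> float:
    """Distance needed to stop: reaction-time travel plus braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def obstacle_in_path(points: list[LidarPoint], speed_mps: float,
                     lane_half_width: float = 1.8) -> bool:
    """True if any lidar return sits in the lane corridor within stopping range."""
    limit = stopping_distance(speed_mps)
    return any(0 < p.x <= limit and abs(p.y) <= lane_half_width
               for p in points)

# At ~17 m/s (about 38 mph), stopping takes roughly 29 m, so a pedestrian
# 25 m ahead and 1 m off-center should trigger emergency braking.
print(obstacle_in_path([LidarPoint(25.0, 1.0)], speed_mps=17.0))  # True
```

Even this toy version flags a pedestrian 25 meters ahead in total darkness, because lidar returns don’t depend on headlights at all.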

Detecting a large obstacle directly ahead is one of the most basic tasks in self-driving technology, but, somehow, the system failed. It’s not yet clear whether the problem was in the hardware or the software, but it’s a huge black mark for Uber. Under Arizona law, Uber could face criminal charges if the investigation determines it acted negligently.

Human Error: The backup driver also failed to prevent the accident, and the interior video shows her repeatedly getting distracted, scanning the road for a second and then looking down again. For several seconds before the collision, the driver was looking down at the car’s center console. While the police chief might be right that an alert driver could not have stopped in time, Uber’s backup driver wasn’t in a position to see it coming anyway.

Several YouTubers have since filmed the same stretch of road at night with different cameras, and their videos show the road looking much brighter than it does in Uber’s footage, suggesting that an attentive driver would have had much more time to react.

Video of the crash scene shot with a better camera (bottom) shows that the pedestrian was far more visible than shown in Uber’s video (top). Uber / Brian Kaufman

Either way, loss of focus is a documented problem for drivers in autonomous and semi-autonomous vehicles — after all, in the long term, the goal is to make it so that we don’t have to pay attention at all. But it doesn’t do much good to have a backup driver if they aren’t engaged with their surroundings.

Solutions: If a technical failure was to blame for this accident, that’s bad news for Uber as a company. It might even turn people off driverless cars in general for a while. But it could turn out to be a good thing for driverless cars as a whole, because technical problems can be solved: software can be updated, redundant sensors can be installed, and diagnostic programs can run to make sure that the car only drives when everything is operating correctly.
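
As a rough illustration of that last point, here’s a hedged sketch of a pre-drive diagnostic gate; the subsystem names and interface are hypothetical, not any manufacturer’s real API:

```python
# Hypothetical pre-drive gate: refuse to engage self-driving mode unless
# every required subsystem reports healthy. Names are invented for this sketch.
REQUIRED = ("lidar", "radar", "front_camera", "braking", "steering")

def autonomy_allowed(checks: dict[str, bool]) -> bool:
    """Allow autonomous mode only if every required subsystem passed its check."""
    failed = [name for name in REQUIRED if not checks.get(name, False)]
    if failed:
        print("Autonomy disabled; failed checks:", ", ".join(failed))
        return False
    return True

# One degraded sensor keeps the car in manual mode until it is fixed.
print(autonomy_allowed({"lidar": False, "radar": True, "front_camera": True,
                        "braking": True, "steering": True}))  # False
```

The design choice here is deliberate conservatism: any single failed check keeps the car out of self-driving mode entirely.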

Human error is a more challenging design problem: it’s tough to integrate people into a system that is designed to make them obsolete. Still, many new cars already come with camera systems that can warn drivers if their eyes close or stray from the road for too long. Augmented reality or heads-up displays could also be used to keep drivers awake and engaged when the car is in self-driving mode.
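
Here’s a minimal sketch of how such an attention monitor might work, assuming a gaze-tracking camera that reports, frame by frame, whether the driver’s eyes are on the road; the frame rate and alert threshold are invented for illustration:

```python
# Toy driver-attention monitor: alert when the driver's gaze has been off
# the road for too long. Threshold and frame rate are assumptions.
ALERT_AFTER_FRAMES = 20  # ~2 seconds at an assumed 10 frames per second

def monitor(eyes_on_road: list[bool]) -> list[int]:
    """Return the frame indices at which an inattention alert would fire."""
    alerts, off_frames = [], 0
    for i, on_road in enumerate(eyes_on_road):
        off_frames = 0 if on_road else off_frames + 1
        if off_frames >= ALERT_AFTER_FRAMES:
            alerts.append(i)
            off_frames = 0  # reset after alerting, then keep watching
    return alerts

# 1 second attentive, then 3 seconds looking away: one alert fires at frame 29.
print(monitor([True] * 10 + [False] * 30))  # [29]
```

A real system would also escalate (chimes, seat vibration, slowing the car), but the core logic is just a timer that resets whenever the driver looks back at the road.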

Someday, everything on a driverless car will work exactly as intended, and someone will be hurt anyway. How we handle that will be a much bigger and more difficult social challenge, one that can’t be addressed with a software patch. If we’re not ready to deal with it yet (and it’s not clear we are), perhaps it’s a good thing that we can still focus on basic testing, technical adjustments, and regulations.

The Bottom Line: It’s hard to interpret a single data point. It could be that Uber is unlucky, or it could be that driverless cars aren’t actually safer than human drivers yet. Either way, the technology can’t really improve without a lot more testing in real-world conditions, and that means tolerating some level of risk, even if it turns out to be higher than that of the average driver today. The industry will have to figure out how to earn back the public’s trust, or else we will continue to suffer tens of thousands of avoidable human-caused accidents while the technology to prevent them languishes in the scientific garage.
