Tesla and Uber fatalities show the limits of “semi-autonomous” cars

How can we make humans pay attention when a machine is doing the job for them?

Two weeks ago, a self-driving Uber car struck and killed a pedestrian in Arizona, bringing Uber’s driverless vehicle program to a screeching halt.

Five days later, a Tesla Model X slammed into a concrete barrier in California, killing the sole occupant. On Friday, Tesla confirmed that the vehicle’s “Autopilot” system was engaged at the time of the accident.

Both cases highlight the problems with splitting driving responsibilities between humans and machines.

On Autopilot: Officially, Teslas in Autopilot mode are not self-driving cars. Instead, Autopilot is just supposed to be a very advanced “driver assistance” program (a kind of souped-up cruise control) that only helps drivers navigate, steer, avoid obstacles, park, and maintain (or change) their speed, distance, or lane.

But the headline on Tesla’s website boasts “Full Self-Driving Hardware on All Cars,” and its advertising video shows Autopilot driving around complex city streets with no human direction.

Nonetheless, Tesla reminds drivers that “Every driver is responsible for remaining alert and active when using Autopilot, and must be prepared to take action at any time.” Autopilot also sends visual and audible warnings to the human driver if it detects them taking their hands off the wheel or looking away from the road.

The Tesla Crash: Tesla has released some details about the fatal Model X accident last month, saying that, while Autopilot was on, “The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision.”

A Tesla driver in Chicago tried to recreate the conditions of the accident along a similar stretch of highway, showing how Autopilot can get confused when the left-hand lane line on a highway splits off and becomes the right-hand line of an exit ramp, steering the Tesla directly into the concrete divider between the ramp and the highway.

It’s possible that the Model X suffered a similar error and the driver was not alert enough to react in time.

The Uber Crash: In the Uber fatality, the car was in full self-driving mode at the time of the accident, suggesting that something must have gone wrong with its sensors, which should have detected the approaching pedestrian.

But Uber’s human backup driver also failed to prevent the crash, and the car’s interior video shows the driver repeatedly getting distracted. Ultimately, she was looking down for several crucial seconds before the collision, while the car drove itself into the pedestrian.

Different Tech, Same Problem: While Uber’s car is technically fully autonomous and Tesla’s is officially semi-autonomous, the difference for their human drivers is small and shrinking. In both cases, the human is supposed to be alert and ready to take over at a moment’s notice while the car drives itself.

The problem is that people have a hard time focusing when they aren’t actively engaged with their surroundings, whether their hands are always supposed to be on the wheel (as in a Tesla) or not (as in Uber’s self-driving car).

Former Uber backup drivers have testified about how difficult it is to stay alert when you rarely have anything to do, and the problem of drivers “zoning out” on long, boring trips has been well-documented even in old-fashioned “dumb” cars.

Upshot: We’re in an awkward, in-between phase for self-driving tech, with fully autonomous cars and “driver assistance” programs beginning to converge, without either quite eliminating the need for people just yet.

Each step forward makes humans less and less necessary — the goal, after all, is to replace us — but as we become more redundant, it becomes harder for us to stay focused or recognize the lapses where we need to intervene.

On the road, the margin for error is often slim, and the difference between a car doing the right thing (staying in the left-hand lane) and the wrong thing (following the shoulder into a concrete barrier) isn’t always easy for a driver to anticipate.

But the benefits of fully self-driving cars are enormous: eliminating human error, making DUIs impossible, and giving new freedom to the elderly and disabled. As uncomfortable as it sounds, while the technology matures, we might simply have to ride it out, with both hands on the wheel and both eyes on the road, regardless of who or what is driving.
