What’s the point of self-driving cars if you still have to drive them?
Earlier this month, a Tesla Model X ping-ponged between the guardrails on the Pennsylvania Turnpike before flipping over and screeching to a stop. Oops, the car’s driver told the Pennsylvania State Police when they arrived on the scene: he’d put the car on Autopilot.
The driver should have known better. The day before that crash, Tesla announced the first known fatality involving a self-driving vehicle — a man named Joshua Brown, who may have been watching a Harry Potter movie when his Tesla Model S was confused by sunlight bouncing off a very large truck and drove straight into it.
Tesla reacted with a eulogy as tender as you might expect from a company that had just inadvertently killed its biggest fan. (Brown nicknamed his car “Tessie” and posted enthusiastic videos to YouTube of the vehicle routing itself around obstacles.) “The customer who died,” Tesla wrote on its company blog, “was a friend to Tesla and the broader EV community, a person who spent his life focused on innovation and the promise of technology.” Still, Tesla continued, the Autopilot feature is meant to be used with both hands on the wheel and eyes on the road.
That raises the question: What’s the freaking point? What’s Autopilot for if it doesn’t actually, you know, auto-pilot? Elon Musk told us that a Tesla will be able to drive itself from New York to Los Angeles by 2018, but it’s 2016 and I can’t even drink a beer in one? What about the company’s other promises? Maybe the “bioweapon defense mode” on the Model X doesn’t work so great either?
But, as Tesla points out, its Autopilot-equipped cars still kill way fewer people than the average U.S. car: only one fatality per 130 million miles of Autopilot-enabled travel, versus one per 82 million miles of regular car travel in 2015.
After all, driving is not something humans do particularly well. We get distracted, bored, tired, road-rage-prone, texting-addicted, and tipsy. But as bad as we are at driving, we get even worse when a computer is steering for us. A small Stanford study found that of 48 students put in a self-driving car and told to stay alert in case of an emergency, 13 started to fall asleep.
Tesla’s fatality rate is based on a sample size of one, and there have been several other Autopilot-related accidents that didn’t result in any deaths, as Tesla told the Wall Street Journal.
Behind the wheel in one of those accidents was Arianna Simpson, a San Francisco venture capitalist whose Tesla crashed into a stopped car hidden in moving highway traffic. Tesla informed Simpson that the crash was her fault: after Autopilot’s “collision imminent” alarm sounded, she hit the brakes, which put her back in control of the car.
“If you don’t brake, it’s your fault because you weren’t paying attention,” Simpson told the Journal. “And if you do brake, it’s your fault because you were driving.” The experience has made her nervous around Autopilot: “When I have a bug on my app, it crashes. When I have a bug on my car, people die.”
If that sounds like a catch-22 to you, here’s something else to think about. You know trains? Those things that you don’t have to drive yourself? That you can legally drink a beer inside of while watching Harry Potter? Trains kill 0.43 people per billion miles of travel. Subways and light rail only manage to kill 0.24 people per billion miles. So if you want to brag about whose wheels offer the biggest net benefit to society, Tesla, get in line.
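For what it’s worth, those rates aren’t quoted in the same units, so the comparison is easier to see if you convert everything to deaths per billion miles. Here’s a quick back-of-the-envelope conversion using only the figures cited above (a rough sketch, not an actuarial analysis):

```python
# Convert the fatality rates cited in this piece to a common unit:
# deaths per billion miles traveled.

BILLION = 1e9

rates = {
    "Tesla Autopilot (1 death / 130M miles)": 1 / 130e6 * BILLION,
    "U.S. car travel, 2015 (1 death / 82M miles)": 1 / 82e6 * BILLION,
    "Trains": 0.43,
    "Subways and light rail": 0.24,
}

# Print from safest to deadliest.
for mode, deaths in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{mode}: {deaths:.2f} deaths per billion miles")
```

By that math, Autopilot comes out around 7.7 deaths per billion miles and ordinary driving around 12.2, both more than an order of magnitude worse than rail.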