Once, when the economy had tanked (again) and the only work my dad could find was the late shift at a faraway factory, he began writing a science fiction story. He never bothered to give it an ending (or even a beginning, really). The story simply consisted of a lovingly detailed description of a service available in a not-too-distant future: tunnels where drivers could park on a conveyor belt and sleep while the conveyor belt sped them to their destination. Soothing music was available for an extra charge. He called it “The Z Way.”

In lieu of the Z Way, my friends in far-flung neighborhoods used more accessible technology. One friend rolled her hair up in the driver’s side window, so that if she fell asleep while driving home late at night, the pain of having her hair yanked out by its roots would wake her up. Regardless of the measures taken (coffee, speed, energy drinks), just about everyone I knew had a few stories like this under their belt.

I think about this when I read about self-driving cars — particularly now that the National Highway Traffic Safety Administration (NHTSA) has indicated that it is willing to consider the computer that pilots Google’s self-driving cars as “the driver.”


This doesn’t mean that a Google self-driving car is street legal, by a long shot. But it is a big deal. Under precedent-setting vehicular law like the 1949 Geneva Convention on Road Traffic, self-driving cars only have a chance at passing legal muster if “any vehicle or combination thereof” has a driver inside who is able to step in and take control at any moment. Now, all those champagne dreams put forward by people like Elon Musk, of cars that can drive themselves from New York to L.A. (maybe bringing some decent bagels while they’re at it), could actually come true.


Now, it’s not necessarily a bad idea to insist that human passengers should always be able to override the digital autopilot. For one thing, this would protect the manufacturers and programmers of self-driving cars from a certain degree of liability — if something goes wrong, just blame the humans. For another thing, self-driving cars have been deployed on the flat, sunny, low-pedestrian, low-chaos streets of places like Silicon Valley and Austin, Texas. Once they get out into the big bad world at large, it’s likely that they are going to run across situations that they can’t handle. Like fog. Or snow, which about 70 percent of the U.S. population has to live with for part of the year.

But Google’s “Computer: Knows best. Humans: Not to be trusted” philosophy also has a point. For one thing, the resulting system is potentially much more liberating for people who wouldn’t ordinarily be able to drive — blind people, elderly people. It’s also more exciting: What’s the point of even having a self-driving car if you can’t call it and have it drive through town to find you, as if you were Batman in the middle of a very important fight scene? Plus, it could theoretically drive your kids to soccer practice (though it couldn’t glare over its shoulder to break up their fights).

And, in much the same way that automatic transmissions enable a lot of bad habits that manual transmissions prevent (like texting while driving), autonomous vehicles have their own quirks. For instance, they put people to sleep. Of the 48 students Stanford put into a self-driving car and told to monitor the vehicle and take over in case of an emergency, 13 began to nod off. The number went down to three if the drivers were given a tablet to read or a movie to watch, but that’s still not great when you’re talking about piloting a large hunk of metal that could kill people. Even the most alert people needed at least five seconds to take control of the car when the time came to avert an imaginary crisis.

So it makes sense that Google would decide it was safer to eliminate humans from the situation altogether and declare that the only thing that a human driver can be trusted with is a “stop” and “start” button. Chuck the steering wheel, chuck the pedals — put the computer in charge of everything. It’s also possible that completely autonomous vehicles might be limited to certain regions of the country: Robot taxis in San Jose, but the same old cranky human kind in New York City.


If your self-driving car is going to require intermittent human intervention, the soporific quality of self-driving mode will demand an entire protocol for keeping the human part of the system alert (noises, vibrating seats). And it will have to manage this without annoying that human so much that they disable the system or refuse to buy the car in the first place. In 2017, Cadillac plans to deploy the CTS Super Cruise System, offering the first vehicles on the market capable of hands-free driving at highway speeds; that system takes a kind of passive-aggressive approach — the less you respond to its attempts to get your attention, the more the car slows down.

In other words, self-driving cars aren’t going to be perfect. The computer-driven ones will make computer mistakes — which will seem so weird and alien that it will be difficult for us, as humans, to anticipate them. (Several of the autonomous car accidents that Google blamed on human drivers were actually the result of human drivers on the road being confused by the Google car’s behavior. There are social cues around driving that are difficult to program for.) The human-driven ones will make human mistakes and computer mistakes. Everyone will blame everyone else. It will be tremendously interesting.

And the truth is that, as far as cars and safety are concerned, the bar is pretty low. In the U.S., car accidents kill about 32,000 people every year (which is actually a huge improvement over the ’70s, when 50,000 a year was more the norm). A self-driving car doesn’t have to be perfect — it just has to be better than what we have already.