Google’s Problems With Its Self-Driving Cars

It's hard to tell which Google project the world is more excited about--Google Glass(es) or Google's self-driving cars--but this frequent driver, anyway, can't wait for the latter.

(The Glasses sound cool, too, don't get me wrong.)

Google is making great progress with these cars: they have now been driven more than 300,000 miles, and there have been no accidents while a car was under the computer's control. (There was a widely publicized fender-bender in 2011, when one of the Google cars crunched into the car ahead of it, but a human was driving at the time.)

One insight riders often quickly have, according to people who have ridden in the cars, is that it's obvious the computer can be a vastly better driver than a human ever could be. With lasers and radar for eyes, the computer can monitor an extraordinary number of inputs and react much more quickly to surprises than a human ever could.

I have always assumed that there would be much excitement around the self-driving cars right up until the time that one killed someone. At that point, I assumed, the years of litigation and liability arguments would make the technology so expensive as to be impractical for normal use.

One hypothesis, however, mentioned to me recently by a person who has ridden in Google's cars, is that in a decade or two your insurance premium will cost more if you don't have self-driving technology than if you do.

Why?

Because the computers will quickly reveal themselves to be far better drivers than humans, especially given that humans often drive while distracted (texting, kids) or impaired (booze, drugs, drowsiness).

In any event...

Google is facing a few interesting challenges with the cars right now, one of which I heard about from someone close to the company. (The Google self-driving-car team mentioned some of these in passing in a blog post.)

The first challenge is driving in snow.

When snow is on the road, the cars often have a tough time "seeing" the lane markers and other cues that they use to stay correctly positioned on the road. It will be interesting to see how the Google team sorts that one out.
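For the technically inclined, here's a minimal sketch of the kind of confidence-based fallback a lane-keeping system might use: when snow makes the painted lines unreadable, stop trusting them and fall back to weaker cues or a human handoff. To be clear, Google hasn't published its internals; the `LaneDetection` type, the threshold, and the fallback policy here are all my own illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LaneDetection:
    """One frame's lane-marker estimate from the perception stack (hypothetical)."""
    center_offset_m: float  # estimated lateral offset from lane center, in meters
    confidence: float       # 0.0 (markers invisible, e.g. under snow) to 1.0 (clear paint)

# Illustrative threshold: below this, lane markers are treated as unreliable.
MIN_LANE_CONFIDENCE = 0.6

def steering_source(detection: LaneDetection) -> str:
    """Decide which cue the lane-keeping controller should trust this frame."""
    if detection.confidence >= MIN_LANE_CONFIDENCE:
        return "lane_markers"  # normal case: steer to the painted lines
    # Snow-covered road: markers unreadable, so fall back to weaker cues
    # (curbs, barriers, tracks of the car ahead) or hand off to the human.
    return "fallback_or_human_handoff"

print(steering_source(LaneDetection(center_offset_m=0.1, confidence=0.95)))  # lane_markers
print(steering_source(LaneDetection(center_offset_m=0.1, confidence=0.20)))  # fallback_or_human_handoff
```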

A second challenge, apparently, arises when the car encounters a change in a road that is not yet reflected in its onboard "map." In those situations, the car can presumably get lost, just the way a human can.
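One way a car might notice that its map has gone stale is to continuously check what the sensors see against what the map predicts, and stop trusting the map when the two disagree. Again, this is a toy sketch under my own assumptions (the features compared and the tolerance are invented), not a description of Google's system.

```python
# Hypothetical map-vs-sensor consistency check; the real system's logic isn't public.

def map_matches_sensors(map_lane_count: int, sensed_lane_count: int,
                        map_heading_deg: float, sensed_heading_deg: float,
                        heading_tolerance_deg: float = 10.0) -> bool:
    """Return True if the live sensor picture is consistent with the stored map."""
    if map_lane_count != sensed_lane_count:
        return False  # e.g., a lane was added or closed since the map was built
    return abs(map_heading_deg - sensed_heading_deg) <= heading_tolerance_deg

# If the check fails, one sensible policy is to slow down and treat the road as
# unmapped rather than follow a stale map--much as a lost human driver would.
```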

A third challenge is driving through construction zones, accident zones, or other situations in which a human is directing traffic with hand signals. The cars are excellent at observing stop signs, traffic lights, speed limits, the behavior of other cars, and other common cues that human drivers use to figure out how fast to go and where and when to turn. But when a human is directing traffic with hand signals--and especially when these hand signals conflict with a traffic light or stop sign--the cars get confused.

(Imagine pulling up to an intersection in which a police officer is temporarily directing traffic and overriding a traffic light. What should the car pay attention to? How should the car be "taught" to give the police officer's hand signals more weight than the traffic light? How should the car interpret the hand signals, which are often different from person to person? And what if the cop is just pointing at you and yelling, which happens frequently in intersections in New York?)
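One way to frame that question is as an explicit priority ordering over cue sources, with a recognized human traffic director outranking the fixed signals. The sketch below shows the structure of the idea; the cue names and rankings are my own assumptions, not how Google actually solves it--and of course the genuinely hard part is the perception: reliably recognizing that a human is directing traffic and decoding ambiguous gestures at all.

```python
from enum import IntEnum

class CueSource(IntEnum):
    """Who is telling the car what to do, ranked by authority (illustrative)."""
    SPEED_LIMIT_SIGN = 1
    STOP_SIGN = 2
    TRAFFIC_LIGHT = 3
    HUMAN_DIRECTOR = 4  # a police officer or flagger overrides the fixed signals

def resolve(cues: dict[CueSource, str]) -> str:
    """Obey the instruction from the highest-authority cue present."""
    return cues[max(cues)]  # max() over the keys picks the highest-ranked source

# A cop waving traffic through against a red light:
cues = {CueSource.TRAFFIC_LIGHT: "stop", CueSource.HUMAN_DIRECTOR: "proceed"}
print(resolve(cues))  # "proceed" -- the officer outranks the light
```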

According to an engineer (not a Googler) who was involved in the conversation I had about this latter challenge, none of these problems are insurmountable. But they're certainly interesting.

The engineer's view, for what it's worth, is that self-driving technology will enter cars gradually, first for use in certain special and limited situations--highway driving, for example, in a form of augmented cruise control. Then, eventually, after these baby steps have been mastered, the technology will progress toward the fully automated electronic chauffeur that Google is working on.