Challenges that self-driving cars still have to overcome
1) Creating (and maintaining) maps for self-driving cars is difficult work
First, a quick clarification: Lots of car companies, from GM to BMW to Tesla to Uber, are working on various species of autonomous technology. Some of this is partial autonomy, as with Honda’s Civic LX, a car now on the market that can stay within its lane. But I’m mostly going to focus on full autonomy — cars that don’t need drivers at all. And right now, Google seems to be the furthest along with that technology:
Google’s self-driving cars work by relying on a combination of detailed pre-made maps as well as sensors that “see” obstacles on the road in real time. Both systems are crucial and they work in tandem.
This is a time-intensive process, but Google thinks it’s the best way forward. The idea is that building the map ahead of time can free up processing power for the car’s software to be “alert” while puttering around autonomously. The car uses the map as a reference and then deploys its sensors to look out for other vehicles, pedestrians, as well as any new objects that weren’t on the map, such as unexpected signs or construction.
Before Google can test a self-driving car in any new city or town, its employees first manually drive the vehicles all over the streets and build a rich, detailed 3-D map of the area using the rotating Lidar sensor on the car’s roof. The sensor sends out laser pulses to gauge its surroundings, and the people on Google’s mapping team then pore over the data to categorize different features such as intersections, driveways, or fire hydrants.
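The division of labor described above — a detailed prior map plus live sensors that flag anything new — can be sketched in a few lines. To be clear, this is a toy illustration, not Google’s actual pipeline: the map format, the coordinates, and the `find_new_objects` function are all invented for the example.

```python
# Toy illustration of map-plus-sensor perception: the car carries a prior
# map of known static features, and anything in the live scan that isn't
# on the map gets flagged as a new object to reason about in real time.

# Hypothetical prior map: grid coordinates -> known static feature.
known_map = {(10, 4): "fire hydrant", (12, 0): "lane marking", (15, 8): "sign"}

def find_new_objects(live_scan, prior_map):
    """Return scan points with no corresponding feature in the prior map."""
    return [point for point in live_scan if point not in prior_map]

# A live scan sees two known features plus one point not on the map.
live_scan = [(10, 4), (12, 0), (11, 2)]
print(find_new_objects(live_scan, known_map))  # -> [(11, 2)]
```

The point of pre-building the map is visible even in the toy: at drive time the car only has to classify the leftover points, not re-interpret the whole scene from scratch.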
Olson points out that relying on this mapping system will pose some major challenges. Right now, Google has only built detailed 3-D maps for a relatively limited number of test areas, like Mountain View. For self-driving cars to go mainstream, Google would have to build and maintain detailed maps all over the country — across 4 million miles of public roads — and update them constantly. After all, roads change a lot: Researchers at Oxford University recently tracked a single 6-mile stretch of road in England over the course of a year and found its features were constantly shifting. One rotary along the path was moved three times.
2) Driving requires many complex social interactions — which are still tough for robots
A far more difficult hurdle, meanwhile, is the fact that driving is an intensely social process that frequently involves intricate interactions with other drivers, cyclists, and pedestrians. In many of those situations, humans rely on generalized intelligence and common sense that robots still very much lack.
Much of the testing that Google has been doing over the years has involved “training” the cars’ software to recognize various thorny situations that pop up on the roads.
3) Bad weather makes everything trickier
Compounding these difficulties is the fact that weather still poses a major challenge for self-driving vehicles. Much like our eyes, car sensors don’t work as well in fog or rain or snow. What’s more, companies are currently testing cars in locations with benign climates, like Mountain View, California — and not, say, up in the Colorado Rockies.
Olson classifies this as a real, but lesser, hurdle. “Weather adds to the difficulty, but it’s not a fundamental challenge,” he says. “Also, even if you had a car that only worked in fair weather, that’s still enormously valuable. I suspect it might take longer to overcome weather challenges, but I don’t think this will derail the technology.”
Urmson took a similar view in his SXSW talk: “This technology is almost certainly going to come out incrementally,” he said. “We imagine we are going to find places where the weather is good, where the roads are easy to drive — the technology might come there first. And then once we have confidence with that, we will move to more challenging locations.”
4) We may have to design regulations before we know how safe self-driving cars really are
Another big obstacle for self-driving cars isn’t technical — it’s political. Before self-driving cars can hit the roads, regulators are going to have to approve them for use. One thing they’re going to want to ask is: How safe are these things, anyway?
And here’s the tricky part: We probably won’t know!
Kalra laid this all out in a recent paper for RAND. As noted above, drivers in the United States currently get into fatal accidents at a rate of about one for every 100 million miles driven. Ideally, we’d want self-driving cars to be at least that safe. But it’s unlikely we’ll be able to prove that any time soon. Google only drove its cars 1.3 million miles total between 2009 and 2015 — not nearly enough to draw rigorous statistical conclusions about safety. It would take many decades to drive the hundreds of millions of miles needed to prove safety.
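A quick back-of-the-envelope calculation shows why the numbers are so daunting. Assuming fatal crashes arrive like a Poisson process (a standard simplification, and mine rather than anything from Kalra’s paper), you can ask how many fatality-free miles a fleet would have to log before you could claim, at 95 percent confidence, that its fatality rate is no worse than the human benchmark of one per 100 million miles:

```python
import math

# The human benchmark cited in the article: about 1 fatality per 100M miles.
target_rate = 1 / 100_000_000  # fatalities per mile

# Under a Poisson model with rate lam, the chance of seeing zero fatalities
# in m miles is exp(-lam * m). To rule out lam >= target_rate at 95%
# confidence after a fatality-free run, that chance must drop to 5% or less:
#   exp(-target_rate * m) <= 0.05  =>  m >= ln(20) / target_rate
confidence = 0.95
miles_needed = math.log(1 / (1 - confidence)) / target_rate

print(f"Fatality-free miles needed: {miles_needed:,.0f}")
```

That works out to roughly 300 million fatality-free miles — a couple of hundred times more than the 1.3 million miles Google had driven by 2015, which is exactly why Kalra expects regulators to be deciding under uncertainty.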
“My hunch is that by the time automakers are ready to sell these things, we still won’t know how safe they are,” says Kalra. “We’re going to have to make these decisions under uncertainty.”
5) Cyber security will likely be an issue — though a surmountable one
“Another issue is cybersecurity,” says Kalra. “How do you make sure these cars can’t be hacked? As vehicles get smarter and more connected, there are more ways to get into them and disrupt what they’re doing.”
This shouldn’t be impossible to fix. Software companies have been dealing with this issue for a long time. But as Vox’s Timothy Lee has written, it will likely require a culture change in the auto industry, which hasn’t traditionally worried much about cybersecurity issues.
Olson raises a related issue: Many car enthusiasts already modify their own vehicles to improve performance. What happens if they do this for self-driving cars and inadvertently compromise the computers’ decision-making ability? “Just as an example, someone puts on oversized wheels that distort the car’s sense of how fast it’s going,” he notes. “It’s hard to stop anyone from doing that.”
Olson points out this could be a particular challenge if the auto industry tries to develop systems that enable different vehicles to talk to each other on the road (say, to make merging easier). “The whole premise of using V2V [vehicle-to-vehicle communication] for safety is that if you get a message to slam on the brakes, you better be able to trust that message. But securing that system could be extremely difficult.” Again, not fatal. But something to ponder.