Self-Driving Cars: The 5 Biggest Roadblocks
By Tim Scargill
The development of self-driving cars has accelerated dramatically over the past decade, and that reality is drawing ever closer. While intermediate levels of autonomy are being pursued by some manufacturers (notably Tesla), the challenges involved in ensuring a human driver re-engages have led other companies to concentrate their efforts on vehicles that do not require a human driver at all. Whichever path proves more successful in the end, proponents of both approaches aim to have fully autonomous cars on the road by 2020.
That may sound like an ambitious timeline, especially given the new legislation that will need to be introduced. Even putting aside those complications, some significant technical challenges remain before self-driving cars are ready to hit our roads. Here are the five biggest roadblocks on the way to fully autonomous vehicles, and how the industry is looking to overcome them.
Accurate Navigation
Self-driving cars require a combination of GPS and sensor data to determine where they are on a map, which in turn also helps them to detect obstacles better. However, achieving highly accurate localization is difficult in areas with a poor GPS signal, such as wooded, mountainous or (significantly) urban areas with tall buildings. Proposed improvements like employing more satellites in the Global Navigation Satellite System (GNSS) can help, but do not solve the issue.
That sensor data, coming from radar and cameras, therefore has a big role to play. The problem is that computer vision technology is still evolving, and distance calculation remains a challenge, particularly in low lighting. The most effective technology, and the one used in most prototypes, is LIDAR, which can achieve sufficient range and accuracy even in the dark. But those systems are currently very expensive, and the potential for interference between vehicles is a big challenge for researchers at the moment.
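The distance measurement at the heart of LIDAR is conceptually simple: a pulse of light is emitted, and the round-trip time of its reflection gives the range. A minimal sketch of that calculation (the timing value below is invented for illustration, not taken from any real sensor):

```python
# Illustrative time-of-flight range calculation, the principle behind LIDAR.
# The pulse timing below is an invented example value.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the target: the pulse travels out and back,
    so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection arriving 200 nanoseconds after emission
# corresponds to a target roughly 30 metres away.
distance_m = range_from_time_of_flight(200e-9)
```

The engineering difficulty is not this arithmetic but timing reflections to nanosecond precision, and telling your own pulses apart from those of nearby vehicles, which is the interference problem mentioned above.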
The likelihood is that the first fully autonomous cars will use a variety of different sensor technologies, from computer vision to radar and LIDAR. How quickly each field progresses will determine the balance employed, while the sophistication of sensor technologies as a whole will gradually reduce the reliance on high-resolution mapping. Bosch and TomTom are even working on a system that constantly captures radar data from the surroundings and uses that to update cloud-based maps, further improving localization.
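The "balance" between sensor sources described above is, at its core, a weighted fusion problem: each source is trusted in proportion to its accuracy. A heavily simplified one-dimensional sketch of the idea behind Kalman-filter-style localization (the positions and noise figures here are assumptions for illustration, not real specifications):

```python
def fuse_estimates(gps_pos: float, gps_var: float,
                   odom_pos: float, odom_var: float) -> tuple:
    """Combine two noisy position estimates by inverse-variance
    weighting -- the core idea behind Kalman-filter localization.
    A less certain source (larger variance) gets less weight."""
    w_gps = 1.0 / gps_var
    w_odom = 1.0 / odom_var
    fused_pos = (w_gps * gps_pos + w_odom * odom_pos) / (w_gps + w_odom)
    fused_var = 1.0 / (w_gps + w_odom)
    return fused_pos, fused_var

# In an urban canyon the GPS variance is high, so the fused estimate
# leans towards the on-board odometry/LIDAR reading.
pos, var = fuse_estimates(gps_pos=105.0, gps_var=25.0,
                          odom_pos=100.0, odom_var=1.0)
```

Note that the fused variance is smaller than either input's, which is why combining even a degraded GPS fix with on-board sensors still improves localization.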
Variable Road Conditions
Closely related to navigation is how vehicles cope with unfavorable, changeable road conditions; of the many test miles logged so far, only a small percentage have been driven in such conditions. One of the biggest factors affecting sensors is the weather; even LIDAR systems, which cope well in low lighting, can have problems in heavy rain and snow. Road markings may also be unusable simply due to erosion and general wear and tear, so how will a car deal with that? More testing focused on this is needed, and in the future we may see highway infrastructure change to accommodate the needs of autonomous navigation systems.
Perhaps more challenging still is when cars face unpredictable situations. If there is an object in the road, does it need to be avoided or is it just a plastic bag that can be driven over? Is that a puddle or a pothole? Those kinds of decisions require advanced computer vision, as does spotting the irregular forms of cyclists and skateboarders. When we as human drivers encounter broken or obscured traffic signals, intuition enables us to decide how to proceed with relative ease, but evaluating all the relevant visual cues is extremely complex for an AI system. And while human drivers are on the road, those who break the rules will pose similar problems.
Communication
One of the technologies that could help is vehicle-to-vehicle (V2V) communication, which will allow cars to share information about current road conditions. It will be a vital tool in preventing accidents because cars will also broadcast their current position and speed, so surrounding vehicles can alter their course if necessary - crucially providing that information much earlier than a sensor would. Even non-autonomous cars can be connected and warn a human driver of a possible collision. And while transmitting and storing all of that data is not a simple task, the technology is not far from being ready for deployment.
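As a sketch of why broadcast position and speed matter: with both pieces of information, a receiving car can estimate time-to-collision directly, rather than waiting for its own sensors to register the hazard. The message fields and warning threshold below are illustrative assumptions, not taken from any real V2V standard:

```python
from dataclasses import dataclass

@dataclass
class V2VMessage:
    """Illustrative V2V broadcast: position along the road (metres)
    and speed (m/s). Real message sets carry far more fields."""
    position_m: float
    speed_mps: float

def collision_warning(me: V2VMessage, ahead: V2VMessage,
                      warn_seconds: float = 3.0) -> bool:
    """Warn if, at current speeds, we would close the gap to the
    vehicle ahead within `warn_seconds` (threshold is an assumption)."""
    gap = ahead.position_m - me.position_m
    closing_speed = me.speed_mps - ahead.speed_mps
    if gap <= 0 or closing_speed <= 0:
        return gap <= 0  # already overlapping, or not closing at all
    return gap / closing_speed < warn_seconds

# A stopped vehicle 50 m ahead while we travel at 20 m/s:
# time to collision is 2.5 s, inside the 3 s threshold.
alert = collision_warning(V2VMessage(0.0, 20.0), V2VMessage(50.0, 0.0))
```

The same check works even when the stopped vehicle is around a blind bend, which is exactly the "earlier than a sensor" advantage described above.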
A much deeper problem arises, however, when we consider communication between vehicles and humans. Powered by machine learning, computer vision can be trained to recognize hand signals used by cyclists, but at intersections human drivers sometimes rely on subtle body language cues which are more difficult to detect. The same goes for pedestrians and judging their intentions, particularly in busy urban areas. Systems are being developed which would allow pedestrians to stop driverless cars with a standard gesture - but would that give them too much control, increasing travel times to an unacceptable level? Proposed solutions range from advanced detection systems to enforcing jaywalking laws more strictly, perhaps with the car involved even taking a picture of the offender for facial recognition. Many see it inevitably leading to highly segregated traffic flow, in which case it will affect the whole layout of our cities.
Ethical Dilemmas
Autonomous vehicles do have the potential to reduce accidents greatly, but accidents will still occur, particularly while the technology and infrastructure are still developing and human drivers remain on the road. And if some kind of collision is inevitable, whose safety should be prioritized? Manufacturers might sell cars on the basis that it would be the driver's, but hitting a pedestrian would leave them open to all kinds of litigation, and erode public trust in the whole industry.
Google has indicated that their cars will try hardest to avoid unprotected road users, and in the future vehicles could potentially even identify and factor in the age and gender of those involved. By 2030 21% of the US population will be over 65, putting strain on healthcare providers; so it is not inconceivable that manufacturers could be put under pressure to protect the most vulnerable.
It is worth noting that many within the field dispute the need to venture too deep into philosophical territory, arguing that action will be determined by 'neutral' algorithms, and that we should instead focus on the general level of risk that is acceptable. But perhaps they miss the point; as a prerequisite to accepting driverless cars, the public will demand to know the answers to these kinds of questions. Although incidents will be rare, the moral code (explicit or implicit) that is embedded in algorithms must be clear.
Security
From reducing accidents to improving productivity on the commute, the introduction of connected cars will have many benefits, and it is estimated that by 2020, 250 million vehicles will be web-enabled. But there are serious concerns that those desirable features (entertainment systems, navigation, tire-pressure sensors etc.) provide multiple entry points for hackers looking to take control of a car. While manufacturers have made efforts to block access from those areas to the main computing system (the controller area network, or CAN), researchers have already shown that it is far from impossible to breach. Companies can respond by rolling out software security patches, but the challenge of ensuring they are applied to cars already on the road is a big one.
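Part of applying those patches safely is making sure a car installs only firmware that genuinely came from the manufacturer. A minimal sketch of such a check using a hash-based message authentication code (HMAC); real over-the-air update systems typically use public-key signatures instead, and the key and payload here are invented for illustration:

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned at manufacture.
# Real OTA systems would favour asymmetric signatures over a shared key.
VEHICLE_KEY = b"example-secret-key"

def sign_update(firmware: bytes) -> bytes:
    """Manufacturer side: attach an authentication tag to the image."""
    return hmac.new(VEHICLE_KEY, firmware, hashlib.sha256).digest()

def verify_update(firmware: bytes, tag: bytes) -> bool:
    """Vehicle side: refuse to install unless the tag matches.
    compare_digest avoids leaking information via timing."""
    expected = hmac.new(VEHICLE_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

image = b"firmware v2.1 (illustrative payload)"
tag = sign_update(image)
ok = verify_update(image, tag)           # untampered image is accepted
bad = verify_update(image + b"!", tag)   # modified image is rejected
```

Verification like this only helps, of course, if the update actually reaches the vehicle, which is why the deployment problem described above remains the harder half of the challenge.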
And if that connected car is an autonomous vehicle, then clearly the risks are multiplied - the possibility that someone could take full control while you are inside is a terrifying one. Even without a passenger present, thieves might be able to drive your car away, or a terrorist could remotely hijack a vehicle. The vehicle-to-vehicle communication that appears so crucial to the success of autonomous vehicles actually makes them more vulnerable, by again opening up millions more possible access points. What happens when we connect vehicles to infrastructure as well? While these systems are being developed we still have time to act, but we cannot afford for security to be an afterthought; it must not be layered on top of existing solutions, but be a fundamental part of the design process instead.