Self-driving Uber car incident: more questions

If you take an interest in self-driving cars, then you probably already know about the incident involving an Uber car that killed a pedestrian. Although the investigation is still underway, several interesting facts have already been published by the National Transportation Safety Board (NTSB), the US transportation safety agency. The agency’s experts have established that the car’s sensors detected the pedestrian on the road six seconds before the accident – more than enough time to avoid the collision. However, the emergency braking system did not engage because it had been disabled. This was done specifically to reduce the potential for “erratic vehicle behavior”. It turns out that if an obstacle appears on the road while the car is driving in autonomous mode, the human safety driver behind the wheel has to apply the brakes. Unfortunately, that did not happen in this case. Apparently, Uber’s self-driving technology at its current stage of development cannot be described as an autopilot – it is a driver-assistance system, not a genuinely self-driving one.

Now, let’s talk about the “erratic behavior” the Uber engineers were so keen to prevent. As we wrote above, the self-driving system detected an obstacle on the road six seconds before impact. From the car’s point of view, the detection sequence was as follows: the woman walking her bicycle across the road was first identified as an unknown object, then as a vehicle, and then as a bicycle. The system only decided that emergency braking was necessary 1.3 seconds before the collision – and, as noted above, emergency braking itself had been deliberately disabled to make the ride less stressful for people traveling in the car. The system was somehow supposed to inform the human driver that emergency braking was necessary. To us, that logic seems very strange. We can see why it might not be appropriate to apply emergency braking every time an unknown object appears on the road. However, if a vehicle or a bicycle is detected in front of the car, surely avoiding a collision is absolutely essential! Anyone who has ever taken driving lessons will remember their driving instructor warning them (and road rules stipulating clearly) that when an unknown object, or a potentially dangerous situation, appears on the road, you need to be more attentive and reduce your speed so you can stop and avoid a collision. Why was this logic not implemented in the Uber car?
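To make the point concrete, here is a minimal, purely illustrative sketch of the conservative rule we are describing: slow down for anything unidentified, brake for anything confirmed in the car’s path. The class names, thresholds and interface are our own assumptions for the sake of the example, not Uber’s actual code.

```python
# Hypothetical sketch of a conservative reaction policy for an object
# detected in the vehicle's path. All names and numbers are illustrative.

from enum import Enum, auto


class Detection(Enum):
    UNKNOWN = auto()
    VEHICLE = auto()
    BICYCLE = auto()
    PEDESTRIAN = auto()


def plan_reaction(detection: Detection, time_to_collision_s: float) -> str:
    """Return a driving action for an object detected ahead of the car."""
    if detection is Detection.UNKNOWN:
        # "Be more attentive and reduce your speed": slow down while the
        # classifier refines its verdict, instead of doing nothing.
        return "reduce_speed"
    # A vehicle, bicycle or pedestrian ahead means a collision must be avoided.
    if time_to_collision_s < 2.0:  # illustrative threshold
        return "emergency_brake"
    return "brake_smoothly"


# The object is still 'unknown' six seconds out -> the car already slows down.
print(plan_reaction(Detection.UNKNOWN, 6.0))   # reduce_speed
print(plan_reaction(Detection.BICYCLE, 1.3))   # emergency_brake
```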

We can only guess at how exactly the Uber self-driving system was implemented, but it is very likely based on machine learning algorithms. When we talk about the system detecting a vehicle, we really mean how confidently it identifies an object as a vehicle – in other words, how good the system is at classifying what it sees. Perhaps the system was only a few percent of confidence away from correctly identifying the object on the road and braking earlier. If the system is not sufficiently sure something is in front of it (i.e. the probability is below a certain threshold value), it may decide that it’s a false alarm and no action is needed. If the system receives some extra data and decides (again with a certain level of probability) that a vehicle is in front of it, then it needs to perform specific actions. The challenge for the driving system’s programmers was to set the right probability thresholds at which the system definitively identifies an obstacle in front of the car and, consequently, brakes. And while choosing threshold values is a relatively tractable problem, machine learning algorithms bring a harder one: once training is complete, it is often difficult to know why the system has made the decision it has made. This is the problem with complex systems based on machine learning. And that’s not all – machine learning systems can also be fooled rather easily.
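The following toy example shows how sensitive such a scheme is to the chosen cut-off: the same detection a few percent below or above the threshold leads to opposite decisions. The numbers, class names and threshold here are invented purely for illustration; they have nothing to do with Uber’s real system.

```python
# Simplified sketch of threshold-based decision making on classifier output.
# A hand-picked cut-off decides whether a detection is acted on or dismissed
# as a false alarm. All values are invented for illustration.

ACTION_THRESHOLD = 0.80  # act only if the top class is at least this probable


def decide(probabilities: dict) -> str:
    """Pick the most probable class and act only if it clears the threshold."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < ACTION_THRESHOLD:
        # "Only a few percent away" from the threshold still means no action.
        return f"ignore ({label} at {confidence:.0%} is below threshold)"
    return f"brake for {label} ({confidence:.0%})"


# The same object a few percent below / above the cut-off flips the decision.
print(decide({"bicycle": 0.78, "vehicle": 0.15, "unknown": 0.07}))
print(decide({"bicycle": 0.82, "vehicle": 0.12, "unknown": 0.06}))
```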

Autonomous vehicles now use a variety of computer vision systems to detect road signs, road markings, obstacles and the overall traffic situation. Most of these systems are based on machine learning algorithms.

One of the seminal works on deceiving such algorithms, Intriguing properties of neural networks, was published in 2014. It presents examples of how computer vision systems can be deceived by adding small, unobtrusive ‘noise’ to an image, causing the algorithm to change its verdict. This research was continued in another publication, Explaining and Harnessing Adversarial Examples, published a little later. And the line of research does not end there: the article Robust Physical-World Attacks on Deep Learning Visual Classification investigates how some machine vision systems can be deceived by placing a few black and white stickers on a road sign – that is all it takes for some systems to stop recognizing it.
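To give an idea of how simple such an attack can be, below is a minimal sketch of the fast gradient sign method described in Explaining and Harnessing Adversarial Examples, written in Python with PyTorch. The classifier and the images are placeholders, and this is only an outline of the idea, not the exact code from the paper.

```python
# Minimal sketch of the Fast Gradient Sign Method (FGSM): a small perturbation
# in the direction of the loss gradient can flip a classifier's verdict while
# being nearly invisible to a human. The model and image are placeholders.

import torch
import torch.nn.functional as F


def fgsm_attack(model: torch.nn.Module, image: torch.Tensor,
                true_label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss the most, then keep the
    # result a valid image in the [0, 1] range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()


# Usage (assumes a pretrained classifier and an image batch scaled to [0, 1]):
# adv = fgsm_attack(classifier, batch_of_images, labels, epsilon=0.01)
# The classifier's predictions on `adv` often differ from those on the
# originals, even though a human sees no meaningful change.
```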

The situation surrounding the use of machine learning algorithms today is reminiscent of the early days of the internet, when hardly anyone thought about information security – a neglect that later led to major security problems in older software. A similar situation may unfold with machine learning algorithms: models created without sufficient attention to security requirements could spread fast and become the basis for entire platforms, opening up new attack vectors against information systems.

Of course, these problems with how the algorithms operate are of a technical nature, and we have no doubt they will be resolved sooner or later. However, there is another major problem related to this accident that cannot be solved by writing a few lines of code, and that is the problem of responsibility. If a human driver is behind the wheel, the responsibility lies either with the driver or, if traffic regulations were violated, with the pedestrian; alternatively, the manufacturer can be held responsible in the event of a vehicle malfunction. But who is to blame if a completely autonomous car is involved in an accident? There is no human at the wheel, and let’s assume that any pedestrians involved are innocent and the car was functioning properly. In that case, the only remaining party that could be held responsible is the developer of the autonomous driving system.

This leads to a second question: who should pay the insurance? Definitely not the owner of the car, as they cannot influence the way the self-driving car acts, or alter the algorithms of the vehicle or, more precisely, of its autonomous driving system. It follows that it is again the system developer who bears responsibility. Of course, the car owner will be responsible if the car is stolen or if there is damage to the car or to the environment, but the developer should be responsible for everything that happens to the self-driving car while it is in use. Eventually, widespread acceptance of self-driving cars is likely to seriously affect the insurance business – insurers will have to bear responsibility for incidents involving self-driving vehicles that result from system malfunction. We don’t currently sign a contract with the manufacturer when we buy a new car, but when self-driving cars become more common, a large chunk of the responsibility will be regulated by a contract signed when the car is purchased. It’s a calculated world we live in.

Already today, we can safely say that autopilot systems will become an indispensable component of our cars, though it’s difficult to say exactly when this will happen. Optimists say that self-driving cars will appear on our roads en masse within a year or two; we do not share their optimism, because a number of problems remain unaddressed, some of which are covered in this article. For our part, we are making every effort to ensure that the cars of the future (and of the present) are not only functionally safe but also cybersecure. Cars can be trained to act the way the trainer deems necessary, but there will always be those who look for ways to manipulate them, and not always with the best of intentions.