The Fault In Our Cars
It won’t be long—a few years, maybe ten—before the algorithm begins killing us.
As self-driving cars enter the market, eventually overtaking human-operated ones, there will inevitably be fatal car accidents for which computer code is responsible.
Self-driving cars rely on deep neural networks to operate. This means that humans haven’t encoded every plausible scenario into the car’s code, telling it what to do in any given situation in any place and time. Rather, while programmers can influence the car’s sense of which outcomes are favorable, the car develops its own way of making decisions based on what it observes through its numerous sensors and instruments. Over time, the algorithm builds itself, and the programmers themselves have no way of fully comprehending the car’s decision-making process. They just know that it works. And if it doesn’t, they flag the output as wrong, and the algorithm adjusts itself to avoid the bad outcome.
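None of this is any carmaker’s real code, but the feedback loop described above can be sketched in a few lines of Python. In this purely illustrative toy, a hypothetical “brake or don’t brake” model maps two made-up, normalized sensor readings (distance to an obstacle and closing speed) to a decision; when an output is flagged as wrong, the weights shift toward the flagged answer:

```python
# Toy illustration of learning from flagged outcomes (not real automotive code).
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=3)          # the "decision rule" the model builds for itself

def decide(sensors):
    """Return the model's probability that the car should brake."""
    x = np.append(sensors, 1.0)       # add a bias term
    return 1.0 / (1.0 + np.exp(-x @ weights))

def learn(sensors, correct_action, lr=0.5):
    """Nudge the weights after a human flags what the output should have been."""
    global weights
    x = np.append(sensors, 1.0)
    error = decide(sensors) - correct_action
    weights -= lr * error * x         # step toward the flagged answer

# Made-up examples: [distance to obstacle, closing speed] -> should it brake?
examples = [
    ([0.10, 0.90], 1.0),              # close and closing fast: brake
    ([0.20, 0.70], 1.0),
    ([0.90, 0.10], 0.0),              # far away and slow: don't brake
    ([0.80, 0.05], 0.0),
]
for _ in range(2000):                 # "enough trials, tests, and refinements"
    for sensors, correct in examples:
        learn(np.array(sensors), correct)

print(decide(np.array([0.15, 0.80]))) # a near, fast-closing obstacle: probability near 1
```

A real driving system uses deep networks with millions of such weights and far richer sensor inputs, which is exactly why no programmer can read the final decision rule off the code.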
After enough iterations of this “machine learning”—after enough trials, tests, and refinements—the carmakers, and eventually the federal government, will decide that what we have is good enough for society. The end result will be much, much safer than human drivers. But it won’t be perfect—some accidents will remain unavoidable.
Consider a dilemma that self-driving car manufacturers are facing: assuming that it’s too late for any other option, should a car hit a jaywalker, or should it swerve out of the way and potentially kill its driver? Should the safety of the driver, who bought the vehicle, come first? Should the illegality of jaywalking be factored into the computer’s split-second decision? Should it be a 50-50 chance? Should a car kill one person to save many? (This recalls the infamous Trolley Problem from your intro ethics course.)
These are the questions auto manufacturers and government regulators must face. Society as a whole, however, has to grapple with a new reality: an era of increasing machine autonomy. However the algorithm decides to act, people will die, and there will be no accountability or justice.
Can you really blame the coders, who laid the groundwork for an algorithm much safer than a human driver, when the algorithm unpredictably fails? Can you blame the owner of the car, who was not in control? The coders cannot have any ethical culpability, as they tested the algorithm and found it safer than humans. And what about the car companies? They could be legally liable if their car is deemed at fault, but how challenging would this be to prove?
We must confront this inevitability now, head-on, before it becomes reality. Tragedies caused by self-driving cars will claim lives, and while the unlucky few may never get a sense of “justice” or “closure,” we will have no choice but to write off this lack of accountability as the cost of increased public safety.
To be clear, around 100 people die in car accidents in the United States every day. A large chunk of these are caused by distracted driving, drunk driving, or other forms of human error. Self-driving cars don’t text or drink. They don’t fall asleep at the wheel. And on top of this, they can use radar and other technology to detect hazards down the road that an attentive human driver could not.
Because of this technology, fewer people will die. But for those who do die, the experience their families face will be different; instead of confronting a responsible driver in court, they might not even get a court date. In the event of a nonfatal injury, the injured party will have a similar experience.
Self-driving cars will malfunction, sometimes with fatal results. The specter of dying via computer glitch—a miscalculation of the road ahead, a somehow undetected deer—is uniquely disturbing. There’s no clearly culpable individual to throw in jail. Nobody did anything wrong. And, worst of all, it could happen anywhere, at any time, with no way for the occupant to try to avoid it.
Indeed, getting in a self-driving car is a surrender of agency to a machine. Even though we are worse at driving than we think, we take comfort in the idea of being in control. Even though a self-driving car never gets tired, has no blind spots, and is much more precise than a human, when it comes to that split-second choice, the car will make a decision, and it will be controversial.
Auto travel of the future—the near future—will be overwhelmingly safer than it is today. But this safety will come at a psychological cost. That will be our burden to bear, whether we’re ready or not.
Sam Klein ’18 studies in the College of Arts & Sciences. He can be reached at klein.s@wustl.edu.