Humans have been playing Go, the Chinese board game, for more than 2,000 years. So you’d think every move would have been explored and studied hundreds or thousands of times. But in a 2016 match against reigning world champion Lee Sedol, something weird happened.
Sedol was competing against Google’s AlphaGo, an AI program. And in Move 37, the computer made what one observer later called “a move that no human ever would.” Sedol was so flummoxed he left the room temporarily, only to return and lose the match.
Unexpected moves are the hallmark of AI. Supercomputers crunch through terabytes of data to uncover patterns no human could ever detect, then use those insights to make decisions no human could ever predict.
Reassessing Accountability
That development is creating a problem in tort law. One of the fundamental principles of American tort law is that liability for someone else’s injury is linked to foreseeability: If a reasonable person could anticipate something bad happening as a result of their behavior, they should be held responsible for the outcome. By holding that person accountable, legal scholars say, tort law accomplishes the twin goals of compensating victims and discouraging bad behavior.
So how are courts supposed to deal with a human who lets a computer do the deciding? Take self-driving cars. Experts agree self-driving technology has the potential to save tens of thousands of lives a year, because nearly every accident is caused by human error. Some even suggest the most dangerous technology is Level 3 autonomous driving, because it relies on potentially distracted or sleepy drivers to take back control from the computer—just in time to cause an otherwise avoidable accident.
The problem for courts is deciding who’s to blame if the computer messes up. Ryan Calo of the University of Washington School of Law dreamed up a gruesome scenario in which engineers design a hybrid automobile with an AI system programmed to make the car as efficient as possible. After trying many alternatives, the artificial brain determines the hybrid is most efficient if it begins each day with a fully charged battery, so it tells the car to run its gasoline engine overnight to charge. Unfortunately, the car is parked in a garage, and the exhaust kills the entire family with carbon monoxide poisoning.
Machine-Made Mistakes
Who’s to blame for this tragedy? The engineers knew a driverless car might get in an accident. But they “did not in their wildest nightmares imagine it would kill people through carbon monoxide poisoning,” Calo wrote in an influential 2018 article. The whole reason humans use AI is to come up with solutions they couldn’t think of themselves. But behind that lies what another scholar called “a layer of complex, often inscrutable, computation that substitutes statistics for individualized reasoning.”
Since the entire edifice of American tort law is based upon the idea of a reasonable person predicting the future, AI represents a paradox. The only decision a human really makes is whether to trust the AI. And if that decision is reasonable—think of the tens of thousands of people who might be spared from dying in human-caused car accidents—can a jury decide it was unreasonable because the AI did something utterly unpredictable?
The traditional solution when machines fail is to file a product-liability lawsuit. But that may be difficult with AI, since the technology relies on machine learning, which means it adapts its outputs as new data flows in. How can you argue a machine had a defective design when the “design” was the result of complex, inscrutable calculations performed after it left the factory?
There are no easy answers to these questions. But given the ingenuity of U.S. plaintiffs’ lawyers, the one thing that can be predicted with ease is that they will figure out a way to get at the deep pockets behind AI. No machine is going to win that game.