What Worries A Calif. Supreme Court Justice About AI And The Law?

Publish Date: July 19, 2017
Source: The Recorder

There are many complicated issues that undoubtedly weigh on the mind of California Supreme Court Justice Mariano-Florentino Cuéllar. But the notion that super-intelligent robots might one day destroy humanity is apparently not one of them.

In a far-ranging panel discussion about artificial intelligence and the law at the annual Ninth Circuit Judicial Conference, Cuéllar was critical of a book by philosopher Nick Bostrom about the existential threat to humanity posed by self-aware machines. But the justice also laid out what he said were the promises and risks presented by AI, in the realm of law and beyond.

Here’s a rundown of Cuéllar’s fears and hopes when it comes to AI and the law.

Worry No. 1: Humans will rely on AI too readily, even if it’s faulty.

“If you study cybersecurity, what you realize is that 95 percent of the activity is in fooling humans, whether your name is ‘John Podesta’ or not,” said Cuéllar, in a zinger referring to the phishing-based hack of the former White House chief of staff. The justice’s point was that as AI is incorporated into legal practice, judges and lawyers might rely on it too readily without really understanding the system. That could have serious consequences if, for example, AI algorithms are used to gauge a criminal defendant’s risk of recidivism.

Fellow panelist Kate Crawford pointed to studies showing that people often fall prey to what she dubbed “automation bias,” or a trust in the results of automated processes. Those studies have shown that even when subjects know a machine or process is faulty, they will still trust its result, she said.

Worry No. 2: Does “due process” compute?

On a related point, Cuéllar noted that the way an AI algorithm reaches a decision could have important consequences for the law. For example, he noted that regulators at the Environmental Protection Agency set limits on new chemical compounds using AI that helps determine how similar they are to already-regulated substances. But if the regulator doesn’t fully understand how the computer reached that decision, is the resulting rule “arbitrary and capricious” and therefore invalid?

Pointing to the robots in the room, Cuéllar also envisioned a scenario in which a robot stops someone on the street and searches them. How would a judge determine whether that was an unreasonable search and seizure without knowing how the robot made its decision? Yann LeCun, Facebook’s director of AI research and a fellow panelist, noted that Europe has adopted laws requiring companies to disclose how an algorithm reached a decision in certain circumstances, such as when an applicant is denied a loan.

Worry No. 3: Courts’ and legislatures’ hands will be tied before AI is understood.

Drawing on the example of how the civil nuclear industry worked to pre-empt state regulation, Cuéllar expressed fear that technology companies might try to block rules that would limit how AI is used even before courts and statehouses have a chance to wrestle with the issue.

“We have uncertainty about how this is going to play out in different applications and sectors. So notwithstanding some arguments I’m sure will be made to the contrary,” Cuéllar said, “this is an area where the regulatory context has to sort of learn and experiment.”

Worry No. 4: It’s not the machines. It’s the humans.

LeCun was dismissive of fears that researchers are anywhere near the point of developing AI that would threaten humanity, despite warnings from the likes of Elon Musk and Bostrom, author of “Superintelligence: Paths, Dangers, Strategies.” “I disagree with almost everything that’s been said in that book,” the Facebook researcher said.

Cuéllar agreed that “Bostrom is wrong,” but said he still worries about how people will use AI. “The dangers may not come from some property of superintelligence, but the dangers to my mind might come from the desires we have right now,” the justice said. “Armies have a desire to dominate … they will deploy AI, quite likely in lethal form, to gain military advantage.”
