Laws
We humans are social animals who subdued all other species and conquered
Earth thanks to our ability to cooperate. We’ve developed laws to incentivize and
facilitate cooperation, so if AI can improve our legal and governance systems,
then it can enable us to cooperate more successfully than ever before, bringing
out the very best in us. And there’s plenty of opportunity for improvement here,
both in how our laws are applied and how they’re written, so let’s explore both
in turn.
What are the first associations that come to your mind when you think about
the court system in your country? If it’s lengthy delays, high costs and occasional
injustice, then you’re not alone. Wouldn’t it be wonderful if your first thoughts
were instead “efficiency” and “fairness”? Since the legal process can be
abstractly viewed as a computation, inputting information about evidence and
laws and outputting a decision, some scholars dream of fully automating it with
robojudges: AI systems that tirelessly apply the same high legal standards to
every judgment without succumbing to human errors such as bias, fatigue or lack
of the latest knowledge.
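To make this "law as computation" picture concrete, here is a minimal sketch in Python of what such a decision procedure might look like. All of the names, the data structure and the toy statute are invented for illustration, not drawn from any real or proposed system:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Statute:
        name: str
        applies: Callable[[dict], bool]  # predicate over the established facts
        penalty: str

    def robojudge(facts: dict, statutes: list) -> list:
        """Apply every statute to the same facts, identically for every defendant."""
        verdicts = [(s.name, s.penalty) for s in statutes if s.applies(facts)]
        return verdicts or [("acquittal", "no penalty")]

    # A toy statute: the kind of anti-malware law discussed later in this section.
    malware_law = Statute(
        name="malware creation",
        applies=lambda facts: facts.get("wrote_malware", False),
        penalty="fine",
    )

    print(robojudge({"wrote_malware": True}, [malware_law]))   # [('malware creation', 'fine')]
    print(robojudge({"wrote_malware": False}, [malware_law]))  # [('acquittal', 'no penalty')]

Because such a program treats identical facts identically, copying it to run many cases in parallel is trivial, which is precisely the appeal of the robojudge idea.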
Robojudges
Byron De La Beckwith Jr. was convicted in 1994 of assassinating civil rights leader
Medgar Evers in 1963, but two separate all-white Mississippi juries had failed to
convict him the year after the murder, even though the physical evidence was
essentially the same.34 Alas, legal history is rife with judgments biased by skin
color, gender, sexual orientation, religion, nationality and other factors.
Robojudges could in principle ensure that, for the first time in history, everyone
becomes truly equal under the law: they could be programmed to all be identical
and to treat everyone equally, transparently applying the law in a truly unbiased
fashion.
Robojudges could also eliminate human biases that are accidental rather than
intentional. For example, a controversial 2012 study of Israeli judges claimed that
they delivered significantly harsher verdicts when they were hungry: whereas
they denied about 35% of parole cases right after breakfast, they denied over
85% right before lunch.35 Another shortcoming of human judges is that they may
lack sufficient time to explore all details of a case. In contrast, robojudges can
easily be copied, since they consist of little more than software, allowing all
pending cases to be processed in parallel rather than in series, each case getting
its own robojudge for as long as it takes. Finally, although it’s impossible for
human judges to master all technical knowledge required for every possible case,
from thorny patent disputes to murder mysteries hinging on the latest forensic
science, future robojudges may have essentially unlimited memory and learning
capacity.
One day, such robojudges may therefore be both more efficient and fairer, by
virtue of being unbiased, competent and transparent. Their efficiency makes
them fairer still: by speeding up the legal process and making it harder for savvy
lawyers to skew the outcome, they could make it dramatically cheaper to get
justice through the courts. This could greatly increase the chances of a
cash-strapped individual or startup company prevailing against a billionaire or
multinational corporation with an army of lawyers.
On the other hand, what if robojudges have bugs or get hacked? Both have
already afflicted automatic voting machines, and when years behind bars or
millions in the bank are at stake, the incentives for cyberattacks are greater still.
Even if AI can be made robust enough for us to trust that a robojudge is using the
legislated algorithm, will everybody feel that they understand its logical
reasoning enough to respect its judgment? This challenge is exacerbated by the
recent success of neural networks, which often outperform traditional easy-to-understand AI algorithms at the price of inscrutability. If defendants wish to
know why they were convicted, shouldn’t they have the right to a better answer
than “we trained the system on lots of data, and this is what it decided”?
Moreover, recent studies have shown that if you train a deep neural network with massive amounts of prisoner data, it can predict who’s likely to
return to crime (and should therefore be denied parole) better than human
judges. But what if this system finds that recidivism is statistically linked to a
prisoner’s sex or race—would this count as a sexist, racist robojudge that needs
reprogramming? Indeed, a 2016 study argued that recidivism-prediction
software used across the United States was biased against African Americans and
had contributed to unfair sentencing.36 These are important questions that we
all need to ponder and discuss to ensure that AI remains beneficial. We aren’t
facing an all-or-nothing decision regarding robojudges, but rather a decision
about the extent and speed with which we want to deploy AI in our legal system.
Do we want human judges to have AI-based decision support systems, just like
tomorrow’s medical doctors? Do we want to go further and have robojudge
decisions that can be appealed to human judges, or do we want to go all the way
and give machines the final say, even for death penalties?
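One way to make the bias question above precise, in the spirit of the 2016 study, is to compare a recidivism predictor’s error rates across groups: for instance, how often people who never reoffended were nevertheless flagged as high risk. The sketch below is a hedged illustration with made-up data, not the study’s actual methodology:

    def false_positive_rate(records):
        """Among people who did NOT reoffend, the fraction flagged as high risk."""
        innocent = [r for r in records if not r["reoffended"]]
        flagged = [r for r in innocent if r["predicted_high_risk"]]
        return len(flagged) / len(innocent) if innocent else 0.0

    def fpr_by_group(records, key="group"):
        """A large gap between groups suggests the predictor is biased."""
        groups = sorted({r[key] for r in records})
        return {g: false_positive_rate([r for r in records if r[key] == g])
                for g in groups}

    # Toy data only; the 2016 study reported a disparity of roughly this shape.
    toy = [
        {"group": "A", "reoffended": False, "predicted_high_risk": True},
        {"group": "A", "reoffended": False, "predicted_high_risk": True},
        {"group": "A", "reoffended": False, "predicted_high_risk": False},
        {"group": "B", "reoffended": False, "predicted_high_risk": True},
        {"group": "B", "reoffended": False, "predicted_high_risk": False},
        {"group": "B", "reoffended": False, "predicted_high_risk": False},
    ]
    print(fpr_by_group(toy))  # {'A': 0.67, 'B': 0.33} (approximately)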
Legal Controversies
So far, we’ve explored only the application of law; let us now turn to its content.
There’s broad consensus that our laws need to evolve to keep pace with our
technology. For example, the two programmers who created the
aforementioned ILOVEYOU worm and caused billions of dollars in damages were
acquitted of all charges and walked free because at that time, there were no laws
against malware creation in the Philippines. Since the pace of technological
progress appears to be accelerating, laws need to be updated ever more rapidly, yet they tend to lag behind. Getting more tech-savvy people into law
schools and governments is probably a smart move for society. But should the next step be AI-based decision support systems for voters and legislators, followed by outright robolegislators?
How to best alter our laws to reflect AI progress is a fascinatingly controversial
topic. One dispute reflects the tension between privacy and freedom of
information. Freedom fans argue that the less privacy we have, the more
evidence the courts will have, and the fairer the judgments will be. For example,
if the government tapped into everyone’s electronic devices to record where they
are and what they type, click, say and do, many crimes would be readily solved,
and additional ones could be prevented. Privacy advocates counter that they
don’t want an Orwellian surveillance state, and that even if they did, there’s a
risk of it turning into a totalitarian dictatorship of epic proportions. Moreover,
machine-learning techniques have gotten better at analyzing brain data from
fMRI scanners to determine what a person is thinking about and, in particular,
whether they’re telling the truth or lying.37 If AI-assisted brain scanning
technology became commonplace in courtrooms, the currently tedious process
of establishing the facts of a case could be dramatically simplified and expedited,
enabling faster trials and fairer judgments. But privacy advocates might worry
about whether such systems occasionally make mistakes and, more
fundamentally, whether our minds should be off-limits to government snooping.
Governments that don’t support freedom of thought could use such technology
to criminalize the holding of certain beliefs and opinions. Where would you draw
the line between justice and privacy, and between protecting society and
protecting personal freedom? Wherever you draw it, will it gradually but
inexorably move toward reduced privacy to compensate for the fact that
evidence gets easier to fake? For example, once AI becomes able to generate
fully realistic fake videos of you committing crimes, will you vote for a system
where the government tracks everyone’s whereabouts at all times and can
provide you with an ironclad alibi if needed?
Another captivating controversy is whether AI research should be regulated or,
more generally, what incentives policymakers should give AI researchers to
maximize the chances of a beneficial outcome. Some AI researchers have argued
against all forms of regulation of AI development, claiming that they would
needlessly delay urgently needed innovation (for example, lifesaving self-driving
cars) and would drive cutting-edge AI research underground and/or to other
countries with more permissive governments. At the Puerto Rico beneficial-AI
conference mentioned in the first chapter, Elon Musk argued that what we need
right now from governments isn’t oversight but insight: specifically, technically
capable people in government positions who can monitor AI’s progress and steer
it if warranted down the road. He also argued that government regulation can
sometimes nurture rather than stifle progress: for example, if government safety
standards for self-driving cars can help reduce the number of self-driving-car
accidents, then a public backlash is less likely and adoption of the new
technology can be accelerated. The most safety-conscious AI companies might
therefore favor regulation that forces less scrupulous competitors to match their
high safety standards.
Yet another interesting legal controversy involves granting rights to machines.
If self-driving cars cut the 32,000 annual U.S. traffic fatalities in half, perhaps
carmakers won’t get 16,000 thank-you notes, but 16,000 lawsuits. So if a
self-driving car causes an accident, who should be liable—its occupants, its owner
or its manufacturer? Legal scholar David Vladeck has proposed a fourth answer:
the car itself! Specifically, he proposes that self-driving cars be allowed (and
required) to hold car insurance. This way, models with a sterling safety record
will qualify for premiums that are very low, probably lower than what’s available
to human drivers, while poorly designed models from sloppy manufacturers will
only qualify for insurance policies that make them prohibitively expensive to
own.
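A rough back-of-the-envelope version of Vladeck’s pricing logic would charge each model a premium proportional to its observed accident rate. All of the numbers below (claim size, annual mileage, insurer’s loading) are invented for illustration:

    def annual_premium(accidents, miles_observed,
                       miles_per_year=12_000, avg_claim=30_000, loading=1.2):
        """Expected yearly claims cost for one car of this model, plus insurer margin."""
        rate_per_mile = accidents / miles_observed
        return rate_per_mile * miles_per_year * avg_claim * loading

    # A model with a sterling safety record vs. one from a sloppy manufacturer:
    print(annual_premium(10, 100_000_000))  # ~$43 per year
    print(annual_premium(10, 1_000_000))    # ~$4,320 per year

Under such a scheme, the insurance market itself would reward careful engineering with premiums a hundred times lower, exactly the incentive the proposal aims for.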
But if machines such as cars are allowed to hold insurance policies, should they
also be able to own money and property? If so, there’s nothing legally stopping
smart computers from making money on the stock market and using it to buy
online services. Once a computer starts paying humans to work for it, it can
accomplish anything that humans can do. If AI systems eventually get better than
humans at investing (which they already are in some domains), this could lead
to a situation where most of our economy is owned and controlled by machines.
Is this what we want? If it sounds far-off, consider that most of our economy is
already owned by another form of non-human entity: corporations, which are
often more powerful than any one person in them and can to some extent take
on a life of their own.
If you’re OK with granting machines the rights to own property, then how
about granting them the right to vote? If so, should each computer program get
one vote, even though it can trivially make trillions of copies of itself in the cloud
if it’s rich enough, thereby guaranteeing that it will decide all elections? If not,
then on what moral basis are we discriminating against machine minds relative
to human minds? Does it make a difference if machine minds are conscious in
the sense of having a subjective experience like we do? We’ll explore in greater
depth these controversial questions related to computer control of our world in
the next chapter, and questions related to machine consciousness in chapter 8.