My Recent Podcast
AI is smokin’ hot these days. It’s hard not to be impressed by ChatGPT, arguably the most visible AI killer app to date. What’s particularly fascinating is how something as simple as a pre-trained large language model (LLM), which is optimized to predict the next thing in a sequence, can be the basis for the range of applications we are seeing. Given a sequence like “thousand island,” it might progressively extend it to something like “is a salad dressing made from mayo, ketchup, paprika, lemon juice and salt.” Depending on the context, it would continue differently. This capability transfers over to all kinds of things: providing advice, writing documents or code, even helping diagnose illnesses.
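To make that concrete, here is a minimal sketch of greedy next-token prediction. It uses the open-source Hugging Face transformers library with the small GPT-2 model as a stand-in; ChatGPT’s models are vastly larger, but the underlying mechanism is the same.

```python
# A minimal sketch of next-token prediction with a small pre-trained LLM.
# GPT-2 is used purely as a stand-in for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "thousand island"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: repeatedly pick the most likely next token
# and append it to the growing sequence.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Everything built on top of such a model, from drafting documents to answering questions, rests on this one primitive, steered by context.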
But how could ChatGPT possibly understand anything if all it is trained to do is predict the next thing in a sequence? Is this modern AI for real, or is it a bunch of hot air? Skeptics abound.
My most recent podcast features one such skeptic. Gary Smith, Professor of Economics at Pomona College, writes extensively about AI’s inflated claims, its false promises, misplaced expectations, and the lack of “real intelligence” behind the current paradigm. (Check out my summary of the paradigm shifts in AI over the last sixty years.)
Gary has written several books, the two most recent being The AI Delusion and Distrust: Big Data, Data Torturing, and the Assault on Science. The titles speak for themselves.
The Democratization of Law
I spoke at a recent conference at the NYU Law School on the “Opportunities and Risks of AI” with my Computer Science colleagues Yann LeCun and Anasse Bari, and Yale Law School Lecturer Andrew Miller.
I used my “Trust Framework” as a way to think about AI in law. Specifically, what kinds of applications is AI suitable for, and how should we think about the legal framework we need in the era of AI?
The Trust Framework positions decision-making on two dimensions of risk: predictability and cost of error. Predictability is an estimate of how often an algorithm will be wrong, and the cost of error measures the consequences of its mistakes. Using this lens, we should expect “high predictability and low error-cost” decision-making to be ideal for algorithms. If the machine is never wrong, you’d trust it. Even if it’s wrong occasionally, that’s still okay as long as it doesn’t result in unacceptable harm. It depends on the consequences.
There is a tradeoff between predictability and cost of error that defines an “automation frontier,” shown graphically in the picture, which I’ve populated in the context of law.
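To make the framework concrete, here is a toy sketch of how a decision might be placed relative to the frontier. The thresholds and zone labels are my own illustrative assumptions, not calibrated values from the framework.

```python
# A toy rendering of the Trust Framework. Thresholds and labels
# are illustrative assumptions, not calibrated values.

def trust_zone(predictability: float, error_cost: float) -> str:
    """Place a decision on the two risk dimensions.

    predictability: estimated probability the algorithm is right (0..1)
    error_cost:     consequence of a mistake, normalized to 0..1
    """
    if predictability >= 0.95 and error_cost <= 0.2:
        return "green: automate"            # reliable and low-stakes
    if predictability < 0.5 and error_cost >= 0.8:
        return "red: humans only"           # novel, high-stakes cases
    return "frontier: AI output, human check"  # everything in between

print(trust_zone(0.99, 0.1))  # e.g., routine permits or traffic matters
print(trust_zone(0.40, 0.9))  # e.g., cases with no obvious precedent
```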
The dark green zone corresponds to infrequent, low-cost errors, and represents a zone of applications where we should expect increasingly automated decision-making. This would cover areas such as traffic, taxation, permits, and all kinds of legal services where the criteria for decision-making are clear and the data are reliable. In a sense, this zone represents the democratization of law. Whereas previously, advice and access were limited to those with means and social capital, legal advice will now become widely (and freely?) available, a consequence of society’s tech dividend. Indeed, pre-trained models are based on public data, so it is only fitting that advisory services based on them be freely available to society.
In addition to the democratization of law at the societal level, the productivity implications of AI in law are equally significant: AI can already create, analyze and critique legal contracts, and is capable, in theory, of finding and analyzing precedents and doing much of the time-consuming and costly intellectual work that can require human judgment. In such a world, the role of humans is elevated to tasks involving verification or modification of the outputs of AI. We don’t accept its outputs blindly. Such decisions lie on the automation frontier. Better data or a lower cost of error (perhaps through regulation) would nudge them into the green zone.
The red zone on the upper left corresponds to things without obvious precedent or rules, like cases that end up in the Supreme Court. AI has no significant role to play in this arena at the moment.
The zone on the top right presents the most interesting challenges and opportunities for AI and human-based systems. These are cases with clearly defined rules but high costs of error, like sentencing decisions, where there is considerable variance in human judgment. In my conversation with Daniel Kahneman, he shared an interesting and disturbing data point on the implementation of justice by human judges:
for crimes where the average sentence is 7 years in prison, the average difference between two randomly selected judges is 4 years, and over half the time, the difference is more than 4 years. (Brave New World, episode 21)
Depending on the mood of the judge, which can be swayed by things like the weather or the outcome of a football match, someone could get very lucky or unlucky.
Does that seem fair to you?
While some degree of variation in judgment is to be expected, Kahneman observes that much of the variance is undesirable – based on bias or other irrelevant factors. This variance is “noise,” and it’s harder to detect than bias. But both are measurable and correctable. For example, patterns of sentences by a judge that are overturned on appeal might indicate bias. Similarly, high variability within and across judges for the same crime could indicate noise.
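As a sketch of how such measurement might work, consider a toy dataset of sentencing decisions. The data, column names, and metrics here are hypothetical, meant only to show that noise and bias leave detectable statistical fingerprints.

```python
# Toy sketch: quantifying noise and a possible bias signal in sentencing data.
# All data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "judge":      ["A", "A", "B", "B", "C", "C"],
    "crime":      ["burglary"] * 6,
    "sentence":   [3.0, 8.0, 5.0, 5.5, 2.0, 9.0],  # years
    "overturned": [0, 1, 0, 0, 0, 1],              # reversed on appeal?
})

# Noise: spread of sentences for the same crime, within and across judges.
within = df.groupby(["crime", "judge"])["sentence"].std().mean()
across = df.groupby("crime")["sentence"].std().mean()

# A possible bias signal: a judge overturned on appeal far more than peers.
overturn_rate = df.groupby("judge")["overturned"].mean()

print(f"within-judge spread: {within:.2f} yrs; across-judge spread: {across:.2f} yrs")
print(overturn_rate)
```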
The good news is that once the data are curated, trustable models can then be constructed based on the clean data. The first order of business is to assemble the data that already exist and use them to establish benchmarks against which models can be evaluated.
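What evaluation against such a benchmark could look like, as a hedged sketch in which the benchmark values, model outputs, and error metric are all assumed for illustration:

```python
# Sketch: scoring a model against a curated sentencing benchmark.
# Benchmark values and model outputs are stand-ins for illustration.
benchmark = {"burglary": 4.5, "fraud": 3.0}   # curated consensus sentences (years)
model_out = {"burglary": 5.0, "fraud": 2.5}   # hypothetical model predictions

mae = sum(abs(model_out[c] - benchmark[c]) for c in benchmark) / len(benchmark)
print(f"mean absolute error vs. benchmark: {mae:.2f} years")
```

A model whose error against such a benchmark is small, and whose variance is low, would arguably be less “noisy” than the human judges it is meant to assist.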
AI as The Creator
In the meantime, what is worrisome is how easily and cheaply AI can now “create” and distribute things at scale. Create what kinds of things? The answer is anything that is expressible as information. Things like deep fakes, deep porn, and legal contracts are all creations: information that is rendered in some form. At the moment, there is no reliable way to “watermark” content to indicate who created it, let alone to trace the creator.
During my presentation, I brought attention to a film called Another Body, which documents the story of a young woman who found herself on Pornhub in a deep fake movie. The police said that no crime had been committed and didn’t know how to proceed. This is not an isolated incident: thousands of women are facing this ordeal, and it’s all legal. Our current laws and norms are unable to deal with such situations easily or consistently.
Expectations are that we will see an increase in fakes, deception, and AI-generated content aimed at manipulation, such as in the upcoming election. We need to start thinking seriously about the limitations of our current laws in handling the new world of possibilities created by AI. I tuned into the Senate Committee on Intelligence hearing on September 19, 2023, which featured my colleague Yann LeCun. It’s clear that we have no answers at the moment to the pressing concerns about AI.
For Seinfeld fans and lovers of ’60s comics, what a “bizarro world” of AI.
Stay Around
In the meantime, check out this most wonderful blues song by one of the greats, J.J. Cale, who had a big influence on many legends, including Eric Clapton and Mark Knopfler:
Savor it and ‘bon weekend’!