And Why Resistance to AI is Futile
It’s my birthday today! I lied to Facebook and told it July 20, 1905. I chose July 20 because of the first lunar landing (July 20, 1969), and 1905 because that’s the year Einstein changed everything.
Like last year, I’m going to be European and take August off! So, I will be in touch again in the fall. Until then, have a great rest of the summer.
My Recent Podcast
My most recent guest on Brave New World was Jameel Jaffer, who is the executive director of the Knight First Amendment Institute at Columbia University. Jameel’s work covers free speech, privacy, technology, national security, and international human rights.
Free speech takes on new meaning in the era of social media platforms, whose algorithms provide great potential for amplification and make all kinds of decisions for us, such as how to order the content they show us. Can such algorithms cause widespread harms that can potentially undermine the democratic process? If so, what criteria should they use instead? These are pressing questions for society to resolve.
Indeed, a number of cases against the major social platforms are pending before the US Supreme Court. The accusations against them include hosting biased content and aiding terrorism, for example in connection with the November 2015 Paris attacks. The outcomes of these cases will have a major impact on the future of social media platforms.
More generally, it is becoming apparent that our existing laws are inadequate in the age of social media and AI. A documentary filmmaker recently chronicled the story of a young woman who woke up one morning to find that she had been made the star of an AI-generated deepfake pornographic video. She went to the police, who didn’t know where to begin with such a case. Who committed the crime? How do you find the criminal? Indeed, finding a criminal in the digital world can be much harder than in the physical world, and cyber criminals tend to be far more sophisticated than lawmakers or law enforcement.
This is a pressing issue in an era when machines are becoming indistinguishable from humans. What happens when an AI starts doing things its designers never imagined, like creating a fake pornographic movie? You can’t punish a machine! Can you “turn off” an AI? When is it unethical to do so? These questions would have been science fiction a few years ago, but they are very real now.
We need to revisit our laws in the age of AI. I’m speaking about this topic at an event on AI and Law at the NYU Center for Civil Justice on September 13, so if you have interest in the subject, please come.
In the meantime, check out what Jameel has to say about how to think about free speech and the regulation of social media platforms.
An editor of a business magazine recently remarked that his readers are suffering from “AI fatigue”!
I’m not surprised. I’ve seen several hype cycles during my career, but this one is huge. Perhaps AI fatigue is a good thing for a change, and a good time to get our heads around where we should let AI make decisions for us and where it still poses major risks.
When is AI Fair and Accurate?
After last year’s US Open, I wrote that there was “no escape from Alcatraz,” my nickname for Carlos Alcaraz. This time around I predicted an Alcaraz win again, as did the AI. I’d love to know why the AI gave him the edge, since all the pros predicted a Djokovic win.
Wimbledon is wonderful, but it is time for it to cede refereeing to AI. Humans make way too many costly errors, which are unfair to athletes and the game.
At Wimbledon, the human always makes the decision. A player can appeal it, but the appeal isn’t costless: each player has a limited number of challenges, and if the computer concurs with the human, the player forfeits one of them.
Such a system is biased towards accepting costly error. A recent example played out in the first set of a Wimbledon 2023 match between Novak Djokovic and Hubert Hurkacz, with Hurkacz serving at 40-30 to hold serve. The umpire erroneously called a Hurkacz volley out, making it deuce. After watching the replay, the commentators remarked that Hurkacz should have challenged the call. I was surprised how easily they accepted the error as part of Wimbledon.
It shouldn’t be. The bad call took a visible toll on the relatively inexperienced underdog. Hurkacz fought back hard to win the game, but went on to lose the match. It might have been a completely different outcome without human error.
The only reason we accept such a flawed process is tradition. Some argue that the human judge adds to the game. But at what cost? Unfairness.
It is ironic that we often lament that AI is unfair and biased when it learns from biased data, and yet, when the machine is accurate and fair in its judgment, we still accept biased, inferior-quality human decisions. Wimbledon should follow the example of the US Open, which adopted Hawk-Eye as the referee during the pandemic, when the number of people on court had to be reduced. Hawk-Eye makes all the line calls but still uses a human voice to announce them. A raw video feed is recorded as a backup, in case its calls need to be verified.
Other sports such as basketball are trickier for machines, given the large variety of fouls involved. But it is inevitable that they, too, will cede refereeing to AI. Indeed, sports are probably a harbinger of things to come as AI creeps into more areas of our lives. The same questions about accuracy and fairness can be asked of any area of decision-making, such as medicine, law, and business. If the stakes are high, why would we consult a source with inferior accuracy at all, let alone accept it as the primary decision maker?
In general, there are two main reasons to trust humans over AI. The first is that the case under consideration lies outside the training set of the machine. This is called an “edge case,” and is usually hard to recognize. It might occur, for example, in the form of an unusual symptom in medicine or an unprecedented legal case, not unlike those that end up in the Supreme Court.
A second reason to trust humans is when explanation and nuance are essential to the decision. Machines might have excellent predictive ability, but their transparency and reasoning are still inadequate for human comfort. Thus, even though AI systems like ChatGPT may have seen orders of magnitude more cases in training than a human expert will see in a lifetime, their ability to explain themselves and introspect is still limited relative to humans.
Neither of these reasons applies to sports, especially tennis. There are no edge cases, and it isn’t important to explain why the machine made a call.
Where decisions must be accompanied by narratives and explanation, humans have the edge at the moment. But what happens when the AI makes better decisions, and its behavior and reasoning become indistinguishable from those of humans? Other than in edge cases, I can’t see us continuing to accept higher rates of human error as fair just because that’s how it has always been.
Resistance to better decision-making is futile and unfair.