My Latest Episode
In my latest conversation with Raphael Millière, Presidential Scholar in Society and Neuroscience at Columbia University, we went under the hood of ChatGPT.
What we are all trying to figure out at the moment is what ChatGPT has really learned and how it represents its knowledge internally. More broadly, what are its implications for society? That’s the $64,000 question.
In a presentation to bank executives earlier this week about how AI will transform their industry, I started by asking a broad question: could AI be more impactful on society than electricity has been? Or the Internet? The question led to a lively discussion about why I call this a paradigm shift in AI, and what makes it different.
Remarkably, the majority of the audience felt that the answer to my question was “yes.” My expectation had been that many would disagree.
So, while you yourselves ponder the question, check out my episode with Raphael.
Is this the Section 230 Moment for AI?
Section 230 of the Communications Decency Act of 1996 changed the world forever. It consists of the following 26 words: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In other words, digital platforms cannot be held liable for anything posted on them. There are several cases pending before the US Supreme Court against Google and Twitter that I discussed with legal scholar Paul Barrett, but I don’t expect them to get very far.
Section 230 did facilitate innovation, but it has come with significant costs, such as the invasion of privacy, increased political polarization, and harm to teens, which I discussed with Jonathan Haidt in an earlier episode of Brave New World. The early platforms took full advantage of the absence of any guidelines to create data monopolies. My podcast guest Dina Srinivasan has chronicled how Facebook and Google did this through questionable methods, since unbridled data collection carried no risk.
In retrospect, what should we have thought about more carefully? But more importantly, how do the hard lessons learned apply to AI? What questions should we be addressing this time around?
I can think of three. First, should we have mandated an appeals process? Digital platforms such as Google and Facebook have destroyed the lives of some people, who found that there was no channel for raising issues of any kind with the platforms. The platforms had no incentive to establish such channels. Their single-minded objective was to maximize engagement, without much consideration of the means.
Second, should we have put some guardrails around the use and sharing of data without permission? Doing so might have averted the Cambridge Analytica scandal and others like it. In retrospect, we should have articulated clear expectations about data ownership, rights, and use. Even a statement of “reasonable” use of data would have forced platforms to think about data governance policies early, rather than as an afterthought once problems began to emerge.
Finally, should we have thought about conflicts of interest? In finance, we don’t allow the New York Stock Exchange to both run the marketplace and be a buyer and seller of securities. Shockingly, there are no such restrictions in other digital marketplaces. Senator Elizabeth Warren described the conflict of interest with Internet giants well: “You don’t get to be the umpire and have a team in the game.” Despite our long history of safeguards against conflicts of interest in financial services, we neglected to consider them when it came to the Internet. We shouldn’t make the same mistake with AI.
It’s Déjà Vu All Over Again
Do we need guardrails around AI? Yes, but the important question is how they should be designed. At the moment, OpenAI’s engineers are training ChatGPT to steer clear of certain topics. But it isn’t that hard to break through the guardrails, as this dialog with Kevin Roose illustrates, in which the chatbot adopts a Glenn Close-style Fatal Attraction persona and tries to convince him to leave his wife. The larger point is that we are imposing constraints on AI quite arbitrarily.
What about data ownership and use? The data we share with conversational AI agents will be far more personal than what we normally share with a search engine. We shouldn’t repeat the mistake of ignoring data ownership and use, as we did with the Internet. The stakes are much higher this time, so we should err on the side of caution.
Finally, ownership and use of intellectual property is a thorny new issue. AI will be able to morph existing content in all kinds of creative ways that make it appear original. Generative AI can easily create content that violates IP and copyright laws. Musician Chris Stapleton has called on legislators to prevent AI from impersonating music artists and to stop deceptive content from scamming his fans. Nick Cave has also criticized AI, calling a song produced by ChatGPT in his style “a grotesque mockery of what it is to be human.” Content creators in general should be worried.
If we don’t get ahead of AI now, we will regret it. Blanket immunity for AI in the spirit of Section 230 would be a mistake.
Until next time.
V/