Everything You Always Wanted to Know About Rogue AI...But Were Afraid to Ask
The AI Genie Is Out of the Bottle
My Recent Podcast
My most recent episode was with Ajay Shah, co-author of "In Service of the Republic: The Art and Science of Economic Policy," featured in Bloomberg's global 2020 list of the Best Books on Business and Leadership. Ajay is a unique blend of economist, public policy and regulation expert, and hard-core tech nerd. He is skeptical about state control of public infrastructure of any kind, and in particular of "digital public infrastructure," which is about delivering services to people over digital networks.
The conversation is timely since governments around the world differ significantly in how they control and regulate the Internet, and there's no consensus on what's best for society. In the US, regulators have stayed out, letting market forces and Big Tech shape the Internet. In China, it's the opposite: the state has complete control and unrestricted use of the data. India is charting a unique digital path, which began with its biometric authentication system, Aadhaar, in 2009. The project was headed by tech visionary Nandan Nilekani, who was an early guest on Brave New World. Nandan's vision was empowerment via financial inclusion. Aadhaar eliminated a lot of the waste and corruption associated with payments and subsidies, lowered the cost of entry into the banking system for poor people, and gave them access to credit. However, Aadhaar has since been used in unforeseen ways, with many parties requiring it for access to routine services. This kind of scope creep goes well beyond financial inclusion.
To Ajay, this smells of coercion. He has no intention of seeking government subsidies, and sees only downside in being fingerprinted. His broader point is that without a sensible public policy framework, we are handing over excessive power to the state without recognizing its consequences.
So, check out my episode with Ajay.
Rogue AI
I’m super-excited about the emerging consumer and societal benefits of AI, but it’s important that AI not go rogue.
I can think of four types of rogue AI, which we are already seeing.
The first is that the state goes rogue with AI. The second scenario is that companies go rogue. The third is people going rogue. And finally, the AI itself could go rogue. Let’s dig a little deeper into each of them.
The state can go rogue in a number of ways. It can go Orwellian by exercising control over information and media, as the Chinese state does and, to a large extent, the Russian state, and by using data for surveillance and manipulation. It can also go rogue by manipulating social media in other countries. A recent article documents how the Chinese state has backed massive disinformation campaigns targeted at Americans, through intimidation and by “drowning out” genuine information, burying it in “spamouflage.”
But harm can also occur in more subtle ways, such as when government controls become compromised. There’s a riveting episode of Black Mirror in which a government program of digital bees created for pollination is hacked by a malicious individual who uses the superintelligent bee swarm to kill the most hated person on social media every day. There is no way out, as the algorithmic bees execute the individual with the highest daily #DeathTo vote. Things really heat up when the Prime Minister rises to the top of the list and wants to shut down the Internet. I won’t spoil the ending.
The lesson here is that digital public infrastructure is great, as long as it is well shackled and can’t go rogue on us.
The second scenario is one where companies go rogue with AI. We’ve seen some of this already with Big Tech. My podcast guest Dina Srinivasan described in great detail how Google and Facebook moved quickly to amass data and create powerful data-based monopolies before anyone had any idea what was happening. My colleague Jonathan Haidt has presented compelling evidence that Facebook’s algorithms caused widespread teen depression, perhaps as an unintended side-effect of maximizing “engagement.” (Haidt maintains an archive of the research literature on the effects of social media on teen mental health.) But we shouldn’t point the finger solely at social media. As Chris Bail has shown, we’ve used social media and other digital platforms to explore and project our dark sides, so “the fault is not in our stars, but in ourselves.”
The lesson here is: beware. There are no obvious policy options, other than things like “know your customer” requirements for social media, and more algorithmic transparency and accountability.
The third scenario is that people go rogue with the increasing power of generative AI. Examples include deepfake revenge porn and impersonation. A recent documentary called Another Body traces the story of a young woman who woke up one morning to find herself in a fake porn video on Pornhub. The police didn’t know how to deal with it and couldn’t even decide whether a crime had been committed, and it was virtually impossible to trace the creator of the fake video. I can imagine all kinds of deepfakes emerging, which will also make traditional methods of authentication obsolete.
The lesson here is that we need liability laws around the malicious use of AI, and platforms need to be on the hook for their content before the AI spirals out of control. We also need some IP protection laws. What’s to stop an AI from creating art that is identical to yours, for example, but with a twist of Picasso thrown in? What’s to stop a business from creating an avatar of a highly valued employee and employing it forever without the employee’s permission? I had a fascinating discussion about this possibility with Piyush Gupta, CEO of DBS Bank. It’s a whole new world when it comes to IP protection in the era of AI.
Finally, the AI itself could go rogue.
Well thought-out policy can address the first three, but there is no way at the moment to ensure that AI itself won’t go rogue without our realizing it. I’ve had great conversations about this “alignment problem,” in which our intentions may be at odds with how the AI interprets them, with Stuart Russell and Brian Christian on earlier episodes of Brave New World.
Unfortunately, there’s no guarantee that we can get ahead of this problem. That genie is out of the bottle. It’s an uncharted brave new world of AI.