I grew up on science fiction from the likes of Asimov, Huxley, Arthur C. Clarke, Douglas Adams, Orwell, Kubrick, and Kurt Vonnegut. I found their worlds fascinating, often dystopian, but not worth worrying about in my lifetime.
COVID was a major discontinuity, and we appear to be in the early innings of major societal change. Interestingly, the six-foot distancing norm is eerily similar to the virtual society in Asimov’s The Naked Sun, written in 1957, where people rarely meet face to face, and when they do, maintain a distance of, yes, six feet! Huxley’s genetically engineered society is already here. Kubrick’s HAL is around the corner. In other words, the science fiction of my childhood is rapidly becoming reality.
Despite the wonderful advances – the Internet, Artificial Intelligence, and Crypto – some things feel bizarre about the world that’s unfolding, don’t they? What are the right questions that can help us identify dystopian outcomes and avoid them? What does it mean for us and our kids?
I started my podcast, Brave New World, to explore just these subjects. And I now want to do so in this newsletter as well. So far, you have been receiving updates on my show here -- but starting today, I will use this to go deeper.
Let me first share with you some highlights from the episodes I have done.
Our Social Media Conundrum
In many ways, algorithms already control us, from determining our partners to how we feel. One realization from my conversation with Sinan Aral was about how social networks are created and used. Did you realize, for example, that the “social network” we see today (our “friends,” and so on) was created by Facebook’s algorithms? That means Facebook could have created a completely different network via a slightly different algorithm.
Did you realize that another set of algorithms determines what flows along the links in the network? Again, you have to ask yourself: to what extent did such algorithms contribute to the January 6, 2021 mob storming the Capitol? Would a different set of algorithms have resulted in a civic dialog that brought us together, as per Facebook’s original mission?
To what extent are the technologies designed by big tech crafted to create addiction? Adam Alter and Anna Lembke warn us that addiction is on the rise, and this problem will get worse unless we gain some agency, starting with being more mindful about our relationship with devices that become increasingly irresistible.
My conversation with Jonathan Haidt points to the specific damaging effects of social media algorithms on society. Are they causing teen depression? Are they driving the increased political polarization that threatens democracy, as evidenced by the January 6, 2021 storming of the Capitol? How do we get the data to answer such questions unequivocally?
Once you start asking these kinds of questions, you realize how major societal movements are being shaped by faceless engineers and nerds, and how dangerous this can be. More generally, I’m realizing that there is a fundamental alignment problem between what humans really want and what machines are programmed to do, which can lead to Terminator-like scenarios.
What became clear to me during my conversations, especially with Stuart Russell and Brian Christian, is that machines can go badly wrong when they blindly optimize a single objective function. So Mark Zuckerberg doesn’t need to be an evil operator; his machines can become one without anyone intending it.
In other words, machines are already in control of our social and political lives, not humans! Think about that. Next time you go on a date or cast your vote, ask yourself what role an algorithm played in your decision, and how much agency you have in reality.
Web 2.0: After the Wild West
This brings me to Web 2.0, the current Internet. Web 1.0 was the wild west, before we had Google, Facebook, or Twitter.
Web 2.0 is dominated by big tech, and came about thanks to Section 230, which gave platforms immunity from liability for content posted by their users. The big question now is “how should we amend 230 and make platforms more accountable?” A larger question is “how do we want to govern the Internet?” China, India, Europe, and the US have very different governance models. Can we somehow combine the best of each? This is something I explored in detail with Nandan Nilekani, the tech visionary who created the largest biometric authentication system in the world.
Is the Internet being weaponized by some countries? Arun Sundararajan thinks so, as do Peter Berkowitz and Sam Moyn. Warfare has become largely cyber. Should we be worried? Or is this an improvement relative to traditional warfare?
Dina Srinivasan lays out the history of Web 2.0 and why big tech dominates advertising and social media. She provides answers to the Section 230 conundrum, so check out her pod!
Web 3.0: Crypto and the Decentralized Internet
The wildcard at the moment – the subject with David Yermack and Albert Wenger -- is Crypto and the next generation of the Internet (called Web 3.0). A fascinating question is whether a decentralized Internet and digital currency will lead to centralized control, a la the China model, or the complete opposite, where people communicate peer to peer, anonymously, via a trusted consensus mechanism. In the latter world, governments have zero visibility into the transactions and activity of their citizens, which raises fundamental issues about citizenship and taxation.
Which path will humanity take? Orwellian control, or some sort of Libertarian anarchy? These are fascinating questions that I will continue to explore going forward.
Humans vs Machines
A theme that is close to my interests is how humans and machines will co-exist when it comes to making decisions. Have you noticed how hard it is becoming to reach a human these days? Are machines taking over most decision-making? Will computers make most of our healthcare decisions for us? What role will they play in the justice system? I explored these questions with Daniel Kahneman, Phil Tetlock, Yann LeCun, Solon Barocas, Terry Odean, Peter Railton, Eric Topol, Regina Barzilay, and Erik Brynjolfsson.
According to Kahneman, the combination of “human plus machine is fundamentally unstable!” What does that mean? Is it unstable because humans are inconsistent or “noisy?” If so, do we prefer this kind of noise in our systems? Or do we trust machine-like consistency more than human inconsistency?
In other words, are we simplistically assuming that humans plus machines is the obvious way forward, when in fact the relationship is fraught, and perhaps inherently unstable? So, will we be better off having machines make parole decisions? Medical decisions?
Talking about humans and machines: as I write this note, reflecting on my journal over the last year, I’ve been informed that there is a probability I have a certain type of cancer. Human experts have slightly different takes on my condition and options, which can be confusing. And some human experts are a lot better than others, as Tetlock has shown us, which makes things harder!
Who should I trust? Am I better off trusting a machine that’s analyzed thousands of cases that resemble mine, but may not consider the nuances of my situation? Are the nuances that a human would consider irrelevant? How can I combine what the machine and humans are telling me? I’m struggling with this decision!
And when do ethical or moral considerations enter the picture at all? Listen to Peter Railton and Molly Crockett discuss our “intuition about morality”: to what extent such intuitive moral knowledge is acquired, how it comes into play in human action, and how it might be grafted onto machines. Peter has studied the famous Trolley Problem in great detail, so if you have an interest in understanding whether it is better to sacrifice one old man to save five kids, listen to what he has to say. It’s not that simple!
Brave New World But Same Old State
My conversations with Paul Sheard and James Robinson were also a lot of fun. I finally understand “Quantitative Easing,” or QE. If you’re like most people, you probably don’t understand it, so tune into that session. As Paul explains, QE is a new weapon that central bankers conjured for stimulating the economy when they’ve run out of their traditional ammunition, namely lowering short-term rates, which are near zero or even negative in most industrialized economies. With QE, the central bank instead buys longer-term assets, expanding its balance sheet and pushing down long-term rates.
James Robinson’s theory is that stable democracies arise from a balance of power between society and the state. He explains, for example, why India is hampered by its illiberal caste structure and rigid norms that limit social and professional mobility. Interestingly, though, in my view James ignores big tech as a major new actor that has changed the balance of power in the last decade. In the current state of affairs, my position is that the state should be an ally of the people in controlling the power of big tech.
Last but not least, what is the role of education, and in particular universities, in society? Scott Galloway asserts that American universities have lost sight of their mission and turned into elitist hedge funds. They’re turning away the “unremarkables,” meaning most of us, and perpetuating the deep existing inequalities in society.
John McWhorter is searing in his criticism of universities that have bought into Wokeism, and abandoned rationality in the process. University presidents John Sexton and Michael Roth were more optimistic. Michael noted that educational institutions should be “designed for serendipity,” that we should create environments where the chances of serendipitous encounters go up.
I can identify with that. As a 23-year-old, I had such an encounter in Pittsburgh with Nobel Laureate Herbert Simon and AI trailblazer Harry Pople, and it changed the direction of my life completely: I chose to pursue knowledge and not worry about money, instead of the other way around. It’s a debt I cannot repay, but I can pass it along through Brave New World.
I invite you to submit questions you’d like to ask.
I hope you’ll join me on the journey toward answers.