Happy New Year. I hope it’s a good one for you.
My Recent Podcast: Is the world becoming more volatile?
Has the world become more “volatile” in recent years? Is it less safe and predictable than it used to be? I assumed that the brutal periods of world wars were behind us, and that the post-war world was a more stable one, with fewer risks.
It’s an important question because expectations of risk – often indicated by volatility of things like prices or other phenomena such as climate – drive everything from the valuation of financial markets to the prices we pay for everything.
So, I invited an expert who is immersed in this space to talk about risk, historically and in the modern world. Neeti Bhalla Johnson is the President of Global Risk Solutions at Liberty Mutual Insurance, where she leads the company’s global commercial and specialty insurance business unit. She walked me through, with some concrete numbers, how the world of risk has changed, and the emerging environmental, geopolitical, and cyber risks whose frequency and severity are on the rise. The globally integrated nature of these risks makes them challenging to insure against. What about geopolitical volatility? Again, it’s there and always simmering, especially around dictators and theocracies. But it’s present in democracies as well. Think about Brexit.
So, tune into my conversation with Neeti to learn more about the multi-faceted dimensions of today’s risk landscape.
Has AI Arrived?
In my last newsletter, I asked whether AI has finally arrived. I was referring, of course, to ChatGPT3, which seems to have taken the world by storm. I asked whether such a machine displays the beginnings of “understanding,” or whether it merely gives the appearance of understanding, and why and when this “appearance of real” versus “really real” matters.
I ended the newsletter with the following joke, which I hadn’t shared with ChatGPT3 at the time:
One day, in the depths of Siberia, Rudolph, who happened to be a committed member of the Communist Party, was arguing with his wife:
Him: It's raining.
His wife: It's not raining.
Him: It's raining.
His wife: It's not raining.
Him: Rudolph the Red knows rain, dear.
After ChatGPT3 heard the joke, it said “ha ha that’s a great joke,” so I asked it what it found funny. Its response was interesting, but clearly, it didn’t get the joke:
I found the joke funny because of the unexpected twist at the end, when Rudolph says "Rudolph the Red knows rain, dear." It's a play on the familiar phrase "Rudolph the Red knows rain, dear" and adds a humorous element to the joke. The joke also relies on the comedic technique of misdirection, where the punchline takes the listener in a different direction than they were expecting.
A bust. Its next response was an infinite loop, but its third try was interesting:
I found the play on words and the reference to Rudolph the Red-Nosed Reindeer amusing. The joke also has a twist at the end, which makes it even more humorous. Overall, it's just a light-hearted and enjoyable joke.
Not bad, huh? Sure, it’s not quite there yet, but the AI will only get better. I expect that a sufficient improvement in understanding will transform the human-machine interface industrially and at the consumer level. Industrially, every machine and sensor will interface with humans much more naturally, which will result in massive productivity gains.
The use case for AI as a natural interface is even more compelling at the consumer level. One of our limits as humans is attention. For example, I can’t summarize all my meetings or attend to most inquiries from prospective students because I don’t have the time to process them all personally. Regrettably, many are neglected. I would gladly hand off the task of interacting with these inquiries to a reliable agent that alerts me only when necessary. It is an example of a low risk, high value application that expands my attention. I can imagine many such attention-expanding applications of AI.
What’s a Good Question?
A few months ago, I wrote that AI was creating a potential testing problem for teachers, since it is already capable of answering university-level questions reasonably well. Recently, this subject seems to have gotten a lot of attention: if ChatGPT3 can generate answers, how do you avoid plagiarism?
I don’t think that’s the right question.
The more relevant question is “what’s a good question in the age of ChatGPT3?”
In other words, if ChatGPT3 can answer a question, using its answer isn’t plagiarism but a reflection of public knowledge. The question itself is probably the wrong one.
During the early days of the internet, when “Google is God” was a familiar refrain, my colleague Adam Brandenberger often said that as long as the answer to something was available on Google, it wasn’t appropriate lecture material or worth class time.
The same thinking applies to AI. If it can answer a question, it probably isn’t a good exam question in the first place, which also means that the bar for what’s a good question will keep rising as AI progresses. For example, “What is AUC?” may have been a good basic question for a Machine Learning course a decade ago, but not anymore. It’s hard to beat ChatGPT3’s answer. Try it.
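For readers who haven’t asked ChatGPT3 that question: AUC is the area under the ROC curve, which equals the probability that a classifier ranks a randomly chosen positive example above a randomly chosen negative one. A minimal sketch of that rank interpretation, in pure Python (the function name and example scores are my own, for illustration):

```python
def auc(labels, scores):
    """AUC via the rank interpretation: the fraction of
    (positive, negative) pairs in which the positive example
    receives the higher score, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 3 of the 4 positive/negative pairs are ranked correctly:
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

A perfect ranking yields 1.0 and random scoring hovers around 0.5, which is why AUC is a standard exam-style definition, and exactly the kind of public knowledge ChatGPT3 now handles well.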
So, what is a good question? It is one that the machine can’t answer at the moment. That’s the necessary condition for goodness, in addition to anything else we might desire, such as simplicity and creativity.
Things get a little murkier in the social sciences, where context and room for interpretation play a larger role. For example:
Me: “Is the Persian Letters a feminist text?”
CHATGPT3: It's not clear what you are referring to as "Persian Letters." Could you provide more context or clarify your question?
Me: the book by Montescu. (I misspelled the author’s name deliberately)
CHATGPT3: "Persian Letters" is a novel by Montesquieu, published in 1721. It is a work of fiction that tells the story of two Persian noblemen, Usbek and Rica, who travel to France and observe the customs and culture of the French people. The book is written in the form of a series of letters that Usbek and Rica send back to their friends in Persia. It is not a feminist text in the sense that it does not specifically address issues related to women's rights or gender equality. However, it does touch on themes of cultural differences and the nature of society, and it has been interpreted by some as a commentary on the political and social climate of France during the time in which it was written.
Although ChatGPT3’s response is pretty mind-boggling, I’d probably expect a more nuanced answer from a human. It might begin by telling us what feminism meant in the early 18th century, how its meaning has changed over time, and so on. But note that it is possible to have such a dialog with the machine as well! In fact, the first thing it asked me for was more context to go on, and I could have continued the conversation after its initial answer, in which case it might have effectively helped me answer the question in the way a human would.
The larger point is that as AI progresses, humans will need to keep upping their game. That’s where the future of work is headed for humans: “robot proof” work, as my guest Joseph Aoun, president of Northeastern University, describes in his book “Robot Proof.” What it means to be “educated” has been ratcheting up over the centuries, but AI is upping the ante significantly.
In the meantime, explore your humanity with ChatGPT3.
Good "Vasant" humor that I knew 40 years ago :)