Je Suis de Retour!
Hello all, I’m back. In my family’s multilingual mashup, I’d say “Je suis back!”
I hope you’ve had a wonderful summer. I certainly have. But fall is suddenly upon us, and it’s time for a new season of my Brave New World podcast, and musings on AI, life, and 42.
I’m happy to announce that I’ve completed and posted my article, The Paradigm Shifts in Artificial Intelligence, on arxiv.org. I originally wrote it as background reading for a PhD seminar I was co-teaching with one of my colleagues almost two years ago. It took me a year longer than I expected to finish, but I learned a lot in the process, and I’ve made the article fun to read.
So, check it out. It’s a year and a half of thinking condensed into ten pages with a pretty picture and a table.
My Latest Podcast: AI and Instant Gratification
My latest podcast features Computer Science Professor Ellie Pavlick from Brown University:
https://bravenewpodcast.com/episodes/2023/09/07/episode-67-ellie-pavlick-on-the-cutting-edge-of-ai/
Ellie belongs to the new generation of AI researchers working on large language models and chatbots such as ChatGPT, and exploring the connections between machine and human intelligence using such models. I asked Ellie what drew her to AI. A big part of it, she observed, is the instant gratification you get from getting the machine “to work,” that is, to do something you intend. I’ve felt the same way about AI ever since I got into it as a graduate student in the late 70s: even though most attempts at creating something new end in failure, it is always exhilarating when the machine actually behaves the way you expect. Ellie’s response reminded me of this conversation between John Markoff and the early AI visionary Raj Reddy:
“There’s instant gratification. You do something, and it either works or it doesn’t work. If it doesn’t work, it becomes a challenge of debugging. That’s what Minsky and Papert used to say: ‘Learning is debugging.’”
That’s what AI is all about in a nutshell: constant debugging! Ellie talked about how good it felt to learn new things quickly through trial and error, leveraging the cooperative spirit of the open-source community.
Sky-High Expectations from AI
Meanwhile, businesses’ expectations of AI scientists are sky high. I was a guest on Yahoo Finance last month, discussing the skyrocketing salaries of Machine Learning scientists. Netflix recently advertised Machine Learning roles with salaries of almost a million bucks.
And yet, it’s very hard to find the right people: the variance in ability among people who look identical on paper is huge. But getting the right people only solves half the problem; business leaders also need to know how to use them. “What do I need to do about AI?” is the sixty-four-dollar question for business leaders these days.
Expectations of AI are also sky high, thanks to ChatGPT, which seems like magic to most people. They expect ChatGPT to give the right answers to everything. Diane Hall, the host of my Yahoo Finance segment, asked me whether I was concerned about AI’s “hallucinations,” referring to the fact that some ChatGPT answers that look correct are completely made up.
I reminded Diane that humans hallucinate as well, so why would we expect anything different from a machine that’s been trained on data that includes truths, falsehoods, and all our imperfections?
That’s the nature of “pre-trained” language models, which are built with virtually no curation of their training data. Applications such as ChatGPT that are built on top of them inherit all the imperfections of the underlying pre-trained models, however much we try to suppress them. So why should we trust AI to always be truthful?
Truthful or not, the AI train has left the station, and pre-trained models are already unleashing a torrent of what I call “low cost of error” applications across all areas of human activity. Sports is one such arena.
No Escape from Alcatraz
It’s the US Open again, and a wonderful time of year in New York City. And like last year, there’s no escape from Alcaraz. But this time around, it’s also exciting to see the young American Ben Shelton explode onto the scene and Coco Gauff become a serious contender for the title.
What’s also very interesting about the US Open is its creative use of AI, not just for refereeing but also for creating new material from the video feeds. Unlike Wimbledon, which remains in the dark ages with humans still making line calls, the US Open is less error-prone thanks to AI, which makes the calls and shouts them out in a human voice. When a player questioned a call during a match, one of the commentators remarked that it was pointless, since the machine is always right. “The beauty of the machine,” he continued, “is that there’s no one to get upset at!”
But what’s also super cool is how the US Open is using AI to create summaries of matches from video. The machine identifies the critical moments of a match and describes them. How amazing is that? And it’s a harbinger of things to come in terms of AI capabilities. Such applications fall into what I call “low cost of error” situations, where we let the machine make decisions without requiring a human in the loop. What’s more, the machine’s output, created in the blink of an eye, is often superior to what most humans would produce after hours of work. In such applications, imperfections and variations in the output are less important.
So, even as AI machines hallucinate and machine intelligence has a long way to go, AI is ready for prime time in the low-risk application areas of our lives. That’s where people are having a lot of fun with AI.