Discover more from Vasant Dhar's Brave New World
Assimilating ChatGPT: Resistance is Futile
And The Liability Side of AI
My Recent Podcast
My latest podcast was with Paulo Kaiser, CEO of the tech consulting company Plative Inc. Paulo is an alum of NYU Stern and the Courant Institute, from the days when the Internet was just emerging.
I get a lot of questions from students and business leaders these days on how to prepare for the rapidly changing world of ChatGPT and AI. What does it mean to be literate these days? What tools and thinking do you need in order to succeed? It was great to catch up with Paulo after all these years and talk about these questions, including how AI and ChatGPT will permeate work.
In my previous episode with Sam Bowman, we discussed how large language models (LLMs) like ChatGPT work, and their uses. Will they replace human decision-making or will they make it better?
Sam’s research team has done a couple of clever experiments that begin to address this question. They picked two specialized problems from healthcare and business where a pre-trained model like GPT-3 will make lots of mistakes, as will a human who isn’t a specialist. Could such a noisy system still be useful to human decision makers? The team observed the performance of two groups of people – one working with the machine and one without it. Both problems required a lot of reading within a limited amount of time.
The results showed that the combo did better. It’s hard to be certain about why, but my conjecture is that the language model was able to summarize things well enough to provide the human with useful information that was hard to glean under time pressure. Consulting is all about solving the right problem as efficiently as possible. My general takeaway is that machines like GPT are a godsend to consultants, who never have enough time. Paulo agrees.
LLMs will likewise transform the legal profession by doing the heavy lifting, freeing up more time for thinking, and as Paulo stresses, exploring solutions with clients by asking the right questions and solving their real problem.
ChatGPT in Education
What about ChatGPT’s impact on learning and teaching?
That’s a big question these days. As teachers, we use assignments, tests and exams as proxies for how much students have learned: higher scores are better. Many teachers use open-book exams.
But many of the existing proxies we use to measure knowledge won’t work when students have instant access to an oracle. So, what will be the new proxies for learning and assessment in the era of ChatGPT? That’s the key question for educators going forward.
One line of thinking is that it is important for students to be able to hold their own during a professional conversation. You can’t measure that using the old proxies.
When I was in engineering school, a routine part of our evaluation was a “Viva Voce,” or an oral exam. The goal of the exam was to assess whether you could converse intelligently about engineering. A typical question might be something like “is the rate of heat transfer higher in streamlined or turbulent flows of a fluid, and why?” Check out this demonstration of ChatGPT in an oral exam on Data Science set up by my colleagues Foster Provost and Joao Sedoc. I’d be impressed if a student performed as well as the machine without its help.
I’m thinking of going back to orals. It’s ironic that in this high-tech era of instantly available knowledge, we may need to evaluate people the old-fashioned way. It’s a strange new world.
The Liability Side of AI
Concerns about AI risks have reached a boiling point, as expressed in this open letter to stop building systems such as GPT-4. It’s a little late and unrealistic to try to stop AI, but lawmakers need to think seriously about the liability side: Who is responsible for the harms from AI that are bound to emerge? In an article I published last week in The Hill, I describe the risks that arise when no one fully understands the machine’s decision-making behavior. How should we figure out who is liable when it makes mistakes?
That’s what we need to figure out.
It’s important to establish the right expectations and to be on guard. At the moment, the darker uses of the technology that cause concern are the creation of fakes and deceptive pitches to vulnerable people, but it isn’t difficult to imagine society-destroying outcomes. Barring consequences, unethical AI systems will proliferate. But the reality is that the AI genie is already out of the bottle, and there’s no putting it back. So, we had better figure this out pronto.