My most recent podcast was with Islamic scholar Bernard Haykel on “Islam and the West.”
September 11, 2001 is a day I won't ever forget. I watched the World Trade Center towers crumble from the 22nd floor of my apartment building on Bleecker Street. My friend Al Croker woke me up and told me to look out of my window to see whether I could spot the little plane lodged in the side of the north tower. It was a big plane, which only became apparent when the second plane hit the south tower.
For months afterwards I watched people weep on the streets around me as they gazed south towards where the towers had stood. That was when mainstream America became aware of Islam. It has been a turbulent two decades since that day, including the recent chaotic exit of the US from Afghanistan.
America and the West have certainly changed since then, and embraced "multiculturalism." Has Islam changed much? What is the current relationship of Islam with itself and with the rest of the world? I talked to Bernard Haykel, a scholar of Islam and professor of Near Eastern Studies at Princeton University, for answers to these questions. It's my first foray into the thorny world of religion, one that had little place in Huxley's Brave New World but is prominent in ours. Bernie helps us navigate this complex space. So, check it out.
I Lied to Facebook
When I set up my account on Facebook ages ago, I was reluctant to tell it more than necessary. So, I made up my birthday. Perhaps it was a fear that they were going to play Big Brother. I had just published a paper in 2007 with my colleague Arun Sundararajan about the emerging virtual "spaces of interaction" that these platforms would control.
I chose 1905 as my year of birth because I'm a science nut, and that's when Einstein wrote a series of papers that would transform the way we see the universe, including his theory of special relativity and the famous equation E=mc². That would make me 117 years old this year.
I also told Facebook that my birthday is on July 20, when it is really July 21, because Neil Armstrong walked on the moon on July 20 in the US and on July 21 where I was. That was quite a summer, with the Woodstock festival. Science, music, and its sense of humor were what attracted me to America as a kid.
The trouble with lying to Facebook is that many members of my family and close friends wish me happy birthday a day early. Every year, I feel compelled to respond with "thanks, but it's tomorrow!" In a weird way, the exchange connects me more to my people, who respond with "lol" or "huh?" Clearly, I had not imagined a world where algorithms, including my Internet Service Provider's, would remind people about birthdays. In the future, my avatar, or "Her," will send my customized responses automatically. Which gets me to…whether my avatar will have "real emotions" that it conveys to other people's avatars.
Can Machines Have Emotions?
Last week I addressed a class of European executives on the business implications of Artificial Intelligence. I had asked them in advance to send me questions that interested them. A common theme that ran through several questions was whether machines can have emotions, and whether "real bodies" are necessary to feel emotion.
It’s a fascinating subject, especially now, with Google’s recent suspension of an engineer who claimed that his AI system is sentient and has a soul.
Not surprisingly, the discussion turned to whether AI machines can have “real” emotion. Are the feelings real or artificial? We sometimes forget that the “A” in AI stands for artificial, so the question is, how real does artificial need to be to be real enough?
Equally importantly, does this question have any practical relevance?
To answer the first question, I’ll turn to philosopher Dave Chalmers, and draw on a humorous story by Steven Pinker.
If you’re interested in the nature of reality, check out my recent podcast with Dave Chalmers, where we discuss his latest book “Reality+.” Dave draws on technology, specifically VR and AR, to answer an age-old question in philosophy, namely, how do we know whether what we are experiencing is “real?” Interestingly, this will become a very practical issue in our lives as we edge closer to Isaac Asimov’s world of science fiction where real and virtual are indistinguishable.
At the risk of over-simplification, Dave’s position is that virtual is real. In a world where you can’t tell the difference, is there a difference? Does not knowing make your experience any less real? Dave argues that it does not, and that one could live a completely meaningful life in a virtual world. He proposes several thought experiments to make his point.
Pinker's thought experiment, from his 1997 book "How the Mind Works," is a little more tongue-in-cheek. Imagine that an alien civilization of silicon-based creatures lands on Earth. The expedition leader radios the chief, informing him that there's a biological species here that's incredibly intelligent and has made all kinds of scientific discoveries as part of its advanced civilization. The chief doesn't believe it. Perhaps they go through a meat-based larval stage? Nope. But surely their brain isn't meat? Yes, it is, says the expedition leader; they are meat all the way through.
The chief radios back in complete disbelief. Sentient meat?!
Pinker's point is that we are limited by our experience and imagination. Humans created machines, and however intelligent we make them, it is a stretch for us to believe that they will ever have "real" feelings or intelligence. Somehow, we have a hard time coming to terms with the possibility that we could just be algorithms ourselves, albeit complex ones.
The question has deep and important implications. Have you noticed how difficult it is these days to get a real human to talk to when you need customer service? And how dumb and insensitive the chatbots are at the moment? Whoever cracks this one – I suspect it will be Google – will change the world and make a lot of money.
It also seems clear that we will have our personal “agents” – such as Samantha in the movie “Her” – to deal with an increasingly automated world. Will we trust them more if they are more sensitive to our emotions?
Last but not least, we will probably need new laws in a world where the virtual and the real are indistinguishable. Imagine a defendant in court pleading, "but your honor, I had no idea she was REAL." That will be a whole Brave New World that Huxley hadn't imagined.
In the meantime, I’ll let you ponder Chalmers and Pinker, and decide whether machines can have emotions. Email me your reactions, and I’ll have the machine analyze your responses, which I’ll share in a future post.
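For the curious, "have the machine analyze your responses" could be as simple as a few lines of sentiment scoring. The sketch below is only a toy illustration of the idea, not the actual tool I'll use; the word lists and the score_reply function are invented for the example.

```python
# Toy sentiment scorer for emailed reactions -- a minimal sketch, not the real analysis.
# The word lists and scoring rule are illustrative assumptions only.

POSITIVE = {"yes", "real", "love", "agree", "fascinating", "sentient"}
NEGATIVE = {"no", "never", "fake", "disagree", "impossible", "hype"}

def score_reply(text: str) -> float:
    """Return a crude sentiment score in [-1, 1] based on word counts."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

replies = [
    "Fascinating question, I agree machines could be sentient.",
    "No way, this is pure hype and the emotions are fake.",
]
for r in replies:
    print(f"{score_reply(r):+.2f}  {r}")
```

Whether a score like that counts as the machine "feeling" anything is, of course, exactly the question at hand.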
Until then.
V/