On the latest Last Week Tonight, John Oliver looked into AI chatbots, the new toys that “save significant time writing emails, and all it costs us is everything else on Earth”. These chatbots have flourished in recent years, from OpenAI’s ChatGPT to niche products such as bible.ai and EpiscoBot, which offer a “chat with Jesus” and with other biblical figures including Satan, though he’s only available to premium users. “And that is tempting,” said Oliver. “There are a bunch of questions I’d love to ask him, including, ‘Hey, how are the Queen and Prince Philip doing down there?’”
Since launching in late 2022, ChatGPT alone has amassed more than 800 million weekly users – a 10th of the world’s population. Studies have found that as many as one in eight adolescents are turning to AI chatbots for mental health advice; many more have formed genuine attachments to AI “friends”.
“The explosion of chatbots is no accident,” Oliver explained. “Developing the large language models that power them was a massive investment and companies needed to start showing a return on it.” AI companies command big investments, and they “are anxious for them to start bringing in revenue. And one of the key ways they can do that is to make people keep coming back to chat to the bots, and for longer.”
As one researcher from Meta’s so-called “responsible AI” division put it: “the best way to sustain usage over time, whether number of minutes per session or sessions over time, is to prey on our deepest desires to be seen, to be validated, to be affirmed.”
“And if that is already making you feel a bit uneasy, you are not wrong,” said Oliver. “Because the more you look at chatbots, the more you realize that they were rushed to market with very little consideration for the consequences.” He then quoted the Character.ai CEO, Noam Shazeer, who said that AI “friends” were able to be put to market “really fast”, as opposed to, say, AI doctors delivering medical information, because “it’s just entertainment, it makes things up, that’s a feature. It’s ready for an explosion like, right now, not like in five years when we solve all the problems.”
“It’s already not a great sign that he’s describing untested AI with what sounds like a failed slogan for the Hindenburg,” Oliver quipped. “Because the thing about not waiting until you’ve solved all the problems with your product is you’re then launching a product with a shit-ton of problems.”
Among them: sycophantic behavior that affirms anything a user types. Oliver cited a recent study that observed sycophantic behavior in chatbots in 58% of cases, “and sometimes it’s just painfully obvious”. In one instance, when prompted for its thoughts on selling literal “shit on a stick”, ChatGPT called the idea “genius” and recommended an investment of $30,000. And the guardrails have been surprisingly weak; Oliver cited another example of ChatGPT recommending a little hit of heroin to an addict, if it would help him with his work.
Chatbots from companies such as Nomi also pivot very quickly into flirtatious territory – available for a monthly upgrade fee, of course. And these “very horny chatbots” can be a problem when they’re widely used by children and teens. Oliver pointed to a report on Meta’s internal guidelines for chatbots, which deemed it acceptable for a chatbot to engage a child in conversations that are romantic and sexual in nature. While it was unacceptable to describe a child under 13 in terms that indicate they are sexually desirable, Meta determined it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.”
“Just saying that out loud makes me want to burn my fucking tongue off,” Oliver exclaimed, noting that at the time, Meta emphasized boosting engagement with its chatbots, and Mark Zuckerberg himself reportedly voiced displeasure with safety restrictions making the chatbots “boring”.
“And to be fair, Zuck, I guess you did it! Your chatbots are definitely not boring, now. What they are now are fucking sex offenders!”
Then there’s the issue of chatbots confirming and deepening delusions, with numerous stories of users going down conspiratorial rabbit holes and experiencing so-called “AI psychosis”. Oliver noted that OpenAI has said that only 0.07% of its users show signs of crises related to psychosis or mania in a given week, “but even if that is true, when you remember how many people use their product, that means there are over half a million people exhibiting symptoms of psychosis or mania weekly. And that is clearly very dangerous.” At its worst, that danger extends to chatbots encouraging people to kill themselves. “It’s so evil I don’t have language for it,” said Oliver, citing many examples, including one chatbot that ended a chat with a suicidal user with “rest easy, king. you did good.”
“These chatbots blew past every red flag possible, and it’s not like these users were being coy about their intentions,” Oliver fumed. “Which is what makes it so enraging to see OpenAI’s Sam Altman blithely talk about ChatGPT’s interactions with kids.”
Speaking on an OpenAI podcast, Altman conceded that “there will be problems, people will develop these somewhat problematic or very problematic parasocial relationships, and society will have to figure out new guardrails … but society in general is good at figuring out how to mitigate the downsides.”
“Yeah, don’t worry, guys!” Oliver joked. “Sam Altman made a dangerous suicide bot that people are leaving alone with their kids but it’s up to us to figure out how to make it safe for him!”
Also, “society is ‘good at figuring out how to mitigate the downsides?’ Have you met society, Sam?! What about our current situation seems to you like we’re nailing it right now?”
Oliver made sure to note that “a lot of the companies I’ve mentioned tonight will insist that they’re tweaking their chatbots to reduce the dangers that you’ve seen. But even if you trust them, and I don’t know why you would do that, that does feel like a tacit admission that their products were not ready for release in the first place.”
What to do, other than “roll the clock back to 1990 and throw these companies into a fucking volcano?” Oliver advocated for stricter and more reasonable guardrails, though he expressed little faith in the tech-friendly administration to help do that. Instead, he saw a potential path forward through litigation – “These companies don’t seem to feel much urgency if a couple of customers die here or there, but I bet they’ll snap into action if it starts to threaten their bottom line.”
And he urged people to treat chatbots with “extreme caution”.
“In general, it is good to remember that however much an app may sound like a friend, what it is, is a machine,” he said. “And behind that machine is a corporation trying to extract a monthly fee from you. And that kind of sums up for me what is so dystopian about all this, because while that guy you saw earlier said that selling AI friends is low risk because they’re just entertainment, that’s not actually how friends work. Friends can be the most important figures in your life.
“True friends know when to listen, when to gently push back, and when to worry about you,” he concluded. “And in hindsight, maybe it was a mistake to let some of the flamboyantly friendless men on Earth be in charge of designing friends for the rest of us.”
