Why You Might Not Want AI Chatbots That Are Too Human

AI that makes jokes can raise uncomfortable questions

  • There are growing concerns that AI chatbots seem too human. 
  • Experts say that realistic chatbots can seem threatening. 
  • Human-like chatbots are raising privacy concerns. 
Human-looking robot looking directly at the camera

Alexandra_Koch / Pixabay

Chatbots powered by artificial intelligence (AI) are becoming more human-like every day, but not everyone thinks that's good. 

AI agents that make jokes can seem threatening, according to ongoing research by Marat Bakpayev, a marketing professor at the University of Minnesota-Duluth's Labovitz School of Business and Economics. His study adds to a growing sentiment among experts that limits should be placed on chatbots as they grow in popularity. 

"This mode of interaction is still kind of new for many people," Tom Andriola, the chief digital officer at the University of California Irvine, who is currently working on a project to deploy humanlike chatbots, told Lifewire in an email interview. "A large percentage of the population has been raised on phone conversations and email exchanges as primary modes of communication. Even text-based chatbots have been met with mixed reactions as they have been rolled out over the past decade."

Too-Human AI

Collage of a chatbot face over a field of code

Gerd Altmann / Pixabay

One explanation for why people dislike human-like chatbots is that too much of a human facade invites a direct comparison between the machine and a person, Bakpayev said.

"And there is a perceptual mismatch between what people can anticipate and what they see," he added. "It leads to a violation of expectations. More recent studies in consumer research identify that when a chatbot is anthropomorphic, the expectations of chatbot efficacy are inflated."

The more a chatbot looks and acts like a human, the more uncomfortable people get, David Ciccarelli, the CEO of Voices.ai, a company specializing in AI technology, said in an email. He pointed to a recent project in which his team programmed a chatbot for a client and made it as humanlike as possible. 

"The client was super happy, but their customers found the bot way too creepy," he added. "We learned from that experience and changed our approach. To help people feel more at ease with chatbots, AI programmers need to find a sweet spot between making them act like humans and making sure it's clear they're not actually human."

A recent report reinforced the idea that many users are uncomfortable with the generative AI technology that powers chatbots. The study by Disqo found that 60 percent of people trust AI-generated content less than human-generated content, with more than half saying they trust it much less.

Not Your Grandfather’s AI

Young person with a robot body at a table

rony michaud / Pixabay

Discomfort with chatbots may come down to generational differences, Andriola said.

"My parents hate them; my kids, on the other hand, couldn't imagine why in the world you would want or need to talk to a human being," he added. "The younger generations have demonstrated more comfort with these digitally enabled interactions. They've grown up interacting with characters in video games and morphing their faces with social media filters."

But Andriola said some people consider chatbot interactions weird, awkward, or potentially privacy-compromising. "Take the healthcare example," he added. "Do I really feel comfortable talking with a faux human about my recovery from hip surgery? Are they recording this? Where does that data go? Will it lead to denying me coverage or service?"

One area where chatbots can come across as creepy is hiring. HireVue, a company that uses AI in its hiring technology, found in a study that 39 percent of workers are uncomfortable engaging with chatbots to answer initial questions in the hiring process, Lindsey Zuloaga, the company's chief data scientist, said in an email. 


"Most of us have interacted with chatbots in some way at this point (like returning items to an online retailer, making a dinner reservation, or asking about the status of a job application)," Zuloaga said. "Interactions of this kind are the typical, benign chatbot use cases, but ChatGPT and other generative AI tools are raising well-deserved concerns."

To assuage fears, Andriola said, programmers should make sure chatbots don't pretend to be human.

"The goal is not to confuse or trick people that they're interacting with another human being, but to recognize that digital human chatbots can be very effective at certain tasks or types of questions that are more timely or convenient for the end user," he added. "It doesn't replace the need to talk to a human, just complementing it for certain applications."
