According to a new study, people are not quite ready for a chatbot takeover

Chatbots were designed to mimic human-like interactions over text messages or online chat windows, and are quickly becoming the first – and sometimes only – point of contact with web-based customer services in healthcare, retail, government, banking and more.

Advances in artificial intelligence and natural language processing, along with a global pandemic that has reduced human contact to the bare minimum, have placed the chatbot at the heart of online interactions, marking it as an essential part of the future.

However, new research from the University of Göttingen suggests that people are not yet ready for the chatbot to take over — especially not without prior knowledge of its presence behind interactions.

The two-part study, published in the Journal of Service Management, found that users reacted negatively once they learned they were communicating with a chatbot during an online exchange.

However, when a chatbot made a mistake or failed to fulfill a customer request but revealed that it was a bot, user reactions tended to be more positive, as users could understand and accept the outcome.

The German university’s study also found that users’ negative reactions grew stronger as the perceived importance of their service request increased.

More lenient towards chatbots

Each part of the study placed 200 participants in a scenario where they contacted their energy provider via online chat to update the address on their electricity contract after moving house.

Half of the respondents were informed that they were interacting with a chatbot; the other half were not.

“If their issue isn’t resolved, disclosing that they spoke to a chatbot makes it easier for the consumer to understand the root cause of the error,” said Nika Mozafari, lead author of the study.

“A chatbot is forgiven more readily for making a mistake than a human.”

The researchers also suggested that customer retention after such encounters might actually improve if users are made aware of what they are dealing with in a timely manner.

In a sign of the growing sophistication of, and investment in, chatbots, the Göttingen study comes just days after Facebook announced an update to its open-source Blender Bot, which launched last April.

“Blender Bot 2.0 is the first chatbot that can simultaneously build long-term memory that it can access continuously, scour the web for up-to-date information, and engage in sophisticated conversations on almost any topic,” the social media giant said on its Facebook AI blog.

Facebook AI researchers and research engineers Jason Weston and Kurt Shuster said that current chatbots, including the original Blender Bot 1.0, are able to express themselves articulately in ongoing conversations and generate realistic-looking text, but have “goldfish memories.”

Work is also continuing to eliminate repetition and contradiction from lengthy conversations, they said.