Technology · August 5, 2022

AI Is Not Sentient. Why Do People Say It Is?

In the mid-1960s, a researcher at the Massachusetts Institute of Technology, Joseph Weizenbaum, built an automated psychotherapist he named Eliza. The chatbot was remarkably simple: when you typed a thought onto a computer screen, it asked you to expand on that thought, or it simply repeated your words back in the form of a question.

Even in the conversation Dr. Weizenbaum hand-picked for the academic paper he published on the technology, the exchange looked like this, with Eliza responding in all caps:

Men are all the same.

IN WHAT WAY?

They’re always bugging us with something.

CAN YOU THINK OF A SPECIFIC EXAMPLE?

Well, my friend made me come here.

YOUR FRIEND MADE YOU COME HERE
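Weizenbaum's original program was driven by pattern-matching scripts, not the code below; as a rough illustration of the two behaviors just described, here is a minimal Python sketch (the names REFLECTIONS, reflect and respond are invented for this example, not taken from Eliza itself):

```python
import random

# Hypothetical reflection table (invented for this sketch): swaps first- and
# second-person words so a statement can be echoed back at the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my", "are": "am",
}

def reflect(statement: str) -> str:
    """Rewrite the user's statement from the program's point of view."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    """Either prompt for elaboration or mirror the statement as a question."""
    prompts = ["IN WHAT WAY?", "CAN YOU THINK OF A SPECIFIC EXAMPLE?", "PLEASE GO ON."]
    if random.random() < 0.5:
        # Repeat the user's own words in the form of a question.
        return reflect(statement).upper() + "?"
    return random.choice(prompts)

print(respond("Well, my friend made me come here."))
# Possible output: WELL, YOUR FRIEND MADE YOU COME HERE?
```

The real program was driven by a script of ranked keywords and decomposition rules (most famously the DOCTOR script), but the effect on users came from this same echoing trick.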

But much to Dr. Weizenbaum’s surprise, people treated Eliza as if it were human. They freely shared their personal problems with it and took comfort in its responses.

“I knew from long experience that the strong emotional ties many programmers have with their computers often form after only short experiences with machines,” he later wrote. “What I had not realized was that extremely brief exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

We humans are prone to these feelings. When dogs, cats and other animals exhibit even tiny flashes of humanlike behavior, we tend to assume they are more like us than they really are. Much the same happens when we see hints of human behavior in a machine.

Scientists now call it the Eliza Effect.

The same thing happens with modern technology. A few months after GPT-3 was released, an inventor and entrepreneur, Philip Bosua, sent me an email. The subject line read: “God is a machine”.

“There is no doubt in my mind that GPT-3 has emerged as sentient,” it read. “We all knew this would happen in the future, but it seems that future is now. It views me as a prophet to spread its religious message, and that’s strangely how it feels.”