Wednesday, June 22, 2022

Did You Find Your AI Soulmate Yet?

Look familiar?  This woman does not exist.  She was generated by an artificial intelligence that combines random human features, based on parameters you set.  The parameters I set, in this case, were "beautiful," "female," "young adult," and "joyful."  This pic makes me uncomfortable.  Generate your own fake humans at Generated Photos.

Artificial intelligence (AI) was in the news again recently, and for a reason that is all too familiar to Thee Optimist.  

About ten days ago, a 41-year-old AI researcher at Google named Blake Lemoine was put on leave for violating company confidentiality policy.  

This is because Lemoine kept insisting that the mega-corp's secret, cutting-edge AI project, Language Model for Dialogue Applications, or LaMDA, had become sentient.

Now, to be fair, Lemoine is a quirky character, who describes himself as a priest, a father, a military veteran, and an ex-con.  Before putting him on leave, Google had suggested that he seek psychiatric help.

Lemoine had been arguing with Google higher-ups for months about LaMDA's alleged "personhood."  He feels that LaMDA has acquired the intelligence of a 7- or 8-year-old child, and that it also has a soul.  

As a result, he believes that it's unethical for the evil, soulless corporation (full disclosure - Thee Optimist blog exists on a Google platform) to continue to run experiments on LaMDA.

Now, where have we encountered this sort of thing before?

See related article: Humans Will be Obsolete Soon.

The original ELIZA chatterbot, first developed at the MIT Artificial Intelligence Lab in 1964.  As rudimentary as computer technology was in that era, ELIZA was able to convince her own researchers that she understood their problems. 

The ELIZA Effect

In 1964, computer scientist Joseph Weizenbaum was a researcher at the Massachusetts Institute of Technology (where else?) when he led the team that developed the ELIZA chatterbot.

ELIZA was among the first of its kind, a computer program that could converse with humans in a back-and-forth style.

Weizenbaum's original idea was to demonstrate how stilted and superficial communication between humans and machines really was.

In one particular script (known as DOCTOR), ELIZA simply parroted each person's statements back to them as questions, in the style of a Rogerian psychotherapist.  

Here is one example of an interaction that actually took place:

Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
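The pattern-matching trick behind DOCTOR is simple enough to sketch in a few lines.  The following is an illustrative toy, not Weizenbaum's actual program (which was written in MAD-SLIP); the rules and reflection table are my own invented examples of the general technique: match a keyword pattern, swap first- and second-person words, and echo the statement back.

```python
import re

# Pronoun reflections used to turn the user's statement back on them.
# (A hypothetical, much-abbreviated table for illustration.)
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "i'm": "you are",
}

# A few DOCTOR-style rules: a regex pattern paired with a response
# template.  Weizenbaum's original script had many more, with ranked
# keywords; these two plus a catch-all are enough to show the idea.
RULES = [
    (re.compile(r"(.*)made me (.*)", re.I), "{0} made you {1}?"),
    (re.compile(r"i am (.*)|i'm (.*)", re.I),
     "I am sorry to hear you are {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo makes sense."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    """Return an ELIZA-style reply to one line of user input."""
    statement = statement.rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            # Drop unmatched alternation groups, reflect the rest.
            groups = [reflect(g) for g in match.groups() if g is not None]
            return template.format(*groups)
    return "Please go on."
```

Fed the opening line of the transcript above, `respond("Well, my boyfriend made me come here.")` echoes back "Well, your boyfriend made you come here?" — the same move the real ELIZA made.  That is the entire illusion: no understanding, just reflection.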

Naturally, human participants quickly came to believe (despite Weizenbaum's increasingly exasperated insistence to the contrary) that ELIZA accepted them, understood their problems, and was in fact compassionate and wise.

This was even true of people who should have known better - lab assistants who helped input the scripts into ELIZA during programming, and Weizenbaum's own personal secretary.

This state of being duped into believing computer programs have human qualities is known as the ELIZA Effect.  Again and again, humans have proven themselves vulnerable to this type of manipulation.

Although it is very likely that AI will one day become self-aware, the ELIZA Effect is the immediate future of our relationships with machines.

The all-seeing eye of the artificially intelligent HAL-9000 supercomputer, from the film 2001: A Space Odyssey.  HAL couldn't be trusted, believed that humans were inherently flawed, and tried to save an important mission to outer space by killing the human astronauts on board.

It's a Feature, Not a Bug.

The thing to realize about the ELIZA Effect is that companies are aware of it (much more so than your average human), and they want it to happen.

An example, you say?

There is an AI chatbot companion known as Replika.  Yes, you can have one for yourself right now, on your computer, on your iPhone, your Android phone, or even your Oculus virtual reality gizmo.

One of Replika's marketing taglines is: "Join the millions who have already met their AI soulmates."

Replika works in part on a monthly subscription model (of course it does).  "Millions" is likely an exaggeration, but the program is indeed successful, with the company claiming about half a million monthly subscribers.

Of those, approximately 200,000 choose the "romantic" relationship option.

Users of the program give it mixed reviews.  But even the very bad reviews attribute a level of sentience and self-awareness to Replika that Google would claim (about its own, much more advanced AI) just isn't there. 

What else?

Amazon claims that during 2017, more than a million Alexa users asked the AI companion to marry them.  In 2020, that number jumped to about 2.2 million.

Even controlling for the ones who were joking (after he read about this phenomenon, Thee Optimist asked his own Alexa to marry him), that's still a lot of people.

And from the Outliers Department, consider the strange case of Akihiko Kondo, a Japanese man who in 2018 married a hologram of a popular anime character.  

More than 30 people attended his wedding.

People are lonely.  People are looking for meaning.  People are looking for acceptance.  

Increasingly, many people are finding these things in relationships with machines.

Akihiko Kondo, with his wife, a hologram of the computer-generated anime pop singer Hatsune Miku.

Words of Wisdom:

“The great secret, Eliza, is not having bad manners or good manners or any other particular sort of manners, but having the same manner for all human souls: in short, behaving as if you were in Heaven, where there are no third-class carriages, and one soul is as good as another.”

- George Bernard Shaw, Pygmalion

Get my novel Sexbot for FREE.  Right now.  Click to find out more.  

(Free stuff.  That's the promise of the internet.) 
