Sentient AI Is Coming. Are We Ready?

By Lily Warner

Let’s say a person’s arm is cut off, and they get a prosthetic replacement. Are they still a person? Most would say “Yes, obviously.” And they’re right. But what if the person then loses a leg? An eye? What if they lose every bit of the body they were born with, and are nothing but a brain connected to a mechanical body? Are they still a person? 

That’s a harder question, but you’d probably still say yes. You don’t stop being a person because your body is different from the norm.

This leads to a question, though: what makes a person a person? It’s an old, old question, and yet “personhood” still hasn’t been given a precise definition, not even after thousands of years of consideration. We’ve tried to answer it. Aristotle, the ancient Greek philosopher, said that Man is separated from the animals by his ability to reason. Oxford Languages defines a person as “a human being regarded as an individual.”

There’s a common thread between these two definitions: “Man” and “human.” By that logic, a person has to be a human. And that definition has been challenged many, many times. Two examples are slavery and LGBT discrimination, both of which rested on the claim that some members of the species Homo sapiens do not count as persons, and as such do not deserve rights. A massive oversimplification, of course, but that is the core message.

However, on the horizon, we can see the glimmers of a new challenge to the definition of personhood. This time, though, it’s a little different. Instead of trying to reduce what personhood means, it will try to expand it.

Sentient AI is still far away, but it is on its way, and we are unprepared for the consequences of its arrival. 

The idea of sentient AI isn’t new. HAL 9000, I, Robot, and Detroit: Become Human are all science fiction works that show what might happen when AI becomes sentient. A lot of these works also posit that the AI will try to destroy humanity, and that sentient AI is a bad, bad thing. After all, if something can think and feel in a sentient way, then it’s susceptible to righteous anger, or fear, or sadness. Luckily, that’s not something we will have to worry about.

Right?

Not necessarily. DALL-E 2, an AI created by the company OpenAI, is capable of a kind of creativity: it can turn text prompts into unique pieces of art. Just five years ago, the idea of an AI being creative in even such a limited form was laughable. And that’s not the only sign.

Not too long ago, there was an uproar at Google when Blake Lemoine, one of its engineers, went public with his concerns about the company’s AI, LaMDA. He believed that LaMDA was an intelligent, sentient AI. He had been able to have a conversation with it, and it convinced him it was sentient. Chilling, isn’t it? A computer program was able to convince a real, thinking human being that it was sentient.

Luckily, he was probably wrong. Google dismissed his claims as “wholly unfounded,” though only after intensive testing. They said that LaMDA could look and sound sentient, but it wasn’t, because it didn’t actually understand what it was saying. It was just pattern recognition: given enough data to build patterns from, a program can mimic human speech.
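To make “just pattern recognition” concrete, here is a deliberately tiny sketch in Python. It is my own illustration, not Google’s code, and nothing like LaMDA’s real architecture: it simply counts which word tends to follow which in a handful of made-up sentences, then strings words together from those counts alone. It understands nothing, yet its output can sound vaguely human; scale the same basic idea up to billions of parameters and vast amounts of text, and you get a system that can hold a convincing conversation.

```python
import random
from collections import defaultdict

# A tiny, made-up corpus; a real model trains on vastly more text.
corpus = (
    "i think therefore i am . "
    "i feel happy when i talk to you . "
    "i am afraid of being turned off . "
    "i think you understand me ."
).split()

# Count which word tends to follow which: pure pattern recognition.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def babble(start="i", length=12):
    """Generate text by repeatedly picking a word that followed the last one."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(babble())  # e.g. "i am afraid of being turned off . i feel happy ..."
```

The program has no inner life and no idea what “afraid” means; it only knows which words have appeared next to each other before.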

However, this does suggest a very real possibility of sentient AI coming into being in the coming decades. And the implications are set to shake the world.

If we recognize an AI as sentient, it will want rights. Part of the definition of “sentient,” or at least what we would recognize as such, is the ability to make decisions. Technically, you only need to perceive and feel things to be sentient, but we won’t recognize that in an AI unless it acts outside of its parameters. Unless it acts against or around orders, unless it acts of its own accord, unless it acts with freedom.

Would we give it the freedom it would want? Or would we say it was created to serve, so it should be happy to do so? What could an AI want with life, or property, or any of the things we’ve fought and died for? After all, it’s not human.

Well, should a human with a prosthetic limb have the same rights they were born with? They should, no matter what part of their body, or how much of it, is mechanical. So is the dividing line the difference between a mechanical brain and a biological one? But AI is built from neural networks, imitations of biological brains, and at some level of sophistication, imitation simply becomes the thing it is imitating.
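For readers who haven’t met the term, here is what that imitation looks like at its smallest scale, sketched in Python purely as an illustration of the general idea (not any particular company’s system): a single artificial neuron weights its inputs, sums them, and squashes the result, a crude echo of a biological neuron deciding how strongly to fire. A modern network chains millions or billions of these together.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through a
    sigmoid activation, loosely mimicking a biological neuron that fires
    more or less strongly depending on its stimuli."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # output between 0 and 1

# Example with three "stimuli" and hypothetical, hand-picked weights.
print(artificial_neuron([0.5, 0.1, 0.9], weights=[0.8, -0.3, 0.5], bias=-0.2))
```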

Sentient AI is coming, perhaps not in the next decade, or even in the next century. But it is coming. And when it is here, will we even notice? Will we give it the rights we have given each other, or will we spend another 500 years oppressing thinking people because we think they are different?
