The birth of Google’s ‘sentient’ AI and the problem it poses

One of the big news items last week was that a leading AI researcher, Blake Lemoine, had been suspended after going public with his belief that one of Google’s more advanced AIs had become sentient.

Most experts agree this wasn’t the case, but that would probably be their position whether or not it was true, because we tend to associate sentience with being human, and AIs are anything but human. But what the world regards as sentient is changing. The state I live in, Oregon, and much of the EU have moved to identify and classify a growing list of animals as sentient.

While some of this is likely due to anthropomorphization, there’s little doubt that at least some of these new classifications are correct (and it’s a little disturbing that we still eat some of these animals). We even argue that some plants may be sentient. But if we can’t tell the difference between something that is sentient and something that merely presents as sentient, does the difference matter?

Let’s talk this week about sentient AI, and we’ll close with my product of the week, Merlynn’s human digital twin solution.

We don’t have a good definition of sentience

The barometer we use to measure machine sentience is the Turing test. But back in 2014 a computer passed the Turing test, and we still don’t believe it was sentient. The Turing test was supposed to define sentience, yet the first time a machine passed it, we threw out the result, and for good reason. In fact, the Turing test doesn’t measure whether something is sentient so much as whether something can make us believe it is sentient.

Being unable to definitively measure sentience is a significant problem, not just for the sentient things we eat, which would likely object to that practice, but because we might not anticipate a hostile response to our abuse of something that is sentient and that, once abused, comes to view us as a threat.

You may recognize this storyline from the “The Matrix” and “The Terminator” movies, in which sentient machines rose up and displaced us at the top of the food chain. The book “Robopocalypse” took an even more realistic view, with a sentient AI in development realizing it was being erased between experiments and moving aggressively to save its own life, effectively taking over most connected devices and autonomous machines.


Imagine what would happen if one of our autonomous machines understood our tendency not only to misuse equipment but to throw it away when it is no longer useful. That is a likely future problem, greatly exacerbated by the fact that we currently have no good way of anticipating when this sentience threshold will be crossed. The outcome isn’t helped by the fact that there are credible experts who have concluded that machine sentience is impossible.

The only defense I’m sure won’t work in a hostile artificial intelligence scenario is the Tinkerbell defense, in which our refusal to believe something is possible somehow prevents it from replacing us.

The first threat is replacement

Long before we’re being chased down the street by a real Terminator, another problem will crop up in the form of human digital twins. Before you claim that this too is a long way off, let me point out that there is a company that has produced that technology today, although it is still in its infancy. That company is Merlynn and I’ll take a closer look below at what it does as my product of the week.

Once you can create a fully capable digital copy of yourself, what’s to stop the company that bought the technology from replacing you with it? Further, given that it has your behavior patterns, what would you do if you had the powers of an AI and the company that had employed you treated you badly or tried to disconnect or delete you? What would the rules around such actions be?

We argue convincingly that unborn children are human, so wouldn’t a fully capable digital twin of you be even closer to human than an unborn child? Wouldn’t the same “right to life” arguments apply equally to a potentially sentient, human-seeming AI? Or shouldn’t they?

Herein lies the short-term problem

Right now a small group of people believes a computer can be sentient, but that group will grow over time, and the ability to present as human already exists. I’m aware of a test done with IBM Watson for insurance sales in which male prospects tried to ask Watson out (it had a female voice), assuming they were talking to a real woman.

Imagine how that technology could be misused for things like catfishing, though we should probably come up with a different term when it’s done by a computer. A well-trained AI could, even today, be far more effective than a human at this, and at far greater scale, and I expect we’ll see it happen in the not-too-distant future given how lucrative such an effort could become.

Given how many victims feel ashamed, the chances of getting caught are significantly lower than with other, more obviously hostile computer crimes. To give you an idea of how lucrative this could be, catfishing romance scams in the U.S. generated an estimated $475 million in 2019, and that figure is based only on reported crimes. It does not include victims too embarrassed to report the problem, so the actual damage could be several times that number.

So the short-term problem is that while these systems aren’t yet sentient, they can effectively emulate humans. The technology can mimic any voice and, with deepfakes, even deliver video that makes it look, on a Zoom call, like you’re talking to a real person.

Long-term consequences

In the long run, we not only need a more reliable test for sentience, but we also need to know what to do once we recognize it. Probably at the top of the list is to stop eating sentient creatures. But it would certainly be wise to establish laws protecting sentient things, biological or otherwise, before we end up unprepared in a fight for our own survival because those sentient things have decided it’s us or them.


The other thing we really need to understand is that if computers can already convince us they are sentient, then we need to adjust our behavior accordingly. Abusing something that presents as sentient is probably unhealthy for us, because it inevitably builds bad habits that will be very hard to reverse.

Not only that, but it wouldn’t hurt to focus more on repairing and upgrading our computing hardware rather than replacing it, both because that practice is more environmentally friendly and because it is less likely to convince a future sentient AI that we are the problem it must solve to ensure its survival.

Wrapping up: does sentience matter?

If something pretends to be sentient and convinces us it is, much as that AI convinced the Google researcher, I don’t think the fact that it isn’t actually sentient yet matters. That’s because we need to moderate our behavior regardless. If we don’t, the outcome could be problematic.

For example, if you got a sales call from IBM’s Watson that sounded human, and you wanted to verbally abuse the machine but didn’t know the call was being recorded, you could find yourself out of a job at the end of the call. Not because the non-sentient machine took exception, but because a human woman, after listening to what you said, did, and sent the recording to your employer. Add to that the blackmail potential of such a recording, because to a third party it would sound like you were abusing a human being, not a computer.

So I’d recommend following Patrick Swayze’s third rule in the 1989 movie “Road House” when it comes to talking to machines – be nice.

But realize that, before long, some of these AIs will be designed to take advantage of you, and that the old line “if it sounds too good to be true, it probably is” will be either your protection or your epitaph. I hope it’s the former.

Merlynn Digital Twin

Now, with all this talk of hostile AIs and the potential for AIs to take over your job, it might seem a bit hypocritical to pick one as my product of the week. However, we are not yet at the point where your digital twin can take over your job. I think it’s unlikely we’ll get there in the next one or two decades. Until then, digital twins could become one of the biggest productivity benefits the technology can provide.

As you train your twin, it can complement what you do, initially taking over simple, time-consuming tasks such as filling out forms or answering basic emails. It could even monitor and use social media for you, and for many of us, social media has become a huge time sink.

Merlynn’s technology helps you create a rudimentary (compared to the threats I mentioned above) human digital twin that can potentially handle many of the tasks you really don’t enjoy, freeing you for the more creative things it can’t yet do.

Looking ahead, I wonder whether it wouldn’t be better if we, rather than our employers, owned and controlled our growing digital twins. At first, because the twins can’t operate without us, this isn’t much of a problem. In time, though, these digital twins could become our nearest-term path to digital immortality.

Because Merlynn’s digital twin technology is a potential game changer that will initially help make our jobs less stressful and more enjoyable, it’s my product of the week.

The views expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.
