Don’t get too emotional about emotion-reading AI

Call it “artificial emotional intelligence” – the kind of artificial intelligence (AI) that can now detect the emotional state of a human user.

Or can it?

More importantly, should it?

Most emotion AI is based on the “basic emotions” theory, which holds that people universally feel six internal emotional states – happiness, surprise, fear, disgust, anger, and sadness – and communicate those states through facial expression, body language, and vocal intonation.
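The “basic emotions” model can be illustrated with a minimal sketch: a toy classifier that picks whichever of the six labels a face-analysis model scored highest. The score values and the idea of per-label confidences are hypothetical illustrations, not the output of any real product.

```python
# Illustrative sketch only: a toy "basic emotions" classifier.
# The scores below are invented; real systems derive them from
# facial-landmark or deep-learning models.

BASIC_EMOTIONS = ["happiness", "surprise", "fear", "disgust", "anger", "sadness"]

def classify_expression(scores: dict) -> str:
    """Return the basic-emotion label with the highest detector confidence.

    `scores` maps each of the six labels to a confidence in [0, 1].
    """
    if set(scores) != set(BASIC_EMOTIONS):
        raise ValueError("expected a score for each of the six basic emotions")
    return max(BASIC_EMOTIONS, key=lambda label: scores[label])

# Example: an expression a detector reads mostly as a smile.
print(classify_expression({
    "happiness": 0.81, "surprise": 0.40, "fear": 0.02,
    "disgust": 0.01, "anger": 0.03, "sadness": 0.05,
}))  # happiness
```

Note that the sketch bakes in the theory’s core assumption: every expression must map to one of exactly six universal states, which is precisely the premise critics dispute.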

In the post-pandemic world of remote work, salespeople struggle to “read” the people they sell to via video calls. Wouldn’t it be nice if software could convey the emotional response of the person on the other end of the line?

Companies like Uniphore and Sybil are working on it. For example, Uniphore’s “Q for Sales” application processes non-verbal cues and body language via video, and voice intonation and other data via audio, resulting in an “emotion scorecard”.
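To make the “emotion scorecard” idea concrete, here is a hypothetical sketch that fuses per-modality signals into a single weighted score. The modality names, weights, and formula are invented for illustration; they do not describe Uniphore’s actual Q for Sales pipeline.

```python
# Hypothetical "emotion scorecard": combine per-modality engagement
# signals (video, audio, text) into one weighted score in [0, 1].
# Weights are illustrative assumptions, not a vendor's real values.

def emotion_scorecard(video: float, audio: float, text: float,
                      weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted average of per-modality scores, each expected in [0, 1]."""
    signals = (video, audio, text)
    if any(not 0.0 <= s <= 1.0 for s in signals):
        raise ValueError("each modality score must be in [0, 1]")
    return round(sum(w * s for w, s in zip(weights, signals)), 3)

# A call where body language reads as engaged but the voice sounds flat:
print(emotion_scorecard(video=0.9, audio=0.4, text=0.6))  # 0.69
```

Even in this toy form, the design choice is visible: the scorecard collapses ambiguous, conflicting signals into one tidy number, which is exactly what makes such outputs easy to over-trust.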

Making human connections through computers

Zoom itself has flirted with the idea. The company introduced a trial version of Zoom IQ for Sales, which generates transcripts of Zoom conversations for meeting hosts, along with “sentiment analysis” – not in real time, but after the meeting. The criticism was harsh.

While some people love the idea of getting AI help reading emotions, others hate the idea of having their emotional states assessed and conveyed by machines.

Whether to use emotion-detecting AI tools is a question that many industries – and the general public – must grapple with.

Hiring can benefit from emotion AI, helping interviewers understand truthfulness, sincerity, and motivation. HR teams and hiring managers would like to judge candidates on their willingness to learn and enthusiasm to join a company.

Demand for emotion-detection AI is also growing in government and law enforcement. Border Patrol and Homeland Security officials want the technology to catch smugglers and impostors. Law enforcement sees emotion AI as a tool for police interrogations.

Emotion AI has applications in customer service, advertising review, and even safe driving.

It’s only a matter of time before emotion AI shows up in everyday business applications, conveying the feelings of others on phone calls and in business meetings, and providing ongoing mental health counseling at work.

Why emotion AI upsets people

Unfortunately, the “science” of emotion detection is still a kind of pseudoscience. The practical problem with emotion-detection AI, also known as affective computing, is simple: people are not that easy to read. Is that smile the result of happiness or embarrassment? Does that frown come from a deep inner feeling, or is it ironic or joking?

Relying on AI to detect the emotional state of others can easily lead to misunderstanding. When applied to consequential tasks, such as hiring or law enforcement, the AI can do more harm than good.

It is also true that people routinely mask their emotional state, especially during business and sales meetings. AI can detect facial expressions, but not the thoughts and feelings behind them. Business people smile and nod and frown empathically because it’s appropriate in social interactions, not because they’re revealing their true feelings.

Conversely, people can dig deep, find their inner Meryl Streep and feign emotion to get the job or lie to Homeland Security. In other words, knowing that emotion AI is being applied creates a perverse incentive to game the technology.

That leads to the biggest dilemma about emotion AI: is it ethical to use in business? Do people want their emotions to be read and judged by AI?

In a sales meeting, for example, people generally want to control the emotions they convey. If I appear smiling and excited and tell you that I am happy and excited about a product, service, or initiative, I want you to believe that – don’t bypass my intended communication and divine my real feelings without my permission.

Salespeople need to be able to read the emotions prospects are trying to convey, not the emotions they want to keep private. As we get a better understanding of how emotional AI works, it’s starting to look more and more like a privacy issue.

People have a right to their private emotions. And that’s why I think Microsoft is emerging as a leader in the ethical application of emotion AI.

How Microsoft is doing it right

Microsoft, which developed some fairly advanced emotion-detection technologies, later discontinued them as part of an overhaul of its AI ethics policies. Its main tool, called Azure Face, could also estimate gender, age, and other attributes.

“Experts inside and outside the company have pointed to the lack of scientific consensus on the definition of ’emotions’, the challenges in how inferences generalize across use cases, regions and demographics, and the heightened privacy concerns surrounding these types of capabilities,” Natasha Crampton, Microsoft’s Chief Responsible AI Officer, wrote in a blog post.

Microsoft continues to use emotion-recognition technology in Seeing AI, its accessibility app for visually impaired users. I think that’s the right call, too. Using AI to help people who are visually impaired – or people with autism, who may struggle to read the emotions and reactions of others – puts this technology to good use. And I think it could play an important role in the coming era of augmented reality glasses.

Microsoft isn’t the only organization driving the ethics of emotion AI.

The AI Now Institute and the Brookings Institution are calling for a ban on widespread use of emotion-detection AI. And more than 25 organizations demanded that Zoom end its plans to use emotion detection in the company’s video conferencing software.

Still, some software companies are moving forward with these tools — and they’re finding customers.

For the most part, and for now, emotion AI tools may be misleading but are mostly harmless – as long as everyone involved genuinely consents. But as the technology improves, and facial-expression and body-language reading edges toward mind reading and lie detection, it could have serious implications for business, government, and society.

And, of course, there’s another elephant in the living room: the field of affective computing is also trying to develop conversational AI that can simulate human emotions. While some emotion simulation is needed for realism, too much can mislead users into believing the AI is sentient or conscious. In fact, that belief is already happening at scale.

Overall, all of this is part of a new phase in the evolution of AI and our relationship with technology. As we learn that it can solve countless problems, we also discover that it can create new ones.

Copyright © 2022 IDG Communications, Inc.
