Robots can read your p-p-p-poker face

Emotion-sensing technology could be the next frontier of personalization.

This article is an installment of The Future Explored, a weekly guide to world-changing technology. You can get stories like this one straight to your inbox every Thursday morning by subscribing here.

Robots are learning to interpret our emotions. They’re not that great at it yet, but with rapid advances in sensor technology and A.I., they’re getting better.

Why this matters: Artificial emotional intelligence is new, but it’s already being used in ways that could impact your life — like whether you get that job or promotion. The industry has an estimated value of $20 billion and is growing fast. We should be weighing the implications of this technology now, before it fully arrives.

What it is: This technology — referred to as emotion A.I. — can sense our internal state and use that data to make decisions about how it will respond.

How it works: Emotion A.I. analyzes your facial expressions, eye movements, body movements, voice, biometric data…even how you walk! Our bodies give off unconscious cues about how we’re feeling all the time — some barely noticeable to other humans — and the right technology could pick them up. When we’re stressed or confused, our pupils dilate. Extra sweating may indicate excitement or frustration. We also tend to exhale more when we’re scared. Advanced emotion recognition systems could potentially feed multiple streams of this data into a machine learning algorithm and reveal things like how hard our brains are working (or not!), whether we’re under stress or engaged in the conversation — or whether we’re feeling lonely, excited, surprised, etc.
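
To make that idea concrete, here is a minimal, hypothetical sketch of what “combining multiple streams of data in a machine learning algorithm” can look like: feature values from a face camera, a microphone, and a biometric sensor are simply concatenated into one vector and handed to an off-the-shelf classifier. Every feature name, number, and label below is invented for illustration — real systems are far more elaborate than this.

```python
# Hypothetical sketch of multimodal "emotion A.I.": per-sensor features are fused
# into one vector and fed to an ordinary classifier. All feature names, numbers,
# and labels are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy readings for five moments in time, one row per moment.
face = np.array([[0.9, 0.1], [0.2, 0.8], [0.4, 0.4], [0.7, 0.2], [0.1, 0.9]])        # e.g., smile, brow furrow
voice = np.array([[0.3], [0.9], [0.5], [0.2], [0.8]])                                 # e.g., pitch variability
biometrics = np.array([[0.2, 0.1], [0.8, 0.7], [0.5, 0.4], [0.3, 0.2], [0.9, 0.6]])   # e.g., sweat, breathing rate

# "Fusion" here is just concatenating the per-sensor features side by side.
X = np.hstack([face, voice, biometrics])
y = ["relaxed", "stressed", "neutral", "relaxed", "stressed"]  # invented training labels

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify a new, unseen moment from its fused sensor readings.
new_moment = np.hstack([[0.15, 0.85], [0.85], [0.75, 0.65]]).reshape(1, -1)
print(model.predict(new_moment))  # e.g., ['stressed']
```

The design choice sketched here (feature-level fusion into a single model) is only one option; a real system might instead run separate models per sensor and combine their outputs.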

How it’s being used: Some companies are using emotion A.I. to analyze a job candidate’s employability, and some automakers want to install it in their vehicles — pitching it as a safety feature that could, for example, warn the driver if their mental state (distracted, angry, etc.) becomes dangerous. Some researchers are even looking at using it in the classroom, claiming it could help improve instruction by telling teachers how engaged students are.

The next frontier of personalization: The falling cost of these sensors could make it feasible to deck out a space with thermal sensors that track blood flow, CO2 monitors that detect breathing rate, and cameras and microphones that detect facial expressions and voice patterns. Poppy Crum, chief scientist at Dolby Labs and professor at Stanford University, says this will allow spaces to perceive how we are feeling and respond to it — changing the temperature, sound, music, lighting, and color to help you accomplish whatever you’re trying to do.
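
As a thought experiment, the sense-and-respond loop Crum describes might look something like the toy sketch below: read the sensors, infer a rough state, nudge the environment, repeat. The sensor readings, thresholds, and room “controls” here are all made up stand-ins, not any real product’s API.

```python
# Hypothetical sketch of a "responsive room" loop. All readings, thresholds,
# and controls below are invented stand-ins for real sensors and devices.
import random
import time

def read_sensors():
    # Stand-in for thermal, CO2, camera, and microphone readings.
    return {"breathing_rate": random.uniform(10, 25),   # breaths per minute
            "voice_stress": random.uniform(0.0, 1.0)}   # arbitrary 0-1 score

def infer_state(readings):
    # Crude, made-up rule in place of a trained model.
    if readings["breathing_rate"] > 20 or readings["voice_stress"] > 0.7:
        return "stressed"
    return "calm"

def adjust_room(state):
    # Stand-ins for real lighting/audio/thermostat controls.
    if state == "stressed":
        print("Dimming lights, softening music, cooling the room slightly.")
    else:
        print("Keeping current settings.")

for _ in range(3):  # a few iterations instead of an endless loop
    adjust_room(infer_state(read_sensors()))
    time.sleep(1)
```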

Deeper well-being: The tech could also lead to more personalized healthcare, argues Crum. She tells Forbes that the consumer technology we have in our homes may eventually be “a better indicator of our mental and physical wellness than most of our clinical visits” because of how much time we spend with the technology.

Accuracy and bias: Brace yourself, here comes the cold water. Facial expressions alone do not reveal a person’s internal state — but, unfortunately, many of the commercial products already on the market (such as those that claim to analyze job applicants) rely mostly on analyzing the face. This has led critics to argue that the field is overhyped and can’t be trusted, raising discrimination concerns similar to those around predictive sentencing.

Privacy questions unanswered: It’s hard to get more personal than data about your emotions. Who gets access to this data? Will we get a say in how it’s used? Will we be able to opt out of this technology? (According to Wired, Crum believes that will soon be impossible.)

The science is complicated: The science of emotions itself is far from settled. For example, can emotions be measured as discrete states, or do they fall somewhere on a spectrum? And the connection between physical expressions and emotions is nuanced — you can scowl or shout if you’re angry, but you can also cry or laugh at a perceived injustice, or stew silently. We’re not at a point yet where technology can reliably pick up on these gradations — and to be fair, that’s hard even for humans!

Bottom line: While allowing robots to read our emotions and influence our opportunities sounds incredibly dystopian, it doesn’t have to be all Black Mirror. Like any technology, emotion A.I. probably won’t be “all good” or “all bad” — its impact on humanity will depend on the specifics of how we use it.
