Can Your Poker Face Beat a Computer? The Ups and Downs of Emotion Recognition

I can read minds, and so can you. Unfortunately, we are both pretty bad at it. To be fair, I am not talking about magical, mythical, science-fiction capabilities. Instead, I am referring to the low-tech, primordial practice of observing emotional expressions on faces. Most people are very proficient at detecting cardinal, overt emotions (e.g., disgust, fear, and happiness), but we struggle when it comes to more subtle or covert expressions.

In the 1970s, psychologist Paul Ekman popularized the term “microexpressions” to describe these evasive displays of emotion. Over the past 40 years, his research team has developed training protocols that use annotated headshots to highlight subtle variations in the eyes and mouth. One can purchase these training programs online and allegedly improve everything from lie-detection skills to empathy. But even if these protocols were highly effective in controlled laboratory settings, their real-world utility seems limited: we do not observe emotional expressions through isolated snapshots. So despite Ekman’s attempts, our mind-reading deficiencies persist.

Enter machine learning. Over the past month, a slew of articles have discussed machine-learning techniques that can detect human emotions. Unlike Ekman’s system, these approaches do not require individuals to extract minor facial movements from highly complex environments. Rather, they train computers to recognize externally visible facial signals that we often fail to perceive on our own. From cameras in billboards that discern viewers’ expressions, to machines that reveal subtle variations between healthy and depressed patients’ smiles, to algorithms that can decipher concealed emotions, technology is starting to connect the dots for us. As wearable devices like Google Glass continue to progress, these algorithms could well become real-time emotion recognition tools.
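To make that approach a bit more concrete, below is a minimal sketch, in Python, of the kind of supervised pipeline such systems typically rely on. Everything in it is an assumption on my part: none of the articles discloses its actual model, the random feature matrix merely stands in for the facial measurements a real system would extract from video, and logistic regression substitutes for whatever classifier the researchers actually used.

```python
# A minimal, illustrative sketch of a supervised emotion classifier.
# Real systems first extract facial "action unit" intensities (brow
# raise, lip-corner pull, etc.) from video frames; the random matrix
# below is only a stand-in for those measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: 200 faces, each described by 6 action-unit
# intensities, paired with a labeled emotion.
X = rng.random((200, 6))
y = rng.choice(["happiness", "fear", "disgust"], size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The classifier learns which feature patterns co-occur with which
# labels; no human has to consciously spot the patterns.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# With random stand-in features, accuracy hovers near chance (~0.33);
# genuine facial features are what would make predictions informative.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The essential point is architectural: once faces are reduced to numeric features, noticing signals too subtle for human perception becomes an ordinary classification problem.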

This is fascinating from a purely scientific perspective. But is better detection of subtle or covert expressions a beneficial development? As lawyers love to say: it depends. For clinical applications, such as the depression example cited above, the answer is likely yes. If the algorithms are sufficiently accurate, spotting particular patterns in patients’ smiles could allow doctors to identify at-risk individuals, make better diagnostic predictions, and measure signs of progress. Greater emotional intelligence could also enhance interpersonal exchanges. Increasing the transparency of social interactions could promote more informed decision-making, strengthen group dynamics, and help build rapport among strangers.

Yet, poker faces exist for a reason. The ability to mask emotions is an extremely powerful asset. Whether hiding fear in a dangerous setting, suppressing nerves in an interview, or concealing utter boredom during a lecture, we expect people to be blind to our subtle expressions and take solace in the ability to disguise our true feelings. In fact, this façade is so cherished that people with chronic blushing disorders, whose skin routinely publicizes their embarrassment, frequently consider surgery to gain these masking capabilities. Beyond daily interactions, moreover, stoicism is an integral aspect of many job descriptions, especially those involving positions of leadership and authority.

To make this concept more concrete, let’s consider emotion recognition in a specific context. Take the courtroom setting, where values of objectivity coexist with subjective evaluations of mental states. Whether or not the technology would be allowed in this arena, the courtroom environment highlights the promise and peril of enhanced emotion detection.

Start with jurors, who assess emotion all the time. In determining the credibility of a witness, jurors often evaluate the witness’s demeanor and confidence.[1] In cases involving rape, jurors expect the victim to exude a Goldilocks degree of emotion: affects that are too hysterical or too flat incite skepticism.[2] In conviction decisions, as well as sentencing verdicts, jurors and judges scrutinize the defendant’s perceived level of remorse.[3] Regardless of whether these evaluations are appropriate in the courtroom, a more fundamental problem exists: individuals greatly overestimate their ability to accurately detect remorse, trauma, and confidence. Here, emotion recognition software might serve a useful function. If emotion is to have a role in courtroom decisions, we could at least improve the accuracy with which jurors and judges make their assessments.

However, enhanced emotion detection might simultaneously create its own set of issues. What if attorneys or jurors could detect a judge’s subtle expressions in real time? Might such knowledge influence an attorney’s trial strategy or a juror’s interpretation of evidence? Even though we do not expect judges to be devoid of emotional reactions, their “impassive” auras foster an appearance of neutrality that is central to the adversarial system. Emotion recognition software might divert attention away from the facts of the case towards minute changes in a judge’s facial muscles.

So what should we make of algorithmic emotion recognition? Until more data becomes available, we can’t know how accurate the software will be, or how widely the technology will be adopted. In the meantime, I suggest a context-dependent approach. Harness machine learning to improve clinical outcomes, fix emotion recognition where it goes awry, but give poker faces the respect they deserve.

Natalie Salmanowitz is a fellow in the Stanford Program in Neuroscience and Society.

[1] https://www.law.duke.edu/news/pdf/judicature.pdf

[2] http://bjc.oxfordjournals.org/content/49/2/202.full

[3] http://www.slate.com/articles/news_and_politics/crime/2015/11/remorse_judges_and_juries_think_they_can_tell_when_a_defendant_is_sorry.html