The Atlantic – There is no good evidence that facial expressions reveal a person’s feelings. But big tech companies want you to believe otherwise:

“…Today affect-recognition tools can be found in national-security systems and at airports, in education and hiring start-ups, in software that purports to detect psychiatric illness and policing programs that claim to predict violence. The claim that a person’s interior state can be accurately assessed by analyzing that person’s face is premised on shaky evidence.

A 2019 systematic review of the scientific literature on inferring emotions from facial movements, led by the psychologist and neuroscientist Lisa Feldman Barrett, found there is no reliable evidence that you can accurately predict someone’s emotional state in this manner. “It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts,” the study concludes.

So why has the idea that there is a small set of universal emotions, readily interpreted from a person’s face, become so accepted in the AI field? To understand that requires tracing the complex history and incentives behind how these ideas developed, long before AI emotion-detection tools were built into the infrastructure of everyday life…”