Facial recognition systems and the underlying artificial intelligence (AI) technologies connected to them are advancing rapidly – much faster than we can fully understand the risks and dangers these technologies may pose.
There are growing calls for greater consideration of the ethical implications of their use, and of the inherent biases that continue to be exposed, such as discrimination based on ethnicity or gender.
The potential applications of the large-scale, automated decision-making afforded by AI become particularly concerning when we closely inspect the underlying theories and datasets that shape their predictions. That said, there are alternative use cases for this technology, such as interactive artworks that apply AI-based emotion recognition in a more constructive and positive way.
Behind basic emotions
In the case of emotion recognition, one of the most popular approaches involves using facial expressions to classify the subject into one of six basic emotion categories: happiness, sadness, anger, fear, surprise, and disgust. The theory underpinning this approach proposes that emotion categories are innate and universal, with each having a unique facial expression that makes it distinguishable from the others.
For example, when we feel happy, we smile; when we feel angry, we scowl. Irrespective of its scientific validity, the basic emotion model is ubiquitous in affective computing research, largely because it is computationally straightforward: a system only has to detect a face and classify its expression into one of six possibilities.
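To make that pipeline concrete, here is a minimal sketch in Python. It uses OpenCV's bundled Haar cascade for face detection, while the classifier itself is a hypothetical placeholder (classify_expression) standing in for whatever trained model a real system would use; the six labels, the 48x48 crop size, and the uniform scores are illustrative assumptions, not a description of any particular product.

```python
import cv2
import numpy as np

# The six basic emotion categories of the model described above.
BASIC_EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

# Haar cascade face detector shipped with OpenCV.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def classify_expression(face_pixels: np.ndarray) -> str:
    """Hypothetical stand-in for a trained expression classifier.

    A real system would map the cropped face to a probability score for
    each of the six categories; here we just return uniform scores.
    """
    scores = np.ones(len(BASIC_EMOTIONS)) / len(BASIC_EMOTIONS)  # placeholder scores
    return BASIC_EMOTIONS[int(np.argmax(scores))]


def recognise_emotion(image_bgr: np.ndarray) -> list[str]:
    """Detect faces, then classify each into one of the six basic emotions."""
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    labels = []
    for (x, y, w, h) in faces:
        face = cv2.resize(grey[y:y + h, x:x + w], (48, 48))  # an assumed input size
        labels.append(classify_expression(face))
    return labels
```

The point of the sketch is how little the approach asks of the world: find a face, force it into one of six boxes, and call the result an emotion.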
More recent approaches to automated emotion recognition add further modes of input to the recognition process – analysing speech, body language, and biofeedback – to infer someone’s emotional state.
But these approaches still operate under the assumption that emotions are hard-wired: a system of innate responses triggered by external events, and therefore a phenomenon that lends itself to direct measurement.
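One common way such multimodal systems combine their inputs is simple late fusion: each modality produces its own scores over the same fixed categories, and the scores are averaged. The sketch below, with hypothetical face and speech scores, shows how the hard-wired assumption is baked in, since every modality must map onto the same six labels before fusion even begins.

```python
import numpy as np

BASIC_EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]


def fuse_modalities(*modality_scores: np.ndarray) -> str:
    """Late fusion: average per-modality probabilities over the same six
    categories, then return the highest-scoring label."""
    combined = np.mean(np.stack(modality_scores), axis=0)
    return BASIC_EMOTIONS[int(np.argmax(combined))]


# Hypothetical scores from a face model and a speech model.
face_scores = np.array([0.60, 0.10, 0.10, 0.10, 0.05, 0.05])
speech_scores = np.array([0.30, 0.30, 0.20, 0.10, 0.05, 0.05])
print(fuse_modalities(face_scores, speech_scores))  # -> "happiness"
```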