Tech firms want to detect your emotions and expressions, but people don't like it
This article by , Reader in Advertising and Digital Media, School of Creative Studies & Media, was originally published on . Read the .
As revealed in a , Facebook is interested in using webcams and smartphone cameras to . The idea is that by understanding emotional behaviour, Facebook can show us more of what we react positively to in our Facebook news feeds and less of what we do not, whether that's friends' holiday photos, or advertisements.
This might appear innocuous, but consider some of the detail. In addition to smiles, joy, amazement, surprise, humour and excitement, the patent also lists negative emotions. Possibly being read for signs of disappointment, confusion, indifference, boredom, anger, pain and depression is neither innocent, nor fun.
In fact, Facebook is no stranger to using data about emotions. Some readers might remember the furore when to understand "emotional contagion". This meant that when users logged into their Facebook pages, some were shown content in their news feeds with a greater number of positive words and others were shown content deemed as sadder than average. This changed the emotional behaviour of those users that were "infected".
Given that Facebook has , this patent to read emotions via cameras is important. But there is a bigger story, which is that the largest technology companies have been buying, researching and developing these applications for some time.
Watching you feel
For example, bought Emotient in 2016, a firm that pioneered facial coding software to read emotions. offers its own "cognitive services", and is also a key player in industrial efforts to read emotions. It's possible that could soon be listening for signs of emotions, too.
This is not the end though: interest in emotions is not just about screens and worn devices, but also our environments. Consider retail, where increasingly the goal is to understand who we are and what we think, feel and do. Somewhat reminiscent of Steven Spielberg's 2002 film Minority Report, , for example, measures facial emotional responses as people look at goods at shelf-level.
What these and other examples show is that we are witnessing a rise of interest in our emotional lives, encompassing any situation where it might be useful for a machine to know how a person feels. Some less obvious examples include , the use of video cameras by lawyers to , and to prevent accidents (and presumably to lower insurance rates).
Users are not happy
How long till machines can tell what we can? jura-photography via Pixabay and The Conversation

In a report assessing the rise of , I point out that this is not innately bad. There are already , which take advantage of eye-trackers, facial coding and wearable heart rate sensors. These are a lot of fun, so the issue is not the technology itself but how it is used. Does it enhance, serve or exploit? After all, the scope to make emotions and intimate human life machine-readable has to be treated cautiously.
The report covers views from industry, policymakers, lawyers, regulators and NGOs, but it's useful to consider what ordinary people say. I conducted a survey of 2,000 people and asked questions about emotion detection in social media, digital advertising outside the home, gaming, interactive movies through tablets and phones, and using voice and emotion analysis through smartphones.
I found that more than half (50.6%) of UK citizens are "not OK" with any form of emotion capture technology, while just under a third (30.6%) feel "OK" with it, as long as the emotion-sensitive application does not identify the individual. A mere 8.2% are "OK" with having data about their emotions connected with personally identifiable information, while 10.4% "don't know". That such a small proportion are happy for emotion-recognition data to be connected with personally identifying information about them is pretty significant considering what Facebook is proposing.
But do the young care? I found that younger people are twice as likely to be "OK" with emotion detection as the oldest people. But we should not take this to mean they are "OK" with having data about emotions linked with personally identifiable information. Only 13.8% of 18- to 24-year-olds accept this. Younger people are open to new forms of media experiences, but they want meaningful control over the process. Facebook and others, take note.
New frontiers, new regulation?
So what should be done about these types of technologies? UK and European law is being strengthened, especially given the introduction of the . While this has little to say about emotions, there are strict codes on the use of personal data and information about the body (biometrics), especially when used to infer mental states (as Facebook has proposed to do).
This leaves us with a final problem: what if the data used to read emotions is not strictly personal? What if shop cameras pick out expressions in such a way as to detect emotion, but not identify a person? This is what retailers are proposing and, as it stands, there is nothing in the law to prevent them.
I suggest we need to tackle the following question: are citizens and the reputation of the industries involved best served by covert surveillance of emotions?
If the answer is no, then codes of practice need to be amended immediately. Questions of ethics, emotion capture and the rendering of bodies passively machine-readable are not contingent upon personal identification, but on something more important. Ultimately, this is a matter of human dignity, and about what kind of environment we want to live in.
There's nothing definitively wrong with technology that interacts with emotions. The question is whether it can be shaped to serve, enhance and entertain, rather than exploit. And given that survey respondents of all ages are rightfully wary, it's a question that the people should be involved in answering.
Publication date: 27 June 2017