Lombroso is cackling in his grave.

This should remind us that

  • it is not uncommon to discover, some time later, that the data used in the training sets contained a bias
  • the mere fact that an article is published in a prestigious journal is not an absolute, a priori guarantee of quality

Source: Scientific Reports (Nature)

Facial recognition technology can expose political orientation from naturalistic facial images

Abstract

Ubiquitous facial recognition technology can expose individuals’ political orientation, as faces of liberals and conservatives consistently differ. A facial recognition algorithm was applied to naturalistic images of 1,085,795 individuals to predict their political orientation by comparing their similarity to faces of liberal and conservative others. Political orientation was correctly classified in 72% of liberal–conservative face pairs, remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%). Accuracy was similar across countries (the U.S., Canada, and the UK), environments (Facebook and dating websites), and when comparing faces across samples. Accuracy remained high (69%) even when controlling for age, gender, and ethnicity. Given the widespread use of facial recognition, our findings have critical implications for the protection of privacy and civil liberties.
Introduction

There is a growing concern that the widespread use of facial recognition will lead to the dramatic decline of privacy and civil liberties1. Ubiquitous CCTV cameras and giant databases of facial images, ranging from public social network profiles to national ID card registers, make it alarmingly easy to identify individuals, as well as track their location and social interactions. Moreover, unlike many other biometric systems, facial recognition can be used without subjects’ consent or knowledge.

Pervasive surveillance is not the only risk brought about by facial recognition. Apart from identifying individuals, the algorithms can identify individuals’ personal attributes, as some of them are linked with facial appearance. Like humans, facial recognition algorithms can accurately infer gender, age, ethnicity, or emotional state2,3. Unfortunately, the list of personal attributes that can be inferred from the face extends well beyond those few obvious examples.

A growing number of studies claim to demonstrate that people can make face-based judgments of honesty4, personality5, intelligence6, sexual orientation7, political orientation8,9,10,11,12, and violent tendencies13. There is an ongoing discussion about the extent to which face-based judgments are enabled by stable facial features (e.g., morphology); transient facial features (e.g., facial expression, makeup, facial hair, or head orientation); or targets’ demographic traits that can be easily inferred from their face (e.g., age, gender, and ethnicity)14. Moreover, the accuracy of the human judgment is relatively low. For example, when asked to distinguish between two faces—one conservative and one liberal—people are correct about 55% of the time (derived from Cohen’s d reported in Tskhay and Rule15), only slightly above chance (50%). Yet, as humans may be missing or misinterpreting some of the cues, their low accuracy does not necessarily represent the limit of what algorithms could achieve. Algorithms excel at recognizing patterns in huge datasets that no human could ever process16, and are increasingly outperforming us in visual tasks ranging from diagnosing skin cancer17 to facial recognition18 to face-based judgments of intimate attributes, such as sexual orientation (76% vs. 56%)7,19, personality (64% vs. 57%; derived from Pearson’s rs)20,21,22, and—as shown here—political orientation. (For ease of interpretation and comparisons across studies, throughout this text accuracy is expressed as the area under the receiver operating characteristic curve (AUC), an equivalent of the Wilcoxon rank-sum (Mann–Whitney U) statistic and the common language effect size.)
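To make the pairwise notion of accuracy concrete: an AUC of this kind can be computed directly as the fraction of liberal–conservative pairs in which the classifier ranks the liberal member higher. A minimal sketch, using synthetic scores rather than any of the study's data (NumPy assumed):

```python
import numpy as np

def pairwise_auc(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) pairs ranked correctly.

    Equivalent to the normalized Mann-Whitney U statistic and the common
    language effect size; ties count as half a correct comparison.
    """
    pos = np.asarray(scores_pos)[:, None]   # shape (n_pos, 1)
    neg = np.asarray(scores_neg)[None, :]   # shape (1, n_neg)
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# Toy example with synthetic classifier scores (not the paper's data):
rng = np.random.default_rng(0)
liberal_scores = rng.normal(0.6, 0.2, size=1000)
conservative_scores = rng.normal(0.4, 0.2, size=1000)
print(round(pairwise_auc(liberal_scores, conservative_scores), 2))
```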
Methods

We used a sample of 1,085,795 participants from three countries (the U.S., the UK, and Canada; see Table 1) and their self-reported political orientation, age, and gender. Their facial images (one per person) were obtained from their profiles on Facebook or a popular dating website. These self-selected, naturalistic images combine many potential cues to political orientation, ranging from facial expression and self-presentation to facial morphology. The ethnic diversity of our sample (it included over 347,000 non-white participants), the relative universality of the conservative–liberal spectrum23, and the generic type of facial images used here increase the likelihood that our findings apply to other countries, cultures, and types of images.
Table 1 Number of participants and the distribution of political orientation, gender, age, and ethnicity.

As we are aiming to study existing privacy threats, rather than develop new privacy-invading tools, we used an open-source facial-recognition algorithm instead of developing an algorithm specifically aimed at political orientation. The procedure is presented in Fig. 1: To minimize the role of the background and non-facial features, images were tightly cropped around the face and resized to 224 × 224 pixels. VGGFace224 was used to convert facial images into face descriptors, or 2,048-value-long vectors subsuming their core features. Usually, similarity between face descriptors is used to identify those similar enough to likely represent the face of the same person. Here, to identify individuals’ political orientation, their face descriptors are compared with the average face descriptors of liberals versus conservatives. Descriptors were entered into a cross-validated logistic regression model aimed at self-reported political orientation (conservative vs. liberal). Virtually identical results were produced by alternative methods: a deep neural network classifier and a simple ratio between average cosine similarity to liberals and conservatives. See the Supplementary Methods section for more details.
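The pipeline just described can be approximated with off-the-shelf tools. The sketch below assumes `descriptors` is an (n, 2048) array of face descriptors already extracted from tightly cropped 224 × 224 images by a VGGFace2-style network, and `y` encodes self-reported orientation (1 = liberal, 0 = conservative); both names are placeholders, and scikit-learn stands in for the authors' actual implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def auc_by_logistic_regression(descriptors, y):
    # Cross-validated logistic regression on the face descriptors,
    # mirroring the main classification approach described above.
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_predict(clf, descriptors, y, cv=10,
                               method="predict_proba")[:, 1]
    return roc_auc_score(y, scores)

def auc_by_cosine_similarity(descriptors, y):
    # Simplified variant of the similarity-based alternative mentioned above:
    # score each face by its average cosine similarity to liberal faces minus
    # its average cosine similarity to conservative faces (a difference rather
    # than a ratio, and without holding out the target face, so this is only
    # an illustrative approximation).
    y = np.asarray(y)
    unit = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
    mean_liberal = unit[y == 1].mean(axis=0)
    mean_conservative = unit[y == 0].mean(axis=0)
    scores = unit @ mean_liberal - unit @ mean_conservative
    return roc_auc_score(y, scores)
```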

Procedure used to predict political orientation from facial images. (To protect participants’ privacy, we used a photo of a professional model. Their informed consent for publication was obtained.)
Results

The results are presented in Fig. 2 (blue bars). The accuracy is expressed as AUC, or a fraction of correct guesses when distinguishing between all possible pairs of faces—one conservative and one liberal. In the largest sample, of 862,770 U.S. dating website users, the cross-validated classification accuracy was 72%, which is much higher than chance (50%) and translates into Cohen’s d = 0.83, or a large effect size. (Sawilowsky25 suggested the following heuristic for interpreting effect sizes: very small [d ≥ 0.01], small [d ≥ 0.2], medium [d ≥ 0.5], large [d ≥ 0.8], very large [d ≥ 1.2], and huge [d ≥ 2].) Similar accuracy was achieved for dating website users in Canada (71%) and in the UK (70%). The predictability was not limited to the online dating environment: The algorithm’s accuracy reached 73% among U.S. Facebook users. To put the algorithm’s accuracy into perspective, consider that human accuracy in similar tasks is 55%, only slightly above chance (SD = 4%; CI95% = [52%,58%])15.
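The reported correspondence between AUC and Cohen's d follows from the equal-variance normal model, under which AUC = Φ(d/√2). A quick check of the 72% figure (SciPy assumed):

```python
from scipy.stats import norm

# Under the equal-variance normal model, AUC = Phi(d / sqrt(2)),
# hence d = sqrt(2) * Phi^{-1}(AUC).
auc = 0.72
d = 2 ** 0.5 * norm.ppf(auc)
print(round(d, 2))  # ~0.82, matching the large effect size reported above
                    # (the small gap from 0.83 comes from rounding the AUC)
```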

Accuracy of the facial-recognition algorithm predicting political orientation. All 95% confidence intervals are below 1% and are thus omitted. Humans’ and algorithms’ accuracy reported in other studies is included for context.

Moreover, as shown in Table 2, the algorithm could successfully predict political orientation across countries and samples. Regression trained on the U.S. dating website users, for example, could distinguish between liberal and conservative dating website users in Canada (68%), the UK (67%), and in the Facebook sample (71%). Overall, the average out-of-sample accuracy was 68%, indicating that there is a significant overlap in the links between facial cues and political orientation across the samples and countries examined here.
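A cross-sample evaluation of this kind can be sketched as follows, with `samples` mapping a sample name to its (descriptors, labels) pair; the names and data are placeholders, and scikit-learn again stands in for the authors' code:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def transfer_aucs(samples):
    """Train on each sample and test on every other, as in Table 2.

    `samples` maps a name (e.g. "US dating", "UK dating") to a tuple
    (descriptors, labels) for that subsample.
    """
    results = {}
    for train_name, (X_tr, y_tr) in samples.items():
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        for test_name, (X_te, y_te) in samples.items():
            if test_name == train_name:
                continue  # the within-sample figure is cross-validated instead
            scores = clf.predict_proba(X_te)[:, 1]
            results[(train_name, test_name)] = roc_auc_score(y_te, scores)
    return results
```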
Table 2 Classification accuracy across the subsamples (rows) and models trained on each of the samples (columns). All 95% confidence intervals are below 1% and are thus omitted.

Both in real life and in our sample, the classification of political orientation is to some extent enabled by demographic traits clearly displayed on participants’ faces. For example, as evidenced in literature26 and Table 1, in the U.S., white people, older people, and males are more likely to be conservatives. What would an algorithm’s accuracy be when distinguishing between faces of people of the same age, gender, and ethnicity? To answer this question, classification accuracies were recomputed using only face pairs of the same age, gender, and ethnicity.
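Operationally, this amounts to restricting the pairwise AUC to pairs that match on all three attributes. A minimal, illustration-only sketch, with all argument names hypothetical:

```python
def matched_pair_auc(scores, labels, age, gender, ethnicity):
    """AUC over liberal-conservative pairs sharing age, gender, and ethnicity.

    All arguments are equal-length 1-D sequences; labels are 1 = liberal,
    0 = conservative. The double loop is fine for a sketch; group faces by
    (age, gender, ethnicity) first for samples of realistic size.
    """
    keys = list(zip(age, gender, ethnicity))
    lib = [i for i, y in enumerate(labels) if y == 1]
    con = [i for i, y in enumerate(labels) if y == 0]
    wins = ties = total = 0
    for i in lib:
        for j in con:
            if keys[i] != keys[j]:
                continue
            total += 1
            if scores[i] > scores[j]:
                wins += 1
            elif scores[i] == scores[j]:
                ties += 1
    return (wins + 0.5 * ties) / total if total else float("nan")
```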

The results are presented in Fig. 2 (red bars). The accuracy dropped by only 3.5 percentage points on average, reaching 68%, 68%, 65%, and 71% for the U.S., Canadian, and UK dating website users, as well as for the U.S. Facebook users, respectively. This indicates that faces contain many more cues to political orientation than just age, gender, and ethnicity.

Another factor affecting classification accuracy is the quality of the political orientation estimates. While the dichotomous representation used here (i.e., conservative vs. liberal) is widely used in the literature, it offers only a crude estimate of the complex interpersonal differences in ideology. Moreover, self-reported political labels suffer from the reference group effect: respondents’ tendency to assess their traits in the context of the salient comparison group. Thus, a self-described “liberal” from conservative Mississippi could well consider themselves “conservative” if they lived in liberal Massachusetts. Had the political orientation estimates been more precise (i.e., had less error), the accuracy of the face-based algorithm could have been higher. Consequently, apart from considering the absolute classification accuracy, it is useful to compare it with one offered by alternative ways of predicting political orientation. Here, we use personality, a psychological construct closely associated with, and often used to approximate, political orientation27. Facebook users’ scores on a well-established 100-item-long five-factor model of personality questionnaire28 were entered into a tenfold cross-validated logistic regression to predict political orientation.
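A sketch of that comparison, assuming `personality` is an (n, 5) array of five-factor scores aggregated from the questionnaire (column order as in TRAITS below) and `y` the binary orientation; the names are placeholders and scikit-learn again stands in for the original implementation:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def cv_auc(X, y):
    # Tenfold cross-validated logistic regression, scored as AUC.
    scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                               cv=10, method="predict_proba")[:, 1]
    return roc_auc_score(y, scores)

def personality_aucs(personality, y):
    # AUC for each trait on its own, then for all five combined.
    results = {trait: cv_auc(personality[:, [i]], y)
               for i, trait in enumerate(TRAITS)}
    results["combined"] = cv_auc(personality, y)
    return results
```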

The results presented in Fig. 3 show that the highest predictive power was offered by openness to experience (65%), followed by conscientiousness (54%) and other traits. In agreement with previous studies27, liberals were more open to experience and somewhat less conscientious. Combined, five personality factors predicted political orientation with 66% accuracy—significantly less than what was achieved by the face-based classifier in the same sample (73%). In other words, a single facial image reveals more about a person’s political orientation than their responses to a fairly long personality questionnaire, including many items ostensibly related to political orientation (e.g., “I treat all people equally” or “I believe that too much tax money goes to support artists”).

Accuracy afforded by transient facial features and personality traits when predicting political orientation in Facebook users (similar results were obtained in other samples; see Supplementary Table S1). All 95% confidence intervals are below 1% and are thus omitted.

High predictability of political orientation from facial images implies that there are significant differences between the facial images of conservatives and liberals. High out-of-sample accuracy suggests that some of them may be widespread (at least within samples used here). Here, we explore correlations between political orientation and a range of interpretable facial features including head pose (pitch, roll, and yaw; see Fig. 3); emotional expression (probability of expressing sadness, disgust, anger, surprise, and fear); eyewear (wearing glasses or sunglasses); and facial hair. Those features were extracted from facial images and entered (separately and in sets) into tenfold cross-validated logistic regression to predict political orientation.
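The same evaluation pattern applies to these interpretable features; in the sketch below the DataFrame `features` holding the extracted attributes and its column names are hypothetical, as the excerpt does not specify the extractor's outputs:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Hypothetical groupings of the interpretable attributes named above.
FEATURE_SETS = {
    "head pose": ["pitch", "roll", "yaw"],
    "emotion": ["sadness", "disgust", "anger", "surprise", "fear"],
    "eyewear": ["glasses", "sunglasses"],
    "facial hair": ["facial_hair"],
}

def cv_auc(X, y):
    # Tenfold cross-validated logistic regression, scored as AUC.
    scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                               cv=10, method="predict_proba")[:, 1]
    return roc_auc_score(y, scores)

def feature_set_aucs(features: pd.DataFrame, y):
    # AUC for each feature set separately, then for all of them combined.
    results = {name: cv_auc(features[cols].to_numpy(), y)
               for name, cols in FEATURE_SETS.items()}
    all_cols = [c for cols in FEATURE_SETS.values() for c in cols]
    results["all combined"] = cv_auc(features[all_cols].to_numpy(), y)
    return results
```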

The results presented in Fig. 3 are based on Facebook users (similar results were obtained in other samples; see Supplementary Table S1). The highest predictive power was afforded by head orientation (58%), followed by emotional expression (57%). Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust. Facial hair and eyewear predicted political orientation with minimal accuracy (51–52%). Even when combined, interpretable facial features afforded an accuracy of merely 59%, much lower than one achieved by the facial recognition algorithm in the same sample (73%), indicating that the latter employed many more features than those extracted here. A more detailed picture could be obtained by exploring the links between political orientation and facial features extracted from images taken in a standardized setting while controlling for facial hair, grooming, facial expression, and head orientation.

via Facial recognition technology can expose political orientation from naturalistic facial images | Scientific Reports.