Two Faces of Facial Recognition

Facial Recognition technology is quietly and quickly being deployed for a variety of applications in the private and public sectors, without regulation. This poses two important ethical questions: what happens if the technology doesn’t work well, and what happens if it works too well?

In the first case, research has demonstrated that historically marginalized, minority populations are disparately impacted by the errors of Facial Recognition technology. In Gender Shades, Buolamwini and Gebru demonstrated that commercial computer vision applications work very well for lighter-skinned males but have far higher error rates for darker-skinned females. In a follow-up study one year later, entitled Actionable Auditing, Raji and Buolamwini analyzed the impact of publicly naming biased AI systems and disclosing their performance results in Gender Shades. They found that the named companies all reduced their accuracy disparities between males and females and between darker- and lighter-skinned subgroups (some more than others), while the accuracy disparities of companies not named in Gender Shades remained high.

The very same systems investigated by these researchers are currently being sold to private companies, governments and law enforcement agencies as the backbone of their Facial Recognition systems, and their disparate errors have already had real-world impacts. In April 2019, a student in New York City sued Apple for $1 billion, claiming a Facial Recognition system used in its retail stores falsely linked him to a series of thefts, leading to his arrest. Suffice it to say, the increased use of Facial Recognition technology has sparked an intense debate, with the ACLU calling for a moratorium on its use for immigration and law enforcement purposes until Congress and the public debate which uses of the technology should be permitted. In May 2019, the City of San Francisco approved a ban on its use in law enforcement applications, and in the same month, Amazon shareholders voted on a proposal for the company to stop selling Facial Recognition technology to government agencies (it did not pass). Within the past three weeks, the United States House Committee on Oversight and Reform has held two hearings on Facial Recognition technology, the first on its impact on civil rights and the second on ensuring transparency in government use. Closer to home, the Toronto Star reported in May that the Toronto Police Service has been using a Facial Recognition system for the past year to assist in investigations.

The continued application of Facial Recognition technology, despite its disparate performance, threatens individuals, especially those historically marginalized, with being wrongly implicated in crimes. The inability to opt out of this not-ready-for-primetime technology is problematic.
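
The audit methodology at the heart of Gender Shades is simple to state: evaluate the same model separately on each intersectional subgroup and compare error rates, rather than reporting a single aggregate number. Here is a minimal sketch of that idea in Python; the function name and the toy data are my own invention, not the authors’ code:

```python
# Illustrative sketch of a disaggregated accuracy audit in the spirit of
# Gender Shades: measure error rates per intersectional subgroup rather
# than one aggregate number. All data below is hypothetical.
from collections import defaultdict

def disaggregated_error_rates(predictions, labels, subgroups):
    """Return the error rate for each (skin type, gender) subgroup."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for pred, label, group in zip(predictions, labels, subgroups):
        totals[group] += 1
        if pred != label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical gender-classifier outputs vs. ground truth, each image
# tagged with its (skin type, gender) subgroup.
preds  = ["M", "M", "F", "M", "F", "F", "M", "F"]
truth  = ["M", "M", "F", "F", "F", "M", "M", "F"]
groups = [("lighter", "male"), ("lighter", "male"),
          ("lighter", "female"), ("darker", "female"),
          ("darker", "female"), ("darker", "male"),
          ("darker", "male"), ("lighter", "female")]

rates = disaggregated_error_rates(preds, truth, groups)
print(rates)
# The gap between the best and worst subgroup is the headline disparity.
print("disparity:", max(rates.values()) - min(rates.values()))
```

The aggregate accuracy here (75%) hides the fact that the darker-skinned subgroups absorb all of the errors, which is exactly the kind of gap the paper documents.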

In the second case, the success of Facial Recognition technology in identifying and tracking people in public and private spaces can lead to a loss of privacy that erodes societal norms, and I believe people are not fully aware of the implications. A recent article in the New York Times describing the Communist Party of China’s (CPC) use of Facial Recognition to track its citizens is a stark warning to the rest of the world about the implications of the technology’s widespread use. The article describes how the CPC is using Facial Recognition as the core of a surveillance apparatus to track, surveil and intern members of a largely Muslim minority group, the Uighurs.

In another NYT piece, Sahil Chinoy chronicled his experience using Amazon’s commercial Facial Recognition service, Rekognition, to identify and track individuals using only public sources of information, for a total cost of $60 USD. Using a publicly available, live-streaming camera of Bryant Park in New York City, along with web-scraped images of employees at neighbouring businesses, he was able to detect 2,750 faces over a nine-hour period, link several of them to real-world identities and monitor their movement patterns. The experiment illustrates what a non-expert can build with nothing but public data and a tiny budget; what governments and large private companies can build, with computer vision experts, massive datasets and large R&D budgets, is difficult to overstate. Once again, the inability to opt out of this technology is highly problematic, given that it is already in use without public disclosure.
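
To appreciate how low the barrier is, consider that the core of a pipeline like Chinoy’s needs only a handful of Rekognition API calls via boto3. The sketch below is my own reconstruction under stated assumptions (AWS credentials configured; the collection name, file paths and identity label are placeholders), not his actual code:

```python
# Rough sketch of a Chinoy-style identification pipeline using Amazon
# Rekognition via boto3. Collection name and file paths are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
COLLECTION = "bryant-park-demo"  # hypothetical collection name

rekognition.create_collection(CollectionId=COLLECTION)

# 1. Index known faces scraped from public sources (e.g., company websites),
#    tagging each with the person's real-world identity.
with open("employee_photo.jpg", "rb") as f:
    rekognition.index_faces(
        CollectionId=COLLECTION,
        Image={"Bytes": f.read()},
        ExternalImageId="employee_name",  # hypothetical identity tag
    )

# 2. For each frame grabbed from the public live stream, search the
#    collection for matches above a confidence threshold.
with open("camera_frame.jpg", "rb") as f:
    resp = rekognition.search_faces_by_image(
        CollectionId=COLLECTION,
        Image={"Bytes": f.read()},
        FaceMatchThreshold=90,
        MaxFaces=5,
    )

for match in resp["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```

Repeating step 2 over frames sampled throughout the day is all it takes to turn matches into movement patterns.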


As a master’s student studying computer vision at the University of Waterloo, I keep the societal impact of my work top-of-mind. With the support of the Vector Scholarship in AI, the Alexander Graham Bell Canada Graduate Scholarship, and my supervisors, I am pursuing the issues outlined above in my thesis work and in a project named Tin Foil AI.

My first research paper addressing the disparate performance of Facial Recognition technology looks broadly at the training data that fuels computer vision systems. Auditing ImageNet, to be presented on June 17 at the Workshop on Fairness, Accountability, Transparency and Ethics in Computer Vision (FATE CV) at CVPR 2019, is the first in a series of works developing a framework for the automated annotation of demographic attributes in large-scale image datasets. The project aims to scrutinize the training data that computer vision practitioners often abstract away, so that imbalances in intersectional group representation can be quantified and their downstream effects on bias in trained neural networks can be studied.
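
To make “quantifying imbalances in intersectional group representation” concrete, here is a toy Python sketch; the annotations and subgroup labels are hypothetical placeholders, not the paper’s actual pipeline or data:

```python
# Toy sketch of quantifying intersectional representation in an image
# dataset. The per-image annotations below are hypothetical, standing in
# for the output of an automated demographic-annotation model.
from collections import Counter
from itertools import product

annotations = [
    ("male", "lighter"), ("male", "lighter"), ("male", "lighter"),
    ("male", "darker"), ("female", "lighter"), ("female", "lighter"),
    ("female", "darker"),
]

counts = Counter(annotations)
subgroups = list(product(["male", "female"], ["lighter", "darker"]))
total = len(annotations)
uniform = 1 / len(subgroups)  # share each subgroup would hold if balanced

for group in subgroups:
    share = counts[group] / total
    print(f"{group}: {share:.1%} of images ({share / uniform:.2f}x uniform)")
```

Run over a dataset of ImageNet’s scale, a breakdown like this is what lets you connect over- and under-representation in the training data to bias measured in the models trained on it.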

On the other face of Facial Recognition, Tin Foil AI comes into play. I’m currently in the prototype development phase, so I can’t say too much right now, but I’m interested in finding a way to opt out of this tech. Check out https://tin-foil.ai for more information and to sign up for updates. If you have experience with adversarial attacks or defences and are concerned about unregulated Facial Recognition, please reach out!
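
Since the project is still at the prototype stage, none of its specifics are public; but to give a flavour of the “adversarial attacks” mentioned above, here is a textbook FGSM (Goodfellow et al., 2015) sketch in PyTorch. The model and tensors are placeholders, and this is emphatically a generic illustration, not Tin Foil AI’s method:

```python
# Textbook Fast Gradient Sign Method (FGSM): nudge an input image in the
# direction that increases the model's loss, bounded by epsilon per pixel,
# so the perturbed image stays visually similar. Generic illustration only.
import torch

def fgsm_perturb(model, image, target, epsilon=0.03):
    """Return an adversarially perturbed copy of a batched image tensor."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), target)
    loss.backward()
    # Step in the sign of the input gradient, then clamp to valid pixels.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Hypothetical usage with any classifier mapping image batches to logits:
# adv = fgsm_perturb(face_model, photo_tensor, identity_label)
```

The same idea carries over from class logits to the face embeddings that Facial Recognition systems actually compare, which is what makes adversarial perturbations a candidate mechanism for opting out.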