The Fight for Fair Facial Recognition

June 19, 2020

Just a few weeks ago, the murder of George Floyd sparked worldwide protests by the Black community and its allies. Here in the US especially, this has led to an immediate, parallel push for systemic change in domestic law enforcement. It has also revived media attention on the role of technology in perpetuating racial biases in policing, facial recognition technology in particular.

Many companies have made public stances surrounding how they are changing their policies, educating their personnel, or being better allies to the black community. Our industry is no different, as several Big Tech companies announced a moratorium on developing and offering such products for domestic law enforcement.

Big Statements by Big Tech

Notably, IBM CEO Arvind Krishna sent a letter to Congress on June 8th, stating that the company “firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms”. The company urged Congress to start “a national dialogue” on the use of facial recognition technology by domestic law enforcement and the ways it is leveraged. 

Microsoft president Brad Smith followed suit in stating, “We’ve decided that we will not sell facial recognition to police departments in the United States until we have a national law in place grounded in human rights that will govern this technology.” This is unsurprising from Microsoft, as it aligns with the company’s past actions. For example, in April 2019, Microsoft said no to California law enforcement installing its facial recognition technology in cars or body cams, and most recently it divested from its relationship with AnyVision, which allegedly ran a surveillance program in the West Bank.

And last but not least, Amazon implemented a one-year moratorium on “police use” of its facial recognition technology, Rekognition. While it’s surprising the company committed to only a one-year pause, it will be interesting to see how this story unfolds, especially given Amazon’s history of partnering with law enforcement through its Ring product.

These announcements, however, belie the money spent by Big Tech in lobbying for relaxed policies around the sale of their products, particularly their facial recognition technologies. For context, seven tech giants, including Apple, Facebook, Google, Microsoft, and Uber, account for “nearly half a billion dollars in lobbying” over the past decade. Most notably, Amazon spent over $16 million in 2019—a record for the company. This trend gives us a growing sense that future legislation will focus on scoped facial recognition use rather than banning use in law enforcement communities.

Unpacking the Current Landscape

Before we can dive into what the future holds and how facial recognition tech impacts today’s social climate, it’s critical to look at the current landscape. Prevailing legislative trends indicate a lack of federal consensus on facial biometrics in law enforcement. It’s truly a patchwork slew of solutions across the US:

Only three states have banned facial recognition tech in police body cameras.

While it may seem jarring that only three states have banned the practice, it reflects how much more common it is for cities to regulate facial recognition use. Cities that have banned law enforcement use of facial recognition include San Francisco and Oakland, California, and Brookline, Cambridge, Northampton, and Somerville, Massachusetts.

One thing to note amid this regulatory variance is that other states do allow for alternative forms of recourse.

Although some cities and states are actively taking a stand on this technology, the attitude on the horizon seems largely focused on oversight, not bans, despite momentum otherwise. Tellingly, 2019 was both a big year for states and cities to move against facial recognition technology and a big year for industry lobbying and influence. Last September, we saw 39 industry groups advocate for considering “viable alternatives to [facial recognition technology] bans”.

Moreover, there is still little impetus for change at the national level. The recently proposed Justice in Policing Act includes a ban on warrantless biometric facial recognition in police body cameras. Its scope, however, has been criticized for largely missing the primary use cases for facial recognition. For example, it lacks meaningful application to surveillance cameras, even though law enforcement uses them for facial recognition more frequently than some of the other cited use cases. Likewise, there has been no momentum toward regulating federal agencies. Most notable are the FBI, with its database of 640 million photos, and ICE, which has conducted countless permissionless facial recognition searches against state DMV databases.

Another important thing to note is that pure-play vendors have largely been absent from the list of tech companies announcing moratoriums; in fact, they have continued to court law enforcement agencies. Axon wants to continue producing technology that can “help” reduce systemic bias for law enforcement, yet it is also one of the largest police technology companies producing body cams. NEC Corp, a Japanese company that is a leading producer of law enforcement technology and hardware, is also looking to expand its US business, primarily with real-time facial recognition technology. Clearview AI, one of the most controversial facial recognition companies working with law enforcement, has made no statement advocating for a moratorium on or ban of its technology.

On the surface, facial recognition may seem like a foolproof tactic for bringing lawbreakers to justice. But these tools are not yet reliable; notably, there is a significant disparity in how accurately they identify people of color. One MIT study found that facial recognition software had an error rate of 0.8% for light-skinned men and 34.7% for dark-skinned women. Considering that over half of American adults are enrolled in some form of facial recognition network readily searchable by law enforcement officials, this should give us pause and lead us to reconsider current law enforcement use of facial recognition technology.

The OWI Analyst Take

There’s a genuine argument for using local governments as a sort of sandbox for testing the use of biometric data: the different environments and results would provide critical empirical findings that have been absent amid an overall lack of oversight, and could more effectively drive nationwide momentum.

At this time, however, we still don’t have enough insight into how pure-play vendors interact with domestic law enforcement. In part, this is due to the traditional lack of oversight and the obfuscation of partnerships between technology vendors and law enforcement, which usually come to light only in specific post-hoc cases. For example, the ACLU found that Geofeedia, a social media monitoring product, helped law enforcement identify activists and protestors by sourcing data from Facebook, Twitter, and Instagram. Pure-play vendors have also mainly been left out of the media limelight until recently, partly because announcements by Big Tech have overshadowed their continued cooperation with law enforcement. But given the noted algorithmic bias against minorities (particularly in commercial AI systems, as highlighted above) and dissidents, we should be paying closer and more critical attention to pure-play vendors and the serious flaws in using facial recognition tech for law enforcement. The few arrests made now are likely not worth the risk of further exacerbating systemic biases rooted in flawed methodology and historic data.

While there’s much more to unpack, and we’re sure more case studies like those mentioned above will come to light during this period of unrest, we already have enough data and evidence to encourage significant systemic change. Now is the time to call out the risks of facial recognition technology and make the changes necessary to ensure it’s used fairly, accurately, and within reason, if that is at all possible. Doing anything less, as the ACLU puts it so well, “threaten[s] to legitimize the infrastructural expansion of powerful face surveillance technology.”


Looking for more? Check back next Friday for more OWI Analyst Insights.