Insights & Analyses

NYC launches task force on social impact of algorithms

May 17, 2018

Algorithms shape a wide variety of our everyday interactions – from the ads we see online to the frequency of policing in our neighborhoods. Now New York City is undertaking an 18-month study to shed some light on the impact algorithms have across the public sector.

Announced by Mayor Bill de Blasio on Wednesday, the Automated Decision Systems Task Force will review how the city deploys automated decision-making tools across government agencies. The group, composed of experts from city government, academia, law, and tech, will focus in particular on ensuring that the city’s algorithms are leveraged with “equity, fairness and accountability” in mind.

“We are excited to be the first city in the country bringing our best technology and policy minds together to understand how algorithms affect the daily lives of our constituents,” said City Council Speaker Corey Johnson in a statement. “Whether the city has made a decision about school placements, criminal justice, or the provision of social services, this unprecedented legislation gets us one step closer to making algorithms accountable, transparent, and free of potential bias.”

NYC’s push toward greater algorithmic transparency comes after predictive policing programs in Chicago and New Orleans drew scrutiny for potential violations of privacy and due process. Moreover, a series of recent studies has indicated that automated decision-making processes tend to perpetuate biases based on race and gender. In many cases, even the authors of specific algorithms can’t fully explain how the programs they create reach decisions.

The issue of algorithmic bias has attracted the attention of regulators abroad as well. The EU’s General Data Protection Regulation (GDPR) contains new provisions requiring both government agencies and private organizations to disclose when decisions are made via automated processing or profiling, and it gives data subjects the right to object to that processing or to request human intervention rather than relying on code alone.

New York’s Automated Decision Systems Task Force is the result of a law passed in December 2017, and it aims to release a full report of its findings in December 2019.

OWI Take: Often we in the identity space are focused on attribute collection – the particular points of identity data that digital services collect and share about us. But a growing volume of compelling evidence shows that automated processing based on those attributes can be not just opaque, but fundamentally flawed. Removing the bias of individual human decision-makers doesn’t eliminate the problem of trust; it simply relocates that trust to a series of technological tools. We’ll be interested to see how transparent NYC can be in revealing the scope of algorithmic processing in public sector use cases, but in the meantime the issue of “explainable AI” is only going to become more prominent. DARPA already has an entire program dedicated to it.