
A Publication of the Penn Program on Regulation

News | PPR | Oct 10, 2017

How Can We Reveal Bias in Computer Algorithms?


Katie Cramer

A legal scholar and a computer scientist explored how to limit machine learning biases.

Many of us may take for granted that we can create social media profiles using our own
names. But two years ago Facebook made headlines when a number of Native
American users, including Dana Lone Hill and Lance Browneyes, reportedly were
forced to edit their names to gain access. Others apparently faced suspension from the
social media platform under suspicion that they had provided fake names.

Facebook has since changed its name permission policies and, presumably, also
changed the algorithm underlying the suspensions. During a recent workshop at the
University of Pennsylvania Law School, computer scientist Sorelle Friedler highlighted
the Facebook controversy as an example of how bias can creep into computer code in
ways that designers do not foresee when writing it.

Such encoded biases become an even greater concern when government authorities
turn to algorithms for assistance. In March, Friedler, an assistant professor at
Haverford College, joined Andrew Selbst, a lawyer and postdoctoral scholar at the
Data & Society Research Institute, to discuss at the Penn Law workshop how legal
and scientific tools could be used to overcome transparency and accountability
concerns associated with government use of computer algorithms.

Selbst framed the conversation around how authorities could think about regulating
machine-learning systems that generate inscrutable decisions—that is, decisions that
lack intuitive reason. A simple requirement that government provide an explanation
does not suffice when dealing with inscrutable systems, Selbst argued. Instead,
transparency rules for decisions based on machine learning should require the agency
to address three questions: What happened in a given individual case? How are
algorithmic decisions made, based on the algorithm’s underlying logic? And why were
particular normative choices and assumptions built into the computer model?

[Photo: Professor Cary Coglianese moderates discussion among the panelists.]

To help show how government could make machine learning more accountable, Selbst
referred to private sector machine learning practices that are subject to existing
regulations that partially meet his three-part test for an adequate explanation. For
example, two federal laws and accompanying regulations seek to prevent
discrimination and provide consumers with greater transparency in credit decisions.
The policies accomplish these goals by mandating that financial institutions explain to each
denied customer why their credit request was rejected. Selbst explained that requiring
adverse action notices amounts to answering the “what” part of his framework. The


notices—which must include the key reasons for denial, such as insufficient income or
missing record of address—inform customers what happened in their individual cases.
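
To make that concrete, here is a minimal sketch of how a lender using a simple linear
scoring model might derive the key reasons behind a denial. Everything in it (the
features, weights, baseline, and threshold) is hypothetical rather than drawn from any
actual credit system.

```python
# Hypothetical sketch: generating adverse action "key reasons" from a
# simple linear credit-scoring model. The features, weights, baseline,
# and approval threshold are all invented for illustration.

WEIGHTS = {                      # positive weight -> raises the score
    "annual_income_k": 0.8,
    "years_at_address": 0.5,
    "num_delinquencies": -1.2,
}
BASELINE = {"annual_income_k": 60, "years_at_address": 4, "num_delinquencies": 0}
THRESHOLD = 40.0                 # scores below this are denied

def score(applicant):
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def key_reasons(applicant, top_n=2):
    """Rank features by how far they drag the score below a baseline applicant."""
    shortfall = {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}
    worst = sorted(shortfall, key=shortfall.get)[:top_n]
    return [f for f in worst if shortfall[f] < 0]

applicant = {"annual_income_k": 25, "years_at_address": 0, "num_delinquencies": 3}
if score(applicant) < THRESHOLD:
    print("Denied. Principal reasons:", key_reasons(applicant))
    # -> Denied. Principal reasons: ['annual_income_k', 'num_delinquencies']
```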

New regulations in the European Union (EU), meanwhile, answer the “how” part of the
explanation framework, Selbst said. In an effort to shed light on automated decision-
making, the revised EU General Data Protection Regulation requires the company or
agency using machine learning to provide impacted individuals with “meaningful
information about the logic” underlying the algorithm. The new regulations will
become effective in May 2018, but Selbst indicated that the EU’s “meaningful
information” phrasing would help improve transparency and reveal bias if applied to
U.S. agencies’ algorithmic decisions.

Friedler, the computer scientist, explored technological approaches to auditing
machine learning systems for bias. She explained that agencies could check algorithms
for unintended, biased outcomes by creating a second version of their code using the
same inputs to test for similar outputs. Alternatively, engineers could audit
machine-learning systems to understand how much resulting decisions rely on a single
variable. For example, removing income as a factor in issuing a credit decision would
reveal how important an applicant’s earnings are to their credit score.

[Photo: Andrew Selbst discusses transparency and accountability concerns with
government reliance on machine learning.]
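
In code, that kind of single-variable audit might look like the sketch below, which
retrains a toy model without one input and counts how many decisions change. The data,
model, and feature names are synthetic stand-ins, not anything presented at the
workshop.

```python
# Hedged sketch of a single-variable influence audit: retrain the model
# without one input and measure how many decisions change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50, 15, n)       # hypothetical applicant features
debt = rng.normal(20, 8, n)
approved = (income - debt + rng.normal(0, 5, n) > 25).astype(int)

X_full = np.column_stack([income, debt])
X_ablated = debt.reshape(-1, 1)      # same applicants, income removed

full = LogisticRegression().fit(X_full, approved)
ablated = LogisticRegression().fit(X_ablated, approved)

# The fraction of applicants whose decision flips without income is a
# rough measure of how much outcomes depend on that single variable.
flips = np.mean(full.predict(X_full) != ablated.predict(X_ablated))
print(f"Decisions changed by removing income: {flips:.1%}")
```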

Another method would allow auditors to uncover the indirect influence of a certain
variable included in an algorithm, Friedler continued. Sometimes biased outcomes
result from indirect—or proxy—variables, she explained, making the indirect audit an
important tool for regulators who must consider whether policies disproportionately
impact members of protected classes.
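
A simplified way to probe for such proxies is to test how well a seemingly neutral
input predicts the protected attribute itself. The sketch below does this with
invented data and a basic classifier; Friedler’s published auditing techniques are
more sophisticated, but the intuition is similar.

```python
# Simplified stand-in for an indirect-influence audit: check whether a
# seemingly neutral input (zip code) predicts a protected attribute.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(1)
n = 2000
zips = rng.choice(["02118", "60636", "10027", "98052"], n)
# Synthetic residential segregation: protected-class membership varies by zip.
p_by_zip = {"02118": 0.7, "60636": 0.8, "10027": 0.6, "98052": 0.1}
protected = (rng.random(n) < np.vectorize(p_by_zip.get)(zips)).astype(int)

X = OneHotEncoder().fit_transform(zips.reshape(-1, 1))
acc = LogisticRegression().fit(X, protected).score(X, protected)
print(f"Zip code predicts protected-class membership with {acc:.0%} accuracy")
# Accuracy well above the base rate flags zip code as a likely proxy.
```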

Friedler turned to the private sector for an example. When it launched same-day
delivery last year, Amazon discovered how strongly customers’ zip codes were tied to
their race in some U.S. cities. As a result, the e-commerce company’s decision to roll
out the expedited delivery option in a limited number of zip codes meant that majority-

African American neighborhoods were cut out of the service in cities like Boston,
Chicago, and New York. Auditing algorithms for the influence of proxy variables can
help reveal and prevent biased outcomes, Friedler concluded.
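
As a rough illustration, an auditor might quantify such an outcome by comparing
service rates across groups, as in the hypothetical check below; the numbers are
invented, not Amazon’s actual coverage data.

```python
# Hedged illustration: checking a rollout for disparate impact by
# comparing service rates across groups. The records are invented.

# Hypothetical per-zip-code records: (majority group in zip, served?)
rollout = [
    ("white", True), ("white", True), ("white", True), ("white", False),
    ("black", True), ("black", False), ("black", False), ("black", False),
]

def service_rate(group):
    served = [s for g, s in rollout if g == group]
    return sum(served) / len(served)

ratio = service_rate("black") / service_rate("white")
print(f"Coverage ratio: {ratio:.2f}")  # 0.33 in this toy example
# A ratio below roughly 0.8 (the EEOC "four-fifths" rule of thumb)
# would flag the rollout for closer review.
```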

Friedler and Selbst agreed machine learning holds great promise to improve
government decision-making in many policy areas. But they both also emphasized that
policymakers must remain vigilant about detecting real, even if unintended, bias that
can result from machine learning. Both computer science and legal tools can aid in this
effort.

The workshop was the sixth installment of the seven-part Optimizing Government
series, which was supported by the Fels Policy Research Initiative. Cary Coglianese, a
professor at Penn Law and the Director of the Penn Program on Regulation, moderated
the discussion.

This essay is part of a seven-part series, entitled Optimizing Government.

Tagged: Administrative Law, Big Data, European Union, machine learning, regulation

