How Can We Reveal Bias in Computer Algorithms?
Katie Cramer
Many of us may take for granted that we can create social media profiles using our
own names. But two years ago, Facebook made headlines when a number of Native
American users, including Dana Lone Hill and Lance Browneyes, reportedly were
forced to edit their names to regain access to their accounts. Others apparently faced
suspension from the platform under suspicion that they had provided fake names.
Facebook has since changed its name permission policies and, presumably, also
changed the algorithm underlying the suspensions. During a recent workshop at the
University of Pennsylvania Law School, computer scientist Sorelle Friedler highlighted
the Facebook controversy as an example of how bias can creep into computer code in
ways that its designers do not foresee when writing it.
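Friedler’s point can be made concrete with a small sketch. The following Python snippet is a hypothetical name validator, not Facebook’s actual code; the rules and word list are invented, but they show how plausible-looking heuristics can misfire on legitimate names.

```python
# Hypothetical sketch of a naive "real name" validator. This is NOT
# Facebook's actual logic; it only illustrates how reasonable-seeming
# rules can encode bias the designer never foresaw.

COMMON_WORDS = {"lone", "hill", "brown", "eyes"}  # invented word list

def looks_fake(full_name: str) -> bool:
    """Flag a name as suspicious if it has 'too many' parts or if any
    part resembles an ordinary English word."""
    parts = full_name.lower().split()
    if len(parts) > 2:  # Rule 1: more than two name parts is "suspicious"
        return True
    # Rule 2: any part containing a common English word is "suspicious"
    return any(word in part for part in parts for word in COMMON_WORDS)

# Both rules wrongly flag the real names reported in the article:
print(looks_fake("Dana Lone Hill"))   # True  (three name parts)
print(looks_fake("Lance Browneyes"))  # True  ("browneyes" contains "brown")
print(looks_fake("John Smith"))       # False
```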
Such encoded biases become an even greater concern when government authorities
turn to algorithms for assistance. In March, Friedler, an assistant professor at
Haverford College, joined Andrew Selbst, a lawyer and currently a Postdoctoral Scholar
at the Data & Society Research Institute, to discuss at the Penn Law workshop how legal
and scientific tools could be used to overcome transparency and accountability
concerns associated with government use of computer algorithms.
To help show how government could make machine learning more accountable, Selbst
referred to private sector machine learning practices that are subject to existing
regulations that partially meet his three-part test for an adequate explanation. For
example, two federal laws and accompanying regulations seek to prevent
discrimination and provide consumers with greater transparency in credit decisions.
The policies accomplish these goals by mandating that financial institutions explain to
each denied customer why their credit request was rejected. Selbst explained that
requiring
adverse action notices amounts to answering the “what” part of his framework. The
notices, which must include the key reasons for denial, such as insufficient income or a
missing record of address, inform customers what happened in their individual cases.
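To make the “what” concrete, here is a minimal Python sketch of how the key reasons in an adverse action notice could be derived from a simple linear scoring model. The feature names, weights, and threshold are invented for illustration; real notices rely on standardized reason codes.

```python
# Minimal sketch: deriving "key reasons for denial" from a linear credit
# score. Weights, features, and the approval threshold are all invented.

WEIGHTS = {"income": 0.4, "years_at_address": 0.2, "delinquencies": -0.5}
THRESHOLD = 0.5

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list:
    """Return the features contributing least (or most negatively) to the
    score -- the 'what happened in your individual case' explanation."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions, key=contributions.get)[:top_n]

applicant = {"income": 0.2, "years_at_address": 0.1, "delinquencies": 0.9}
if score(applicant) < THRESHOLD:
    print("Denied. Key reasons:", adverse_action_reasons(applicant))
    # -> Denied. Key reasons: ['delinquencies', 'years_at_address']
```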
New regulations in the European Union (EU), meanwhile, answer the “how” part of the
explanation framework, Selbst said. In an effort to shed light on automated decision-
making, the revised EU General Data Protection Regulation requires the company or
agency using machine learning to provide impacted individuals with “meaningful
information about the logic” underlying the algorithm. The new regulations will
become effective in May 2018, but Selbst indicated that the EU’s “meaningful
information” phrasing would help improve transparency and reveal bias if applied to
U.S. agencies’ algorithmic decisions.
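What “meaningful information about the logic” looks like in practice is still being worked out; one plausible reading is a global, human-readable description of the rules a model applies, rather than an account of any single case. A minimal Python sketch, with entirely invented rules:

```python
# Minimal sketch of disclosing a model's "logic": a readable summary of
# the ordered rules a (hypothetical) eligibility model applies.

RULES = [
    ("income < 20000", lambda a: a["income"] < 20000, "deny"),
    ("delinquencies > 2", lambda a: a["delinquencies"] > 2, "deny"),
]

def decide(applicant: dict) -> str:
    for _cond, test, outcome in RULES:
        if test(applicant):
            return outcome
    return "approve"

def describe_logic() -> str:
    """The 'how': the decision rules themselves, independent of any
    individual applicant's data."""
    lines = [f"If {cond}: {outcome}." for cond, _test, outcome in RULES]
    lines.append("Otherwise: approve.")
    return "\n".join(lines)

print(describe_logic())
# If income < 20000: deny.
# If delinquencies > 2: deny.
# Otherwise: approve.
```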
Friedler turned to the private sector for an example. When it launched same-day
delivery last year, Amazon discovered how strongly customers’ zip codes were tied to
their race in some U.S. cities. As a result, the e-commerce company’s decision to roll
out the expedited delivery option in a limited number of zip codes meant that
majority-African American neighborhoods were cut out of the service in cities like
Boston,
Chicago, and New York. Auditing algorithms for the influence of proxy variables can
help reveal and prevent biased outcomes, Friedler concluded.
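The kind of audit Friedler describes can be sketched in a few lines of Python: even when race never appears as an input, a decision keyed to zip code can be tested for disparate impact across groups. All data below are fabricated for illustration.

```python
# Sketch of a proxy-variable audit: compare selection rates across groups
# for a decision that uses only zip code. All data here are fabricated.

from collections import defaultdict

ELIGIBLE_ZIPS = {"02139", "60614"}  # hypothetical rollout list

def offered_service(household: dict) -> bool:
    return household["zip"] in ELIGIBLE_ZIPS  # race-blind rule under audit

households = [
    {"zip": "02139", "race": "white"},
    {"zip": "60614", "race": "white"},
    {"zip": "02121", "race": "black"},
    {"zip": "60637", "race": "black"},
    {"zip": "02139", "race": "black"},
]

# Selection rate per group -- the standard first check for disparate impact.
counts, offers = defaultdict(int), defaultdict(int)
for h in households:
    counts[h["race"]] += 1
    offers[h["race"]] += offered_service(h)

for race in counts:
    print(race, round(offers[race] / counts[race], 2))
# white 1.0
# black 0.33
```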
Friedler and Selbst agreed that machine learning holds great promise to improve
government decision-making in many policy areas. But they both also emphasized that
policymakers must remain vigilant about detecting real, even if unintended, bias that
can result from machine learning. Both computer science and legal tools can aid in this
effort.
The workshop was the sixth installment of the seven-part Optimizing Government
series, which was supported by the Fels Policy Research Initiative. Cary Coglianese, a
professor at Penn Law and the Director of the Penn Program on Regulation, moderated
the discussion.
Tagged: Administrative Law, Big Data, European Union, machine learning, regulation