
Received: 19 September 2017 Revised: 9 May 2018 Accepted: 3 June 2018

DOI: 10.1111/padm.12534

SYMPOSIUM ARTICLE

Public administration, public leadership and the construction of public value in the age of the algorithm and 'big data'
Leighton Andrews

Cardiff Business School, University of Cardiff, Cardiff, UK

Correspondence: Leighton Andrews, Cardiff Business School, University of Cardiff, Aberconway Building, Colum Drive, Cardiff CF10 3EU, UK. Email: andrewsl7@cardiff.ac.uk

Abstract
Public administration scholarship has to a significant degree neglected technological change. The age of the algorithm and 'big data' is throwing up new challenges for public leadership, which are already being confronted by public leaders in different jurisdictions. Algorithms may be perceived as presenting new kinds of 'wicked problems' for public authorities. The article offers a tentative overview of the kind of algorithmic challenges facing public leaders in an environment where the discursive context is shaped by corporate technology companies. Public value theory is assessed as an analytical framework to examine how public leaders are seeking to address the ethical and public value issues affecting governance and regulation, drawing on recent UK experience in particular. The article suggests that this is a fruitful area for future research.

1 | INTRODUCTION

In one week in May 2018 the UK Health Secretary blamed a computer algorithm for errors in cancer screening, South
Wales Police admitted that the facial recognition system they had used had thrown up thousands of 'false positives',
Amnesty International attacked the Metropolitan Police’s Gang Violence Matrix database as being racially discrimina-
tory, and the data profiling company Cambridge Analytica, under legislative scrutiny in the UK and other jurisdictions
over Facebook data harvesting, closed for business (BBC 2018; Burgess 2018; Hansard 2018; Solon and Laughland
The challenges of algorithmic and data governance to public authorities could scarcely have been more dramatically revealed.
Yet technological change has largely been neglected by public administration (Dunleavy 2009; Pollitt 2010,
2012). Some researchers have articulated a new paradigm of ‘digital-era governance’, holding out ‘the promise of a
potential transition to a more genuinely integrated, agile and holistic government’ visible in all its aspects to citizens
and employees alike (Dunleavy et al. 2005). This has been developed further as 'Essentially Digital Governance' idealized
as a 'hypothetical new state' with 'a small intelligent core, informed by big data, its activities restricted mainly to
policy design, while citizens using a range of internet-based platforms would play a major role in devolved delivery,
leading government (at last) to a truly post-bureaucratic, “Information State”’ (Dunleavy and Margetts 2015). More
cautiously, others see digital technology as an enabler of ‘more efficient, transparent and effective government’,
drawing on ‘mobile applications, open data, social media, technical and organizational networks, the Internet of
things, sensors, data analytics’. These may demand ‘new styles of leadership, new decision-making processes, differ-
ent ways of organizing and delivering services, and new concepts of citizenship’ (Gil-Garcia et al. 2018). Others
(Fountain 2001; Borins et al. 2007) recognize the growing role of political leadership in shaping digital government.
While Dunleavy et al. (2006) addressed the power of large IT corporations with long-term outsourcing contracts over
government bodies, concerns about the dominance of the corporate technology sector over government remain
largely at the margins of these accounts.
Technology deployed in the public sector is becoming more and more sophisticated, with many examples of
‘machine learning’ systems:

These models, often colloquially simply referred to as algorithms, are commonly accused of being
inscrutable to the public and even their designers, slipping through processes of democracy and
accountability by being misleadingly cloaked in the neutral language of technology. (Veale
et al. 2018)

These are also accused, the authors note, of ‘replicating problematic biases inherent in the historical datasets
used to train them’.
This article explores a number of research questions. Are there algorithmic risks to the public? Are these algorithmic
risks a form of 'wicked problem' requiring new and transformative solutions? What is the discursive context for
determining policy options? How are public bodies ensuring the ‘governance readiness’ (Lodge and Wegrich 2014) of
their organizations in the age of the algorithm and big data? The article explicitly examines public value theory
(Moore 1995) as an analytical framework to diagnose the actions taken by the UK to be ‘governance ready’ for these
new challenges. The benefit of Moore's theory, modified from his original work, is that it provides a dynamic legitimizing
framework for the development of public value objectives, the gaining of support from the authorizing environment
of the public sphere, and the development of the necessary capacity to act.

2 | DATA, METHODS, THEORY

The article draws on an 18-month qualitative review of academic papers and articles, documentary materials such as
governmental and legislative papers, media content, surveys and reports from consultancies and corporate organizations.
Inductive analysis of this material has been used to develop an overview of algorithmic risks, asking whether
the challenges they pose for public administrators can be considered to be 'wicked problems' (Rittel and Webber
1973). Drawing on the qualitative analysis, the discussion section then examines the ways in which public leaders are
attempting to make sense of the challenges of algorithmic regulation, including some of the policy proposals now
being advanced in the UK in particular. These are live issues and this is a fast-moving field.
The article adopts a multi-theoretical approach to the research questions identified. It makes use of Alford and
Head’s recent (2017) assessments of ‘wicked problems’. Were the article to focus on domain-specific policy chal-
lenges, for example in respect of social media, Kingdon’s multiple streams analysis and Sabatier’s advocacy coalition
framework might have been utilized to examine developments. As the article addresses broader questions of gover-
nance of a new technology, the article draws on Spar’s (2001) four-phase cycle of technological regulation to illus-
trate how the discourse on technological regulation is shaped. The article uses public value theory (PVT) as an
analytical framework to identify how issues of value and ethics have been addressed in public policy, noting that PVT
has rarely been used to underpin regulatory action. In terms of a theoretical contribution it seeks to suggest the need
for greater research into the role of PVT as an analytic framework for deliberative development of governance and
regulatory approaches in new areas where public value is contested.

3 | ALGORITHMS, BIG DATA AND THE SEARCH FOR PUBLIC VALUE

The dictionary definition of an algorithm is 'a process or set of rules to be followed in calculations or other problem-solving
operations, especially by a computer'. Mergel et al. (2016) define big data as: 'high-volume data that frequently
combines highly structured administrative data actively collected by public sector organizations with continuously
and automatically collected structured and unstructured real-time data that are often passively created by
public and private entities through their Internet interactions'. The problem arises in the age of machine learning and
big data, with algorithms which are designed for self-learning and adjustment, but are based, of course, on inbuilt
human judgements or biases at their creation (Diakopoulos 2015; Turing 2017).
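
To make that last point concrete, the following minimal sketch (in Python, with an invented eligibility scenario and data; it is not drawn from the sources cited above) contrasts a hand-written decision rule with a rule 'learned' from historical decisions, showing how human judgements embedded in training data carry over into an ostensibly neutral algorithm.

```python
# Illustrative sketch only: the eligibility scenario, data and threshold rule
# are invented for exposition and are not drawn from the sources cited above.

# An algorithm in the dictionary sense: an explicit, inspectable rule whose
# threshold is a visible human judgement.
def eligible_by_rule(income: float) -> bool:
    return income >= 20_000

# Hypothetical historical decisions: past assessors approved or rejected
# applicants, and those judgements were not neutral.
historical_decisions = [
    (15_000, False), (22_000, True), (30_000, True),
    (18_000, False), (25_000, False),  # a rejection reflecting assessor bias
]

# A trivially 'learned' rule: choose the income threshold that best reproduces
# the historical approvals. Whatever bias shaped those approvals is baked in.
def learn_threshold(decisions):
    candidates = sorted(income for income, _ in decisions)

    def errors(threshold):
        return sum((income >= threshold) != approved for income, approved in decisions)

    return min(candidates, key=errors)

if __name__ == "__main__":
    threshold = learn_threshold(historical_decisions)
    print("Hand-written rule approves 25,000:", eligible_by_rule(25_000))
    print("Threshold learned from past decisions:", threshold)
```
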
Pasquale (2015) says ‘authority is increasingly expressed algorithmically’. Yeung (2017) speaks of ‘algorithmic
power’. This is not, of course, to say that all algorithms require governance or regulatory intervention. Gillespie
(2014) has called certain kinds of algorithms ‘public relevance algorithms’ which have ‘political valence’. In this article
I will examine some of the potential harms or risks which have already been identified as challenges for governance,
since it is the identification of risks which is the likely catalyst for political or governmental action (Beck 1992).
The relevance of PVT is underpinned by two principal research observations. First, values are at the heart of discussions
on algorithms and big data (Mittelstadt et al. 2016). Ethical dilemmas require resolution if big data is to contribute
to public value (Mergel et al. 2016, following boyd and Crawford 2012). These include who can use data and
for what purpose; how can privacy be protected when data can be collected in a variety of ways which collectively
allow the identification of individuals; how can security issues be managed; how much data can be effectively managed
by public bodies and how might 'digital exhaust'—data captured for other purposes—be legitimately used for
public decision-making? Since algorithms are being trained on big data, the issues that they identify have a bearing
on the regulation of algorithms too. Second, as Veale et al. (2018) point out, practitioners are already deploying these
systems in the public sector and ‘are facing immediate, value laden challenges’. They suggest that researchers should
not assume that public sector practitioners are naive about ‘challenges such as fairness and accountability’, and urge
greater engagement, based on trust, between public bodies and researchers.
PVT was originally conceived as a way of assessing quality public management (Jorgensen and Bozeman 2007).
Public value can be thought of both as what the public values and as what adds value to the public sphere, defined
as ‘a democratic space which includes, but is not coterminous with the state’ (Benington 2011). The notion of the
public sphere as conceived by Habermas (1962) is of course itself contested (Lunt and Livingstone 2013), but 'its normative
value remains considerable' (Sparks 2004).
As Bryson et al. (2017) note, PVT’s normative focus was developed via Moore’s strategic triangle, which urges
public managers to be clear about their purpose in creating public value, engage publicly to respond to and shape the
wider authorizing environment of the public sphere, and focus on ensuring that their organization has the necessary
operational capacity to deliver public value. They say this is ‘an easily understandable and useful heuristic guide to
practical reasoning’ for public managers, but argue that there has been little empirical research on the framework in
operation. The authors of the original 2002 UK Cabinet Office paper on the potential role of public value theory in
UK governance argued that PVT’s uses might include ‘government regulation’ (Kelly et al. 2002). Moore (2013) looks
at some cases of the creation of public value by regulatory bodies. PVT has been used to underpin governance of the
BBC, ‘probably the most fully developed set of reflections on public value and the implementation of a public value-
based regime of any UK public body’ (Collins 2007). PVT has had a significant impact as an operational tool across a
variety of EU member states (Donders and Moe 2011). In the public administration literature the application of PVT
in the regulation of the BBC is only briefly referenced (Alford and O'Flynn 2009; Benington and Moore 2011; Williams
and Shearer 2011; Dahl and Soss 2012).

Alford and O’Flynn (2009) argue that PVT is both an empirical and a normative approach, which is capable of
being read as paradigm, rhetoric, narrative and performance measure. Some (Rhodes and Wanna 2007) have argued
that PVT is in danger of eliding the different roles of public managers and political leaders and may have less relevance
in systems following the Westminster Model where there are clear demarcations. However, there are many
examples of regulation in Westminster Model polities where broad principles are established by politicians and the
detail is left to the regulators (Majone 1997), such as OFCOM in the field of UK communications technology policy
(Lunt and Livingstone 2012). It is also possible for democratic polities to lay down principles for ensuring clarity
between public managers and elected political leaders (Public Governance 2005). Moore (2014) acknowledges that
there were weaknesses in his original formulation, and demonstrates his awareness of the importance of political
leadership in setting goals. Meanwhile Hartley et al. (2015) identify political astuteness as a necessary skill for public
managers. Bryson et al. (2017) emphasize Moore's commitment to 'the important role of politicians, political leadership
and politics in public value production in a democratic society'.
One valid criticism has been that of Dahl and Soss (2014) that PVT has largely eschewed ‘foundational questions
of power and conflict’ and devotes little attention ‘to the state’s traditional role as a “countervailing power”’
(Galbraith 1952). Jacobs (2014) has argued that PVT can too easily be incorporated into neo-liberal rationality. However,
Moore (2014) has argued that the word 'value implicitly rejected neoliberal ideas that sought to limit government's
concerns to technical efforts to counter various forms of market failure', reasserting the role of government in
promoting equity and justice, using state authority.
The potential application of public value theory (PVT) to wicked problems has been addressed by Moore himself
(2013). He joins with co-authors (Geuijen et al. 2017) to add further dimensions: separate institutional platforms
(government, civil society, commercial) and multiple ‘spheres’ of action (international, national, state or federal, local
government, grass roots level). With adjustment they suggest that Moore’s strategic triangle is directly relevant to
‘global wicked issues’. They also call attention to the way in which specific political discourses speak only to those
elements of public value which fit their narrative. Morse, meanwhile, identifies 'integrative' public leadership as a process
in which numerous actors from different spheres work together to create public value: public value, therefore, is
‘a social construct’ (Morse 2010).
I will now consider the findings of the empirical research under the headings of algorithmic risks, wicked problems
and the discursive context before turning to a discussion of the relevance of public value theory to governance
readiness for algorithmic problems.

4 | ALGORITHMIC RISKS

Below I develop six broad examples of algorithmic challenges for public policy. My intention is illustrative: it indicates
the broad algorithmic challenges facing public leaders at local, federal, national and international levels, in order to
demonstrate the multi-governmental levels at which administrative and regulatory capacity is having to be built.
The first issue is what I call algorithmic selection error, as witnessed in the UK cancer screening and police facial
recognition algorithms. However, there are also examples of algorithms whose selection mechanisms have been
found to operate in discriminatory ways. These include algorithms designed for checking credit-worthiness, or eligibility
for driving licences, or job applications, for predictive policing, in education or for advertising or other services.
Google’s voice recognition system has been found to have significant issues in recognizing women’s voices (Tatman
2016). Google advertising was more likely to show men high-paying CEO jobs than women (Datta et al. 2015).
Advertising search brought up ads featuring 'arrest' more frequently for black-identifying first names than white-identifying
first names (Sweeney 2013). Facial recognition technology was found to be biased to recognition of white
people (Buolamwini 2017). A predictive policing algorithm resulted in racial targeting of black neighbourhoods (Lum
and Isaac 2016). Algorithmic judgements on individuals’ risks of reoffending were found to be racially biased (Angwin
2016). Meanwhile, teachers were unfairly sacked on the basis of algorithms (O’Neil 2016, 2017a, 2017b).

Second, we are seeing a growing number of cases of algorithmic law-breaking. The car manufacturer Volkswagen
used a ‘defeat device’ to evade emissions limitation legislation. This algorithm recognized when the car was in a com-
pliance test situation rather than a real-time road situation, and activated pollution-controlling software to reduce
exhaust emissions when the car was being tested. When the car was on the road, the pollution controlling devices
were switched off, meaning that higher levels of air pollutants were emitted than under testing. Switching off these
devices resulted in higher on-road performance and better fuel economy than would occur with the fully
active emission control system (Congressional Research Service 2016). Civil and criminal cases were taken forward in
the United States (Environmental Protection Agency 2017). Actions or threats of action have followed in other jurisdictions.
Meanwhile, Uber has not been allowed to operate in some cities, and public officials have put in place measures
to seek to track its attempts to operate where it has been banned. These have included 'sting' operations
whereby city officials seek to use the Uber app to hail rides to demonstrate that the company is operating in breach
of local laws, regulations or agreements. In retaliation, it is said that Uber employees have taken steps to seek to
identify public officials who may be seeking to catch them out, identifying the hailing of rides around civic buildings
as likely attempts at ‘stings’, or seeking to profile public officials from social media and tagging them with a piece of
code that said ‘Greyball’ and a string of numbers. If someone tagged called a car, Uber could mobilize ‘ghost’ cars in a
fake version of their app, or show that no cars were available to be summoned. If drivers picked up anyone flagged
as a ‘Greyball’, Uber might call the driver, instructing them to end the ride (Isaac 2017).
The third issue is algorithmic manipulation or gaming. There has been considerable focus recently on the phenomenon
of 'fake news' and its political influence. Fake news is sustained by advertising revenues derived from
online platforms. More likes, more shares, and more clicks lead to more money for advertisers and platforms (Tambini
2017). Facebook’s vast range makes the platform particularly attractive to advertisers—and its ability to micro-target
audiences, based on the data accumulated about users, and bought from elsewhere (Halpern 2016) is at the heart of
this. The algorithm behind Facebook’s Newsfeed organizes information according to its learned understanding of per-
sonal likes and interests, in order to maintain their attention and keep people on its site (Luckerson 2015; Wu 2016).
Fake news creators target users with emotive news stories designed to appeal to their interests and increase the like-
lihood of these being shared with like-minded partisans. To illustrate, 140 pro-Trump fake news websites were being
run for profit from the single Macedonian town of Veles. Engagement with fake news stories exceeded engagement
with real news stories on Facebook in the months preceding the US Presidential election (Silverman 2016a, 2016b).
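
A minimal sketch of engagement-driven ranking of the kind described above follows. It is purely illustrative: the scoring function, weights and posts are invented, and it is not a description of Facebook's actual Newsfeed system; it simply shows why optimizing a proxy for engagement tends to favour emotive material.

```python
# Purely illustrative sketch of engagement-driven feed ranking; the scoring
# function, weights and posts are invented and do not describe Facebook's
# actual Newsfeed system.

user_interests = {"politics": 0.9, "sport": 0.2}

posts = [
    {"id": 1, "topic": "politics", "emotive_score": 0.95},  # sensationalist item
    {"id": 2, "topic": "politics", "emotive_score": 0.30},  # sober reporting
    {"id": 3, "topic": "sport", "emotive_score": 0.70},
]

def predicted_engagement(post, interests):
    # Engagement is modelled as topic interest weighted by how emotive the item
    # is: optimizing this proxy tends to push emotive content up the feed.
    return interests.get(post["topic"], 0.0) * post["emotive_score"]

ranked = sorted(posts, key=lambda p: predicted_engagement(p, user_interests), reverse=True)
for post in ranked:
    print(post["id"], round(predicted_engagement(post, user_interests), 2))
```
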
Research has shown 'that by mining a person's Facebook “likes”, a computer was able to predict a person's
personality more accurately than most of their friends and family' (Youyou et al. 2015). The micro-targeting of Facebook
advertising during the UK Brexit campaign and US Presidential election by commercial organizations with experience
in psychological warfare or 'psy-ops' has been the subject of a series of news and now regulatory and
legislative investigations in respect of their involvement in the UK Brexit vote (Cadwalladr 2017), the US Presidential
election (Grassegger and Krogerus 2017) and other jurisdictions (Keter 2017). The UK’s Information Commissioner
(2018a) is investigating the political use of private data. The UK Electoral Commission (2018) is examining allegedly
coordinated efforts by the different Brexit ‘Leave’ campaigns. In the United States, the Federal Trade Commission
(FTC) has an open investigation into Facebook's privacy practices (FTC 2018). Tambini et al. (2017) say that these private
companies 'were not designed to play such a significant role in the public sphere. Their codes of practice are
insufficient, they do not make their data transparent, and their proprietary algorithms lack independent oversight.’
These issues are now under scrutiny in legislatures in the USA, the UK and Canada in particular (Senate 2018; House
of Commons 2018a; Parliament of Canada 2018).
The fourth example is what I call algorithmic propaganda. The US Intelligence community—the CIA, FBI and
National Security Agency—stated that Russian propaganda activities in the 2016 US Presidential election campaign
had relied on covert intelligence operations, such as cyber activity, as well as more overt efforts by Russian
government agencies, state-funded media, third-party intermediaries, and paid social media users or trolls, and bots
orchestrated from the Internet Research Agency, a ‘troll farm’ backed by Russia. This was a deliberate attempt to
‘undermine the US-led liberal democratic order … undermine public faith in the US democratic process, denigrate
Secretary Clinton, and harm her electability and potential presidency’. The agencies said they had high confidence in
these judgements (Office of the Director of National Intelligence 2017). Facebook has conceded that Russia-backed
posts reached 126 million Americans; Twitter has suspended 50,000 fake accounts (Solon and Siddiqui 2017;
Swaine 2018).
The fifth risk is algorithmic brand contamination. According to the Interactive Advertising Bureau (IAB), ‘in the
last few years programmatic trading has enjoyed a meteoric rise in the digital ad serving space'. Programmatic advertising
is defined by the IAB as 'the use of automated systems and processes to buy and sell inventory. This includes,
but is not limited to, trading that uses real time bidding auctions' (IAB 2014). Programmatic advertising is personalized
and designed to deliver to consumers in real-time advertising thought to be of interest to them as they surf websites
or social media platforms or search engines. It therefore requires information on the things that are of interest
to them or likely to trigger buying decisions by them. During 2017, a series of newspaper exposés provoked
advertisers to look more closely at where their advertising was being placed. This has resulted in pressure on internet
intermediaries such as Google (particularly in relation to its subsidiary, YouTube) and Facebook, the removal of material,
calls for regulation, and boycotts by advertisers (Mostrous and Dean 2017; Solon 2017; Vizard 2017).
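
To show how the IAB definition above works in practice, here is a minimal, hypothetical real-time bidding sketch. The bidder names, bid logic and second-price rule are illustrative assumptions rather than any exchange's actual mechanics; the point is that bids are driven by the user profile, while the page carrying the ad may barely be scored, which is how brand contamination arises.

```python
# Hypothetical real-time bidding sketch; bidder names, bid logic and the
# second-price rule are illustrative assumptions, not any ad exchange's
# actual mechanics.
from typing import Dict, List, Tuple

def make_bid(advertiser: str, user_profile: Dict[str, float], page_context: str) -> float:
    # Bids are driven mainly by the user profile; the page the ad will appear
    # on may be scored crudely or not at all, which is how reputable brands
    # can end up funding content they would never knowingly sponsor.
    interest_match = user_profile.get("interest_in_" + advertiser, 0.0)
    return round(2.0 * interest_match, 2)  # notional price per thousand impressions

def run_auction(bidders: List[str], user_profile: Dict[str, float], page_context: str) -> Tuple[str, float]:
    bids = sorted(((make_bid(b, user_profile, page_context), b) for b in bidders), reverse=True)
    winning_bid, winner = bids[0]
    clearing_price = bids[1][0] if len(bids) > 1 else winning_bid  # second-price auction
    return winner, clearing_price

if __name__ == "__main__":
    profile = {"interest_in_car_brand": 0.8, "interest_in_travel_firm": 0.5}
    # The impression being auctioned happens to sit on an extremist video page.
    winner, price = run_auction(["car_brand", "travel_firm"], profile, "extremist_video_page")
    print("Winner:", winner, "Clearing price:", price)
```
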
The sixth area is what we might term algorithmic unknowns. This raises the question of whether machine learning
means that algorithms are becoming too complicated for humans to understand or unpick. The notion of technology
'out of control' has been a theme in political thought for centuries (Winner 1977, 1986). Chollet (2018) identifies
the commonly expressed fear ‘that AI will gain an agency of its own, become superhuman, and choose to destroy
us’—the notion of ‘General AI’—as one of the challenges facing AI researchers. Floridi (2017) warns that information
societies are being built without any kind of plan, and that we are surrounded by misinformation about the future,
scaremongering warnings about technological sci-fi scenarios, and ignorance, obscurantism, populism of all kinds. As
boyd and Crawford (2012) argue, 'like other socio-technical phenomena, Big Data triggers both utopian and dystopian
rhetoric'. The notion of 'computational agency' (Tufekci 2015) underpins this sense that things could move
beyond human control. Scientists dispute how long 'General AI', as distinct from artificial intelligence able to operate
in specific domains, will take to develop, or whether it ever will (Stone et al. 2016; Grace et al. 2017). Machine learning's
capacity for producing algorithmic outcomes beyond human understanding has propelled the issue of algorithmic
accountability into prominence, leading to calls for regulatory approaches (Pasquale 2015; Mulgan 2016) and early
engagement with ethical issues (Mittelstadt et al. 2016).

5 | WICKED PROBLEMS?

For Head and Alford (2015), concerns about wicked problems are associated with social pluralism (i.e., multiple stakeholder
interests), institutional complexity (including multilevel governance) and scientific uncertainty. They urge the
development of a scale of problem types, noting Heifetz’s (1994) suggestion of three types: the first or easiest, where
the definition of the problem and the likely solution are clear to the decision-maker; the second where definition is clear
but the solution is not; and the third type where both problem definition and solution are unclear. They note that decisions
on problem definition and solution identification also depend on stakeholder perspectives, drawing on Kingdon
(1984) and Sabatier (1988)—in other words, technical issues are only part of the discussion. Issues are contested—there
are not only ‘cognitive-analytical challenges but also communicative, political and institutional challenges’.
Separately they have argued that the term 'wicked problem' is 'inflated and over-used' and has become 'a totalizing
approach' (Alford and Head 2017). There is pressure for 'a dramatic transformative intervention' rather than
incrementalist approaches. Genuinely wicked problems which are ‘technically complex’ require ‘thoughtful analysis,
dialogue and action’ on the part of affected stakeholders. Wicked problems are more likely to be those which have
structural complexity, are ‘unknowable’—that is, information is hidden, disguised or intangible; where knowledge is
fragmented or has less visibility because of its framing, where there are significant conflicts of interest and unequal
power between stakeholders. They argue for a more contingent approach, therefore, to the identification and classification
of problems which can lead to more appropriate interventions.

Although many of the algorithmic issues might initially appear to be wicked problems, the first five all represent
issues in which regulatory or other state bodies are taking action, where there is a high degree of legislative and
media scrutiny, and where solutions appear to be at hand. While regulators at local, federal, state or international
levels may have had to augment their technical understanding, these largely fall into the area of Heifetz’s first two
types of problem. It is clear that some of the issues raised by big data, algorithms and artificial intelligence may cross
regulatory boundaries: the regulation of political advertising, based on personalized advertisements targeted through
data analysis, to take one example, could engage electoral regulators, media regulators, advertising regulators and
data protection authorities, requiring cross-organizational attention. That makes them complicated, but not necessarily
'wicked'. Algorithms which challenge human comprehension are the ones that could present as 'wicked problems'.
Addressing complex problems can be as much an issue of problem setting as of problem solving. As Schoen
(1983, p. 40) writes, problem setting is:

the process by which we define the decision to be made, the ends to be achieved, the means which may
be chosen. In real-world practice, problems do not present themselves to the practitioners as givens. They
must be constructed from the materials of problematic situations which are puzzling, troubling, and
uncertain.

Within organizations, individuals apply a form of sense-making depending on the social and historical context in
which they find themselves (Weick et al. 2005; Weber and Glynn 2006). Hoppe (2011) argues that there is a useful
heuristic to be found for policy design in thinking through problems in a series of articulated stages: problem sensing,
problem definition and problem solving. This helps us conceive of wicked problems not as static but as evolving and
capable of being shaped and managed. Grint (2010) suggests that ‘the leader’s role with a Wicked Problem … is to
ask the right questions rather than provide the right answers’. The challenge for public leadership in the age of the
algorithm is as much about the framing of problems as their resolution.

6 | THE DISCURSIVE CONTEXT

Information asymmetry between governance and regulatory institutions and technology companies is one of the factors
affecting whether or not a problem might be defined as 'wicked' and solutions found (Danaher et al. 2017).
Power relationships between governments and private actors are unbalanced in the ‘depleted state’ (Lodge 2013),
and private actors have the financial resources to recruit available talent with rewards packages that dwarf those on
offer from government or academia. Technology entrepreneurs, and the companies they control, are able to shape
not only knowledge about but also discourse around the technology, using their ‘control of technical language’
(Marvin 1988) ‘discursively to frame their services and technologies’ (Gillespie 2010), as an example of their per-
ceived ‘thought leadership’ (Drezner 2017) and ‘epistemic authority’ (Coni-Zimmer et al. 2017). In this context, the
word ‘algorithm’ is used to suggest objective decisions shorn of human biases: Facebook’s Trending Topics were ‘sur-
faced by an algorithm’, the company said in 2016 after it was accused of anti-conservative bias (Osofsky 2016).
In Silicon Valley, say Levina and Hasinoff (2017), 'disruption is portrayed as a strategy that both drives technological
progress and improves the market by helping to dismantle ossified government regulations and inefficient
monopolies, which is said to liberate and empower individuals’. This is the doctrine of ‘disruptive innovation’
(Christensen 1997) as expressed in the Facebook formulation ‘Move fast and break things’ (Taplin 2017). As Beck
(1992) noted, the notion of technology as progress has become the hegemonic position. I call this approach ‘Silicon
values’, as opposed to public or human values. In 2016, President Obama made a deliberate and considered defence
of public value over Silicon values, stating that 'government will never run the way Silicon Valley runs' since government
had to deal with problems that no one else wanted to address (White House 2016).

Carr (2016) suggests that political leadership in the ‘information age’ requires understanding that politics can
shape technology. In reaction to the assertion that the internet was ungovernable, Spar (2001) analysed earlier developments
in communications technologies to identify phases of evolution towards rules-based governance, arguing
that when a technology is new, it often looks ungovernable. She identifies four phases of development: innovation,
commercialization, creative anarchy, and rules (see also Kohl 2012). She identifies the challenge of rule-making: that
‘old laws are unlikely to cover emerging technologies and new ones take time to create’. Entrepreneurs may storm
into ‘an unformed market’ planning to dominate it. But soon there becomes a need for clear ownership rules, coordi-
nation of technical standards, and avoidance of monopoly, or regulation where natural monopolies are formed.
Sometimes the pressure comes from the technological pioneers, sometimes their competitors, or ‘sometimes it is the
state, and sometimes a coalition of societal groups affected by the new technology and the market it has wrought’.
Regulation is never neutral: as Moe (1990) said, ‘for most issues, most of the time, a set of organized interest groups
already occupies and structures the upper reaches of political decision making’. He suggests that compromise is often
built into the construction of regulatory institutions, whether they are agencies or laws. Governance and regulation
develop in a contested context.

7 | DISCUSSION

Questions of ethics and value are central to development of governance of the most complex algorithms (Walport
2017). This section will examine the search for public value in policy-making utilizing Moore’s organizing principle of
the strategic triangle (Benington and Moore 2011):

• The development of a clear public purpose
• Management of the authorizing environment
• Development of the relevant capacity.

Moore’s revised ‘philosophical basis’ (2014) for PVT has direct relevance. As Geuijen et al. (2017) argue, setting
the public value goal needs to take into account vindication of rights and enforcement of duties, balancing social
costs and benefits, in the interests of a collectively conceived global just society. Cath et al. (2018) suggest that the
concept of human dignity assumed in the European General Data Protection Regulation (GDPR), which draws on the
1948 Universal Declaration of Human Rights, should be the pivotal concept for the ‘good AI society’. The report on
data governance by the Royal Society and British Academy (2017) argues that ‘the promotion of human flourishing is
the overarching principle that should guide the development of systems of data governance’.
Alford et al. (2017) recognize that in Moore’s account building legitimizing constituencies is a necessary part of
strategic public management and can include 'lawmakers, interest groups, regulators, clients and … the general public'.
Creating the authorizing environment means building a public demand for action. The empirical evidence shows
that that means problematizing, in political terms, the issues which bear on people's everyday lives, rather than algorithms
per se. Establishing any case for action is unlikely to be uncontested. Those with existing power, such as corporate
technology companies, may argue that intervention is both unnecessary and also a threat to innovation. In
some cases, governments partner with them in making policy, as has been the case with Facebook and UK policy on
artificial intelligence (Hall and Pesenti 2017). There may be competing policy priorities: privacy issues may dominate
in one domain but economic competitiveness aspirations may compete with safety concerns in another (for example,
driverless cars). Political challenges cannot be wished away (McConnell 2018). The argument is being played out on a
case-by-case basis in each policy domain, developing wider understanding of the challenges across government,
Parliament and regulatory networks, in the media, and through public engagement.
The debate on algorithmic governance rests within elite political, policy and media circles, although sometimes
structured public dialogue with focus groups, polling and discussions has been carried out (Royal Society/IPSOSMori
2017). The empirical research suggests that specific actions have been undertaken to call into being an authorizing
environment, including:

• a clear mobilizing narrative (Royal Society/British Academy 2017)
• the endorsement of experts in the field (Hall and Pesenti 2017)
• broad, cross-party political endorsement (House of Lords 2018).

The overall conclusion from the Royal Society/IPSOSMori research and further survey evidence from the EU’s
Digital Single Market programme (European Commission 2017) suggests that people are open to exploring the role
of artificial intelligence, although they believe that these technologies require ‘careful management’.
Lodge and Wegrich (2014) highlight four capacities necessary for governance readiness: delivery, regulatory,
coordination, and analytical. These are said to be necessary to address ‘wicked problems’. Such capacities may
include new powers, including in respect of enforcement, like those tabled by the UK government for the Information
Commissioner in 2018 (House of Commons 2018b) or augmented finance, staffing and organizational learning
(Denham 2018; Information Commissioner’s Office 2018b). In terms of algorithms in high-frequency trading, the
objective of the regulator, the Financial Conduct Authority, was ‘not to let the best become the enemy of the good’,
recognizing that ‘perfection is, frankly, an impossibility’ (Wheatley 2014). This illustrates the real-time dilemmas fac-
ing regulators: they need to operate on the basis of judgements and heuristics, rather than on absolutely final
laboratory-controlled tests. Regulatory readiness, therefore, is not a settled state but a dynamic and interactive pro-
cess of learning and adjustment, in which regulators are always, to a degree, ‘catching up’ with the technology
(Gomber and Gsell 2006).
The empirical analysis has identified a number of ways in which governmental institutions are seeking to make
themselves ‘governance ready’ for the algorithmic age. These have included the publication of authoritative scientific
evidence from internal government experts (Walport 2013, 2016; Executive Office of the President 2016); the
commissioning of external ethical and analytical advice (Hall and Pesenti 2017; Royal Society/British Academy
2017); the organization of deliberative encounters with the public and opinion polling (Royal Society/IPSOSMori
2017); formal public consultation (European Commission 2017); the engagement of committees of legislators in
evidence-based inquiries into these areas (European Parliament 2016; House of Commons 2016, 2017, 2018a,
2018c; House of Lords 2016, 2018); the creation of new institutions such as the Alan Turing Institute, the Centre
for Data Ethics and Innovation and the Office for AI (DBIS 2014; DCMS 2018); and sectoral investment (DBEIS/
DCMS 2018).
Empirical analysis of contemporary UK regulatory discussions has identified specific policy solutions advocated
for future regulation of algorithms and big data, which include technical, governance, regulatory, legislative and institutional
solutions (for a fuller summary see Andrews 2017). In addition, there will be sector-specific challenges on
algorithmic regulation (Royal Society 2017). Limited attention appears to have been given to issues of multi-level
governance at local or federal level, although international cooperation has been widely discussed (European Parliament
2016; Cath et al. 2018).

8 | CONCLUSION

Technological change remains under-researched and under-theorized in the public administration literature, but the
technological challenges facing public administration practitioners are growing in complexity. This article has
reviewed some of these in respect of the governance of algorithms, big data, machine learning and artificial intelligence
as they are presented in the media and public policy context. The article identifies that certain kinds of algorithms
may be considered 'wicked problems', but that others are being addressed through existing laws such as data
protection, privacy and equality and human rights laws, or regulatory procedures. Regulators may have to develop
new capacity to make sense of the challenges that are arising, and in some cases laws may have to be updated. In
respect of algorithmic challenges which do not test the boundaries of human comprehension, there is a need for
domain-by-domain analysis of the challenges and likely future risks (Reisman et al. 2018). Regulators need to give
attention to the ways in which problems are constructed by market participants, including large corporate technology
companies. Algorithms whose workings raise issues that challenge human comprehension—often identified as ‘black
boxes’ (Pasquale 2015)—should be considered ‘wicked problems’, and the article extends our understanding of the
nature of ‘wicked problems’ in that light.
The article has also considered whether public value theory can be considered as an analytical framework for
examining how regulators and governments address complex and novel issues. The article discusses this in the specific
context of work in the UK on data and algorithmic governance. The empirical analysis outlined here suggests
that it can. Moore’s original (1995) work developed PVT from detailed case examination of the ways in which public
managers conducted themselves. More recently, he and others (Geuijen et al. 2017) have considered how PVT might
be utilized to address wicked problems. In this article I have looked at how those engaged in issues of data and algorithmic
governance have clearly identified a public value objective, explored the issues raised deliberatively in a constructed
if tentative 'authorizing environment', and considered whether the necessary governance capacity exists,
leading to specific recommendations such as the creation of a new Centre for Data Ethics and Innovation, now being
established ‘with a specific remit for algorithms’ (House of Commons 2018c) and other capacity-building measures
(House of Lords 2018). Just as public value was being developed before Moore constructed PVT, so data scientists,
ethicists, lawyers and public leaders are creating public value in a new field of governance, even if PVT is not explicitly
cited as underpinning their work. Clearly, there is a need for more research into the use of PVT as both an analytical
and normative framework for regulatory assessment, using case studies, qualitative interviews, documentary
analysis and quantitative modelling. This might include empirical analysis of how regulators address new challenges
on a case-by-case, domain-by-domain or comparative basis. The article therefore hints at new and fruitful ways in
which PVT might be explored in governance and regulatory contexts, giving additional support to Moore’s (2014)
philosophical analysis.

ACKNOWLEDGEMENTS
The author would like to thank the three independent reviewers whose comments helped re-frame the argument
and strengthen its focus.

ORCID

Leighton Andrews https://orcid.org/0000-0001-9166-0116

REFERENCES
Alford, J., & Head, B. W. (2017). Wicked and less wicked problems: A typology and a contingency framework. Policy and Soci-
ety, 36, 397–413.
Alford, J., Douglas, S., Geuijen, K., & t’Hart, P. (2017). Ventures in public value management: Introduction to the symposium.
Public Management Review, 19, 589–604.
Alford, J., & O’Flynn, J. (2009). Making sense of public value: Concepts, critiques and emergent meanings. International
Journal of Public Administration, 32, 171–191.
Andrews, L. (2017). Algorithms, governance and regulation: Beyond ‘the necessary hashtags’. In L. Andrews, B. Benbouzid,
J. Brice, L. A. Bygrave, D. Demortain, A. Griffiths, … & K. Yeung. Algorithmic regulation. LSE Discussion Paper
85, September.
Angwin, J. (2016). Make algorithms accountable. New York Times, 1 August. https://mobile.nytimes.com/2016/08/01/
opinion/make-algorithms-accountable.html?referer=&_r=0
BBC (2018). Met Police 'gangs matrix' 'not fit for purpose'. 9 May. http://www.bbc.co.uk/news/
uk-england-london-44045914
Beck, U. (1992). Risk society. London: Sage.

Benington, J. (2011). From private choice to public value? In J. Benington & M. Moore (Eds.), Public value: Theory and practice
(pp. 31–51). London: Palgrave Macmillan.
Benington, J., & Moore, M. (Eds.) (2011). Public value: Theory and practice. London: Palgrave Macmillan.
Borins, S., Kernaghan, K., Brown, D., Bontis, M., 6, P., & Thompson, F. (2007). Digital state at the leading edge. Toronto: Uni-
versity of Toronto Press.
boyd, d., & Crawford, K. (2012). Critical questions for big data. Information, Communication and Society, 15, 662–679.
Bryson, J., Sancino, A., Benington, J., & Sorensen, E. (2017). Towards a multi-actor theory of public value co-creation. Public
Management Review, 19, 640–654.
Buolamwini, J. (2017) When algorithms are racist. Observer, 28 May. https://www.theguardian.com/technology/2017/
may/28/joy-buolamwini-when-algorithms-are-racist-facial-recognition-bias
Burgess, M. (2018). Facial recognition tech used by UK police is making a ton of mistakes. WIRED, 4 May. https://www.
wired.co.uk/article/face-recognition-police-uk-south-wales-met-notting-hill-carnival
Cadwalladr, C. (2017). The great British Brexit robbery: How our democracy was hijacked. Observer, 7 May.
Carr, M. (2016). US power and the internet in international relations: The irony of the information age. London: Palgrave
Macmillan.
Cath, C. J. N., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and the 'good society': The
US, EU, and UK approach. Science and Engineering Ethics, 24, 505–528.
Chollet, F. (2018). What worries me about AI. Medium, 28 March. https://medium.com/@francois.chollet/
what-worries-me-about-ai-ed9df072b704
Christensen, C. M. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Boston, MA: Harvard Busi-
ness School Press.
Collins, R. (2007). The BBC and ‘public value’. M&K Medien und Kommunikationswissenschaft, 55, 164–184.
Congressional Research Service (2016). Volkswagen, defeat devices, and the Clean Air Act: Frequently asked questions.
1 September. https://fas.org/sgp/crs/misc/R44372.pdf
Coni-Zimmer, M., Wolf, K. D., & Collin, P. (2017). Editorial to the Issue on Legitimization of Private and Public Regulation:
Past and Present. Politics and Governance, 5, 1–5.
Dahl, A., & Soss, J. (2012). Neoliberalism for the common good: Public value governance and the downsizing of democracy. Cen-
tre for Integrative Leadership, University of Minnesota.
Dahl, A., & Soss, J. (2014). Neoliberalism for the common good: Public value governance and the downsizing of democracy.
Public Administration Review, 74, 496–504.
Danaher, P., Hogan, M. J., Noone, C., Kennedy, R., Behan, A., De Paor, A., … & Shankar, K. (2017). Algorithmic governance:
Developing a research agenda through the power of collective intelligence. Big Data and Society, 4, 1–21.
Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing
Technologies, 2015, 92–112.
DBIS (Department for Business, Innovation and Science) (2014). Plans for world class research centre in the UK. 19 March.
https://www.gov.uk/government/news/plans-for-world-class-research-centre-in-the-uk
DBEIS/DCMS (Department for Business, Energy and Industrial Strategy/Department of Digital, Culture, Media and Sport)
(2018). AI sector deal. 26 April. https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/
ai-sector-deal#foreword
DCMS (Department of Digital, Culture, Media and Sport) (2018). Search for leader of Centre for Data Ethics and Innovation
launched. 18 January. https://www.gov.uk/government/news/search-for-leader-of-centre-for-data-ethics-and-
innovation-launched
Denham, E. (2018). Elizabeth Denham’s keynote speech at the IAPP Europe Data Protection Intensive 2018. 18 April.
https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2018/04/iapp-europe-data-protection-intensive-2018/
Diakopoulos, N. (2015). Algorithmic accountability. Digital Journalism, 3, 398–415.
Donders, K., & Moe, H. (2011). Exporting the public value test. Goteburg: Nordicom.
Drezner, D. (2017). The ideas industry. Oxford: Oxford University Press.
Dunleavy, P. (2009). Governance and state organization in the digital era. In C. Avgerou, R. Mansell, D. Quah, &
R. Silverstone (Eds.), The Oxford handbook of information and communication technologies. Oxford: Oxford University
Press.
Dunleavy, P., & Margetts, H. (2015). Design principles for essentially digital governance. LSE Research Online. http://eprints.
lse.ac.uk/64125/
Dunleavy, P., Margetts, H., Bastow, S., & Tinkler, J. (2005). New Public Management is dead: Long live digital-era gover-
nance. Journal of Public Administration Research and Theory, 16, 467–494.
Dunleavy, P., Margetts, H., Bastow, S., & Tinkler, J. (2006). Digital era governance: IT corporations, the state and e-government.
Oxford: Oxford University Press.
Electoral Commission (2018). Electoral Commission statement on investigation into Leave.EU. 21 April. https://www.
electoralcommission.org.uk/i-am-a/journalist/electoral-commission-media-centre/news-releases-referendums/electoral-
commission-statement-on-investigation-into-leave.eu

Environmental Protection Agency (2017). Volkswagen Clean Air Act civil settlement. http://www.epaarchive.cc/
enforcement/volkswagen-clean-air-act-partial-settlement.html
European Commission (2017). Attitudes towards the impact of digitisation and automation on daily life. Digital Single Market
programme. 10 May. https://ec.europa.eu/digital-single-market/en/news/attitudes-towards-impact-digitisation-and-aut
omation-daily-life
European Parliament (2016). Draft report with recommendations to the Commission on Civil Law Rules on Robotics
(2015/2103(INL)) Committee on Legal Affairs, 31 May. http://www.europarl.europa.eu/sides/getDoc.do?pubRef=
-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN
Executive Office of the President (2016). Artificial intelligence, automation and the economy. Washington, DC. https://
obamawhitehouse.archives.gov/blog/2016/12/20/artificial-intelligence-automation-and-economy
Federal Trade Commission (2018). Statement by the Acting Director of FTC’s Bureau of Consumer Protection regarding
reported concerns about Facebook privacy practices. 26 March. https://www.ftc.gov/news-events/
press-releases/2018/03/statement-acting-director-ftcs-bureau-consumer-protection
Floridi, L. (2017). Information societies are being cobbled together without a plan. The National, 5 May. http://www.
thenational.ae/opinion/comment/information-societies-are-being-cobbled-together-without-a-plan?platform=hootsuite
Fountain, J. E. (2001). Building the virtual state. Washington, DC: Brookings Institution.
Galbraith, J. K. (1952). American capitalism: The concept of countervailing power. London: Penguin.
Geuijen, K., Moore, M., Cederquist, A., Ronning, R., & van Twist, M. (2017). Creating public value in global wicked problems.
Public Management Review, 19, 621–639.
Gil-Garcia, J. R., Dawes, S. S., & Pardo, T. A. (2018). Digital government and public management research: Finding the cross-
roads. Public Management Review, 20, 633–646.
Gillespie, T. (2010). The politics of platforms. New Media & Society, 12, 347–364.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies:
Essays on communication, materiality, and society (pp. 167–193). Cambridge, MA: MIT Press.
Gomber, P., & Gsell, M. (2006). Catching up with technology: The impact of regulatory changes on ECNs/MTFs and the trad-
ing venue landscape in Europe. Competition and Regulation in Network Industries, 1, 535–557.
Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2017). When will AI exceed human performance? Evidence from AI
experts. Arxiv e-prints, 30 May. https://arxiv.org/abs/1705.08807
Grassegger, H., & Krogerus, M. (2017). The data that turned the world upside down. Motherboard, Vice.com, 28 January.
https://motherboard.vice.com/en_us/article/mg9vvn/how-our-likes-helped-trump-win
Grint, K. (2010). Wicked problems and clumsy solutions: The role of leadership. In S. Brookes & K. Grint (Eds.), The new public
leadership challenge (pp. 169–186). London: Palgrave Macmillan.
Habermas, J. (1962). The structural transformation of the public sphere. Boston, MA: Beacon Press.
Hall, W., & Pesenti, J. (2017). Growing the artificial intelligence industry in the United Kingdom. 15 October. https://www.gov.
uk/government/publications/growing-the-artificial-intelligence-industry-in-the-uk
Halpern, S. (2016). They have, right now, another you. New York Review of Books, 22 December. http://www.nybooks.com/
articles/2016/12/22/they-have-right-now-another-you/
Hansard (2018). House of Commons oral statement: ‘Breast cancer screening’. https://hansard.parliament.uk/
commons/2018-05-02/debates/BE9DB48A-C9FF-401B-AC54-FF53BC5BD83E/BreastCancerScreening
Hartley, J., Alford, J., & Hughes, O. (2015). Political astuteness as an aid to discerning and creating public value. In
J. A. Bryson, B. C. Crosby, & L. Bloomberg (Eds.), Public value and public administration (pp. 25–38). Washington, DC:
Georgetown University Press.
Head, B. W., & Alford, J. (2015). Wicked problems: Implications for public policy and management. Administration and Soci-
ety, 47, 711–739.
Heifetz, R. A. (1994). Leadership with no easy answers. Cambridge, MA: Harvard University Press.
Hoppe, R. (2011). The governance of problems. Bristol: Policy Press.
House of Commons (2016). Select Committee on Science and Technology, Robotics and artificial intelligence, 5th Report of
2016–17. https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/14502.htm
House of Commons (2017). Select Committee on Science and Technology, Algorithms in decision-making inquiry. https://
www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/
inquiries/parliament-2015/inquiry9/
House of Commons (2018a). Select Committee on Digital, Culture, Media and Sport, Fake news inquiry. https://www.
parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/inquiries/
parliament-2017/fake-news-17-19/
House of Commons (2018b). Notices of Amendments, 27 April. https://publications.parliament.uk/pa/bills/
cbill/2017-2019/0190/amend/data_rm_rep_0427.1-7.html
House of Commons (2018c). Select Committee on Science and Technology, Algorithms in decision-making inquiry, Oral evi-
dence, 23 January. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/
science-and-technology-committee/algorithms-in-decisionmaking/oral/77536.html

House of Lords (2016). EU Internal Market Sub-committee, Online platforms and the EU Digital Single Market inquiry. https://
www.parliament.uk/online-platforms
House of Lords (2018). Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? HL Paper 100, 16 April.
Retrieved from: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm
Information Commissioner’s Office (2018a). ICO statement: Investigation into data analytics for political purposes. https://
ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2018/05/ico-statement-investigation-into-data-analytics-
for-political-purposes/
Information Commissioner’s Office (2018b). Technology strategy, 2018–2021. https://ico.org.uk/about-the-ico/
our-information/our-strategies-and-plans/
Interactive Advertising Bureau (2014). The programmatic handbook. https://iabuk.net/sites/default/files/The%
20Programmatic%20Handbook.pdf
Isaac, M. (2017). How Uber deceived the authorities worldwide. New York Times, 3 March. https://www.nytimes.
com/2017/03/03/technology/uber-greyball-program-evade-authorities.html?_r=0
Jacobs, L. R. (2014). The contested politics of public value. Public Administration Review, 74, 480–494.
Jorgensen, T. B., & Bozeman, B. (2007). Public values: An inventory. Administration and Society, 39, 354–381.
Kelly, G., Mulgan, G., & Muers, S. (2002). Creating public value: An analytical framework for public service reform. Strategy Unit,
Cabinet Office. http://webarchive.nationalarchives.gov.uk/20100407164622/http://www.cabinetoffice.gov.uk/
strategy/seminars/public_value.aspx
Keter, G. (2017). Uhuru hires firm behind Trump, Brexit victories. The Star, Kenya, 10 May, http://www.the-star.co.ke/
news/2017/05/10/uhuru-hires-data-firm-behind-trump-brexit-victories_c1557720
Kingdon, J. W. (1984). Agendas, alternatives and public policies. London: Longman.
Kohl, U. (2012). The rise and rise of online intermediaries in the governance of the Internet and beyond—connectivity inter-
mediaries. International Review of Law, Computers and Technology, 26, 185–210.
Levina, M., & Hasinoff, A. A. (2017). The Silicon Valley ethos: Tech industry products, discourses, and practices. Television
and New Media, 18, 489–495.
Lodge, M. (2013). Crisis, resources and the state: Executive politics in the age of the depleted state. Political Studies Review,
11, 378–390.
Lodge, M., & Wegrich, K. (2014). Setting the scene: Challenges to the state, governance readiness, and administrative capaci-
ties. In M. Lodge & K. Wegrich (Eds.), The governance report (pp. 15–26). Hertie School of Governance. Oxford: Oxford
University Press.
Luckerson, V. (2015). Here’s how Facebook’s News Feed really works. Time, 9 July. http://time.com/collection-post/3950525/
facebook-news-feed-algorithm/
Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13, 14–19.
Lunt, P., & Livingstone, S. (2012). Media regulation. London: Sage.
Lunt, P., & Livingstone, S. (2013). Media studies’ fascination with the concept of the public sphere: Critical reflections and
emerging debates. Media, Culture and Society, 35, 87–96.
Majone, G. (1997). From the positive to the regulatory state: Causes and consequences of change in the mode of gover-
nance. Journal of Public Policy, 17, 139–167.
Marvin, C. (1988). When old technologies were new. Oxford: Oxford University Press.
McConnell, A. (2018). Rethinking wicked problems as political problems and policy problems. Policy & Politics, 46, 165–180.
Mergel, I., Rethemeyer, R. K., & Isett, K. (2016). Big data in public affairs. Public Administration Review, 76, 928–937.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big
Data and Society, 3, 1–12.
Moe, T. (1990). The politics of structural choice: Toward a theory of public bureaucracy. Republished in Williamson, O. E.
(Ed.) (1995). Organization theory: From Chester Barnard to the present and beyond. Oxford: Oxford University Press.
Moore, M. (1995). Creating public value. Cambridge, MA: Harvard University Press.
Moore, M. (2013). Recognising public value. Cambridge, MA: Harvard University Press.
Moore, M. (2014). Public value accounting: Establishing the philosophical basis. Public Administration Review, 74, 465–477.
Morse, R. (2010). Integrative public leadership: Catalysing collaboration to create public value. Leadership Quarterly, 21,
231–245.
Mostrous, A., & Dean, J. (2017). Top brands pull Google adverts in protest at hate video links. The Times, 23 March.
Mulgan, G. (2016). A machine intelligence commission for the UK: How to grow informed public trust and maximise the pos-
itive impact of smart machines. http://www.nesta.org.uk/sites/default/files/a_machine_intelligence_commission_for_
the_uk_-_geoff_mulgan.pdf
Office of the Director of National Intelligence (2017). Background to ‘Assessing Russian Activities and Intentions in Recent
US Elections’: The analytic process and cyber incident attribution. https://www.intelligence.senate.gov/sites/default/
files/documents/ICA_2017_01.pdf
O’Neil, C. (2016). Weapons of math destruction. New York: Crown Random House.
O’Neil, C. (2017a). The math whizzes who nearly brought down Wall Street. Saturday Evening Post, March/April. http://
www.saturdayeveningpost.com/2017/04/03/in-the-magazine/weapons-math-destruction.html
O’Neil, C. (2017b). Don't grade teachers with a bad algorithm. Bloomberg View, 15 May. https://www.bloomberg.com/view/
articles/2017-05-15/don-t-grade-teachers-with-a-bad-algorithm
Osofsky, J. (2016). Information about Trending Topics. Facebook Newsroom. https://newsroom.fb.com/news/2016/05/
information-about-trending-topics/
Parliament of Canada (2018). Committee evidence, 26 April. https://www.ourcommons.ca/Parliamentarians/en/
PublicationSearch?targetLang=&Text=AggregateIQ+Data+Services+Ltd.&PubType=40017&ParlSes=&Topic=&Proc=&
Per=&com=&oob=&PubId=&Cauc=&Prov=&PartType=&Page=1&RPP=15#
Pasquale, F. (2015). The black box society. Cambridge, MA: Harvard University Press.
Pollitt, C. (2010). Technological change: A central yet neglected feature of public administration. NISPAcee Journal of Public
Administration and Policy, 3, 31–53.
Pollitt, C. (2012). Time, policy, management: Governing with the past. Oxford: Oxford University Press.
Public Governance (2005). Code for Chief Executive Excellence in Denmark. http://publicgovernance.dk/?siteid=672&menu_
start=778&menu_start1=672
Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic impact assessments: A practical framework for public
agency accountability. AI Now Institute, 9 April. https://ainowinstitute.org/reports.html
Rhodes, R. A. W., & Wanna, J. (2007). The limits to public value, or rescuing responsible government from the Platonic guard-
ians. Australian Journal of Public Administration, 66, 406–421.
Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155–169.
Royal Society (2017). Written evidence to the House of Commons Science and Technology Committee. http://data.
parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/
algorithms-in-decisionmaking/written/69105.pdf
Royal Society and British Academy (2017). Data management and use: Governance in the 21st century. https://royalsociety.
org/topics-policy/projects/data-governance/
Royal Society/IPSOSMori (2017). Public views of machine learning. April. Retrieved from: https://royalsociety.org/~/media/
policy/projects/machine-learning/digital-natives-16-10-2017.pdf
Sabatier, P. (1988). An advocacy coalition framework of policy change and the role of policy-oriented learning therein. Policy
Sciences, 21, 129–168.
Schoen, D. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.
Senate (2018). US Senate Committee on Commerce, Science and Transportation, 10 April. https://www.commerce.senate.
gov/public/index.cfm/2018/4/facebook-social-media-privacy-and-the-use-and-abuse-of-data
Silverman, C. (2016a). How teens in the Balkans are duping Trump supporters with fake news. 3 November. https://www.
buzzfeed.com/craigsilverman/how-macedonia-became-a-global-hub-for-pro-trumpmisinfo?utm_term=.qldRxB71G#.
nrxmZgPA8
Silverman, C. (2016b). This analysis shows how viral fake election news stories outperformed real news on Facebook.
16 November. https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-onfaceboo
k?utm_term=.gb9VMa70Z#.jwp5XPv94
Solon, O. (2017). Google's bad week: YouTube loses millions as advertising row reaches US. Observer, 25 March.
Solon, O., & Laughland, O. (2018). Cambridge Analytica closing after Facebook data harvesting scandal. Guardian, 2 May.
https://www.theguardian.com/uk-news/2018/may/02/
cambridge-analytica-closing-down-after-facebook-row-reports-say
Solon, O., & Siddiqui, S. (2017). Russia-backed Facebook posts 'reached 126m Americans' during US election. Guardian,
31 October. https://www.theguardian.com/technology/2017/oct/30/facebook-russia-fake-accounts-126-million
Spar, D. (2001). Ruling the waves. New York: Harcourt.
Sparks, C. (2004). The global, the local and the public sphere. In R. C. Allen & A. Hill (Eds.), The television studies reader (pp.
139–150). London: Routledge.
Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., … & Teller, A. (2016). Artificial intelligence and life in
2030: One hundred year study on artificial intelligence: Report of the 2015–2016 Study Panel. Stanford, CA: Stanford
University, September 2016. http://ai100.stanford.edu/2016-report
Swaine, J. (2018). Twitter admits far more Russian bots posted on election than it had disclosed. Guardian, 20 January.
https://www.theguardian.com/technology/2018/jan/19/
twitter-admits-far-more-russian-bots-posted-on-election-than-it-had-disclosed
Sweeney, L. (2013). Discrimination in online ad delivery. Queue, 11(3). http://dl.acm.org/citation.cfm?id=2460276
Tambini, D. (2017). How advertising fuels fake news. Inforrm Blog, 26 February. https://inforrm.wordpress.
com/2017/02/26/how-advertising-fuels-fake-news-damian-tambini/
Tambini, D., Labo, S., Goodman, E., & Moore, M. (2017). The new political campaigning. LSE Media Policy Project, Media Pol-
icy Brief 19.
Taplin, J. (2017). Move fast and break things. London: Macmillan.
Tatman, R. (2016). Google’s speech recognition has a gender bias. https://makingnoiseandhearingthings.com/2016/07/12/
googles-speech-recognition-has-a-gender-bias/
Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Colorado
Technology Law Journal, 13, 203.
Turing, A. (2017). How can we design fair, transparent, and accountable AI and robotics? Alan Turing Institute Blog retrieved
from: https://www.turing.ac.uk/blog/can-design-fair-transparent-accountable-ai-robotics/
Veale, M., van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes
public sector decision-making. CHI 2018 paper, April. Retrieved from: https://arxiv.org/pdf/1802.01029.pdf
Vizard, S. (2017). Vodafone blocks ads from appearing on sites that promote hate speech or fake news. Marketing Week,
6 June.
Walport, M. (2013). Letter to the Prime Minister ‘The Age of Algorithms’. Council for Science and Technology. https://www.
gov.uk/government/uploads/system/uploads/attachment_data/file/224953/13-923-age-of-algorithms-letter-to-prime-
minister__1_.pdf
Walport, M. (2016). Letter to the Prime Minister ‘Robotics, automation and artificial intelligence (RAAI)’. Council for Science
and Technology. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/592423/Robotics_
automation_and_artificial_intelligence_-_cst_letter.pdf
Walport, M. (2017). When a computer judges, how do we judge the computer? WIRED, April.
Weber, K., & Glynn, M. A. (2006). Making sense with institutions: Context, thought and action in Karl Weick’s theory. Organi-
zation Studies, 27, 1639–1660.
Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the process of sensemaking. Organization Science, 16,
409–421.
Wheatley, M. (2014). Regulating high frequency trading. Speech by Martin Wheatley, CEO, the FCA, at the Global Exchange
and Brokerage conference, New York, 4 June. https://www.fca.org.uk/news/speeches/regulating-high-frequency-trading
White House (2016). Remarks by the President in Opening Remarks and Panel Discussion at White House Frontiers Confer-
ence, Office of the Press Secretary, The White House, 13 October. https://www.whitehouse.gov/
the-press-office/2016/10/13/remarks-president-opening-remarks-and-panel-discussion-white-house
Williams, I., & Shearer, H. (2011). Appraising public value: Past, present, and futures. Public Administration, 89, 1367–1384.
Winner, L. (1977). Autonomous technology. Cambridge, MA: MIT Press.
Winner, L. (1986). Do artifacts have politics? In L. Winner (Ed.), The whale and the reactor: A search for limits in an age of high
technology (pp. 19–39). Chicago, IL: University of Chicago Press.
Wu, T. (2016). The attention merchants: The epic scramble to get inside our heads. New York: Knopf.
Yeung, K. (2017). Algorithmic regulation: A critical interrogation. TLI Think! Paper 62/2017; Regulation & Governance. King's
College London Law School Research Paper No. 2017–27. Available at SSRN: https://ssrn.com/abstract=2972505
Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made
by humans. Proceedings of the National Academy of Sciences, USA, 112, 1036–1040.

How to cite this article: Andrews L. Public administration, public leadership and the construction of public
value in the age of the algorithm and ‘big data’. Public Admin. 2019;97:296–310. https://doi.org/10.1111/
padm.12534
