AI As Agency Without Intelligence
Luciano Floridi 1,2
1 Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS, UK
2 Department of Legal Studies, University of Bologna, Via Zamboni, 27/29, 40126, Bologna, IT
Email of corresponding author: luciano.floridi@oii.ox.ac.uk
Abstract
The article discusses recent advances in artificial intelligence (AI) and the development of large language models (LLMs) such as ChatGPT. It argues that these LLMs can process texts with extraordinary success, often in a way that is indistinguishable from human output, while lacking any intelligence, understanding, or cognitive ability. It also highlights the limitations of these LLMs, such as their brittleness (susceptibility to catastrophic failure), unreliability (false or made-up information), and occasional inability to make elementary logical inferences or deal with simple mathematics. The article concludes that LLMs represent a decoupling of agency and intelligence. While extremely powerful and potentially very useful, they should not be relied upon for complex reasoning or crucial information, but could be used to gain a deeper understanding of a text’s content and context, rather than as a replacement for human input. The best author is neither an LLM nor a human being, but a human being using an LLM proficiently and insightfully.
LLM Disclaimer: this manuscript has been written using ChatGPT Jan 30 Version
to generate the three Figures.
The first idea is old: all possible texts are already present in the dictionary, the difference is made by the syntax, that is, by how the dictionary words are structured into sentences (Borges 2000). The second idea is old: all the words in the dictionary are present in the alphabet, the difference is made by morphology, that is, by how the letters of the alphabet are structured into words (Clarke 1967). The third idea is old: all the letters are present in the digital code, the difference is made by how the finite strings of zeros and ones of the digital code are structured into letters (Lodder 2008). The fourth idea is also old: all strings of zeros and ones are present in the two states of computational devices (Mano 1979). But the fifth idea is revolutionary: today, AI systems can process texts with extraordinary success, often in a way that is indistinguishable from how human beings would be able to do it. These are the so-called large language models (LLMs).
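To see the layering just described concretely, here is a minimal sketch in Python (my illustration, with an arbitrary example sentence) of how strings of zeros and ones are structured into letters, and letters into words:

# Strings of zeros and ones -> letters -> words (8-bit ASCII as the code).
sentence = "AI acts"
bits = [format(ord(ch), "08b") for ch in sentence]   # each letter as a string of 0s and 1s
print(" ".join(bits))
recovered = "".join(chr(int(b, 2)) for b in bits)    # the same strings, back into letters
print(recovered)                                     # -> "AI acts"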
The most famous LLMs are GPT3, ChatGPT (also known as GPT3.5) and Bard.1 These systems do not think, reason or understand; they are not a step towards any sci-fi AI; and they have nothing to do with the cognitive processes, present in the animal world and, above all, in the human brain and mind, that successfully manage semantic contents (Bishop 2021). However, with the staggering growth of available data, the quantity and speed of calculation, and ever-better algorithms, they can do statistically – that is, working on the formal structure of the texts, and not on their meaning – what we do semantically.
1 To be precise, LaMDA (Language Model for Dialogue Applications) is Google’s language model; Bard is the name of the service.
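What “doing statistically” amounts to can be illustrated with a deliberately naive sketch in Python: a toy generator that produces text purely from counts of which word follows which, with no semantics anywhere. Real LLMs rely on neural networks with billions of parameters, not on a table of counts, but the spirit – formal structure without meaning – is the same.

import random
from collections import defaultdict

# A toy next-word generator: pure statistics over a tiny corpus, zero understanding.
corpus = "the cat sat on the mat and the dog slept on the mat".split()
successors = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current].append(nxt)

word, output = "the", ["the"]
for _ in range(6):
    word = random.choice(successors[word])  # a statistically plausible continuation
    output.append(word)
print(" ".join(output))  # fluent-looking, meaning-free text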
Their abilities are extraordinary, as even the most sceptical must admit. I asked ChatGPT to summarise The Divine Comedy in 50 words; the result is shown in Figure 1. One may criticize the summary because it is longer than 50 words, and because The Divine Comedy is not an epic poem – although there is a debate on this topic on the Internet, hence the ChatGPT summary – but rather a tragedy, as Dante himself suggested. That said, the summary is not bad, and certainly better than one produced by many students.
The sensible reaction is not to ban ChatGPT, but to teach how to use the right prompts (the question or request that generates the text, see the first line of my request), check the result, know what to correct in the text produced by ChatGPT, discover that there is a debate on which literary genre best applies to The Divine Comedy, and in the meantime, in doing all this, learn many things not only about the software but above all about The Divine Comedy itself. As I used to teach my students at Oxford in the 1990s, one of the best exercises for really mastering a text is to take it
and try to improve its translation into English (thus one learns to check the original);
clarify the less clear passages with a more accessible paraphrase (thus one sees if
one has really understood the text); try to criticise or refine the arguments,
modifying or strengthening them (thus one realizes that others have tried to do the
same, and that is not so easy); and while doing all this learn the nature, internal
structure, dynamics and mechanisms of the content on which one is working. Or,
to change the example, one really knows a topic not when one knows how to write
a Wikipedia entry about it – this can be done by ChatGPT increasingly well – but
when one knows how to correct it. One should use the software as a tool to get
one’s hands on the text/mechanism, and get them dirty even by messing it up, as
long as one masters the nature and the logic of the artefact called text.
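For those who prefer to practise this prompt-check-correct loop programmatically rather than in the chat interface, a minimal sketch might look as follows; it assumes the openai Python package, a valid API key (the one below is a placeholder), and a chat model name that is merely illustrative.

import openai  # assumes the openai Python package and a valid API key

openai.api_key = "sk-..."  # placeholder: substitute your own key

# Ask for a constrained summary, then push back on a contested point,
# exactly as one would when checking and correcting ChatGPT's output.
messages = [{"role": "user", "content": "Summarise The Divine Comedy in 50 words."}]
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
messages.append({"role": "assistant", "content": reply["choices"][0]["message"]["content"]})
messages.append({"role": "user", "content": "Is 'epic poem' the right genre? Dante himself suggested otherwise."})
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(reply["choices"][0]["message"]["content"])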
The limitations of these LLMs are now obvious even to the most enthusiastic. They are fragile, because when they do not work, they fail catastrophically. Google’s Bard demonstration failure, which cost Google over $100 billion in stock losses,2 is a good reminder that doing things with zero intelligence, whether digital or human, is sometimes very painful (Bing Chat also has its problems3). There is now a line of research that produces very sophisticated analyses of how, when, and why these LLMs, which seem incorrigible, have an unlimited number of Achilles heels (when one weakness is fixed, others surface elsewhere in the AI system). They make up texts, answers, or references when they do not know how to answer; make obvious factual mistakes; sometimes fail to make the most elementary logical inferences or struggle with simple mathematics;4 and they have linguistic blind spots where they get stuck (Floridi and Chiriatti 2020; Cobbe et al. 2021; Perez et al. 2022; Arkoudas 2023; Borji 2023; Christian 2023; Rumbelow 2023). A simple example in English illustrates well the limits of a mechanism that manages texts without understanding anything of their content. When asked – using the Saxon genitive – what is the name of Laura’s mother’s only daughter, the system can fail to answer “Laura”, missing the elementary inference that the only daughter of Laura’s mother must be Laura herself.
2 https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/
3 https://arstechnica-com.cdn.ampproject.org/c/s/arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/amp/
4 https://venturebeat.com/business/researchers-find-that-large-language-models-struggle-with-math/
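The inference that trips the system up is, of course, mechanically trivial; a few lines of Python (with made-up data, purely for illustration) spell out what is being missed:

# The Saxon-genitive puzzle as a mechanical inference (illustrative data only).
daughters = {"Laura's mother": ["Laura"]}  # by hypothesis, Laura is the only daughter

def only_daughter(parent):
    ds = daughters[parent]
    assert len(ds) == 1, "not an only daughter"
    return ds[0]

# "What is the name of Laura's mother's only daughter?" -> Laura
print(only_daughter("Laura's mother"))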
Given the enormous successes and equally broad limitations, some have described LLMs as “stochastic parrots” that repeat the contents on which they have been trained without understanding anything (Bender et al. 2021). The analogy helps, but only partially, not only because parrots have an intelligence of their own that would be the envy of any AI but, above all, because LLMs synthesise texts in new ways, restructuring the contents on which they have been trained, not providing simple repetitions or juxtapositions. They look much more like the autocomplete function of a search engine. And in their capacity for synthesis, they approach those mediocre or lazy students who, to write a short essay, use a dozen relevant references suggested by the teacher, and by taking a little here and a little there, put together an eclectic text, coherent, but without having understood much or added anything. As a college tutor at Oxford, I corrected many of them every term. Such essays can now be produced much more quickly by ChatGPT.
A better analogy for LLMs comes from a classic of Italian literature, The Betrothed (Manzoni 2016). In a famous scene in which Renzo (one of the main characters) meets a lawyer, we read: “While the doctor [the lawyer] was uttering all these words, Renzo was looking at him with ecstatic attention, like a gullible fellow standing in the square, staring at the trickster [the giocatore di bussolotti], which, after stuffing tow and tow and tow into its mouth, takes out tape and tape and tape, which never ends [the word “nastro” should be translated more literally as “tape”, which reminds one of the endless tape of a Turing Machine]”. LLMs are like that trickster: they swallow the tow of the data on which they are trained and pull out the tape of the texts they generate. So, before trusting the “tape” of their information, it is good to pay close attention to how it was produced, why, and with what impact. And here we come to more interesting things.
The impact of LLMs and of the various AI systems that produce content of all kinds today will be enormous. Just think of DALL-E, which, as ChatGPT says (I asked it for a description), is an AI system developed by “OpenAI that generates original images starting from textual descriptions. It uses […] match input text, including captions, keywords, and even simple sentences. With DALL-E, users can enter a text description of the image they want, and the system will produce an image that matches the description.”
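For concreteness, here is a minimal sketch of how one can interact with DALL-E programmatically, using the openai Python package as available at the time of writing; the key and the prompt are placeholders of my own choosing.

import openai  # assumes the openai Python package and a valid API key

openai.api_key = "sk-..."  # placeholder: substitute your own key

# Enter a text description of the desired image; the system returns a matching image.
response = openai.Image.create(
    prompt="A conjurer pulling an endless paper tape out of his mouth, engraving style",
    n=1,
    size="512x512",
)
print(response["data"][0]["url"])  # URL of the generated image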
There are ethical and legal issues: just think of copyright and the reproduction rights linked to the data sources on which the AI in question is trained. The first lawsuits have already begun,5 and there have already been the first plagiarism scandals.6 There are human costs:
consider the use of contractors in Kenya, paid less than $2/hour to label harmful content to train ChatGPT; they were unable to access adequate mental health resources, and many have been left traumatized.7 There are human problems, like the effects of chatbots such as ChatGPT on communities and human connection,8 and there are security problems, like the algorithmic poisoning of the AI’s training data. Or think of the financial and environmental costs of these new systems (Cowls et al. 2021): is this type of innovation fair and sustainable? Then there are questions related to the best use of these systems in contexts such as customer service, or in the drafting of any text, including scientific articles or new legislation. Some jobs will disappear, others are already emerging, and many will change.
5 https://news.bloomberglaw.com/ip-law/first-ai-art-generator-lawsuits-threaten-future-of-emerging-tech
6 https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/
7 https://time.com/6247678/openai-chatgpt-kenya-workers/
8 https://ethicalreckoner.substack.com/p/er13-on-community-chatgpt-and-human
But above all, for a philosopher, there are many challenging questions: how different AI systems can interact in a seamless way, with LLMs acting as an AI2AI kind of bridge to make them interoperable, as a sort of “confederated AI”;9 the relationship between form and its
syntax, and content and its semantics; the nature of personalisation of content and
the fragmentation of shared experience (AI can easily produce a unique, single
novel on-demand, for a single reader, for example); the concept of interpretability,
and the value of the process and the context of the production of meaning; our
uniqueness and originality as producers of meaning and sense, and of new contents;
9 I owe this remark to Vincent Wang, who reminded me of two interesting examples: (1) having
ChatGPT and Wolfram Alpha talk to each other; ChatGPT outsources mathematics questions to
Wolfram Alpha, which has considerable ability by itself to parse mathematical questions in natural
language format, see https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/; (2) “Socratic Models” for multimodal grounding/reasoning, where the idea is to tag different forms of data, e.g. sounds and images, with text
descriptions so that an LLM can serve as “central processing” allowing different narrow AIs to talk to
each other. https://socraticmodels.github.io/.
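To make the “confederated AI” idea tangible, here is a minimal sketch of the bridge architecture; query_llm and query_math_solver are hypothetical stand-ins (not real APIs) for a chat LLM and a tool such as Wolfram Alpha, and the routing rule is deliberately crude.

import re

def query_llm(prompt):
    # Hypothetical stand-in for a call to a chat LLM.
    return "LLM answer to: " + prompt

def query_math_solver(expression):
    # Hypothetical stand-in for a tool such as Wolfram Alpha.
    return str(eval(expression, {"__builtins__": {}}))

def answer(question):
    # Crude routing: pure arithmetic goes to the solver, everything else to the LLM.
    if re.fullmatch(r"[\d\s+\-*/().]+", question):
        return query_math_solver(question)
    return query_llm(question)

print(answer("12 * (3 + 4)"))      # -> 84, via the solver
print(answer("Summarise Dante."))  # -> handled by the LLM stand-in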
and the power of questions, since, as in Orwell’s 1984, whoever controls the questions controls the answers, and whoever controls the answers controls reality. Above all, there is this new form of agency to understand. As Vincent Wang reminded me, ChatGPT leapfrogged what previous systems could do, and can “learn” and improve its behaviour without having to be intelligent in doing so. It is a form of agency that is alien to any culture in any past, because humanity has always and everywhere seen this kind of agency – which is not that of a sea wave, which makes the difference but can make nothing but that difference – as linked to some form of intelligence.
We have gone from being in constant contact with animal agents and what
we believed to be spiritual agents (gods and forces of nature, angels and demons,
souls or ghosts, good and evil spirits) to having to understand, and learn to interact
with, artificial agents created by us, as new demiurges of such a form of agency.
We have decoupled the ability to act successfully from the need to be intelligent. The full significance of this new form of AI – understood as Agere sine Intelligere, with a bit of high school Latin – is yet to be understood.
Acknowledgements
I am grateful to Blanshard, Emmie Hine, Joshua Jaffe, Claudio Novelli, Mariarosaria Taddeo, and Vincent Wang.