09 - Philosophy of AI b


Intelligent Systems
3. Philosophy of Artificial Intelligence

Richard Watson
November 2022
Outline
• Why philosophy?
• Mind-body problem
• Problem of other minds
• Symbol-grounding problem
• Frame problem
• Strong vs. weak
• Neat vs. scruffy

2
Why Philosophy?
• The deepest problems in philosophy are implicated
in AI:

– How can matter be a mind?

– How can matter be meaningful?

– How can matter be purposive?

– How can matter be conscious?

3
Mind Over Matter
• One solution to these problems is to consider mind
to be immaterial – not made of matter.
• Superficially attractive: the mind is not like the
foot; it is more like the “soul” or the “self”.
• Also: mind has properties that seem absent from
mere matter – meaning, subjectivity, etc.
(René Descartes)
• But: if mind is immaterial, how can it exert an
influence on the body, which is made of matter?
• Materialists assert: brains cause minds. But how?

4
Problem of Other Minds
• I am convinced that I have a mind.
• How can I tell if other things are minded?
  – Look inside?
    • What to look for?
      – Neurons?
      – Complex machinery?
      – Arranged how?
  – Examine behaviour?
    • Is any behaviour conclusive?

5
Turing Test
• Turing opted for a behavioural test for intelligence…
• If C cannot tell which of A and B is the
computer, then A must be intelligent.
• Suppose A passes. Then we look inside:
  – What if there is a look-up table?
  – …or just a few wires?
• Suppose A fails. The barrier is removed:
  – What if A is a French person?
  – What if A is a tiny child?
6
What questions would you ask?
What would you look for in the answers?

7
What is good about the TT as a test of
intelligence?
What is bad about the TT as a test of
intelligence?

8
Symbol-Grounding Problem
• If intelligence involves thinking thoughts, what
makes a thought about a particular thing?
• What is “aboutness”? What makes mental states
meaningful?
• A thought of a cat means “cat” because:
  – …it looks like a cat?
  – …it was caused by a cat?
  – …it was brought about by a mechanism with the job
of creating thoughts about cats?

Stripy(Cat)
9
Physical Symbol System Hypothesis
“A physical symbol system has the necessary and
sufficient means for general intelligent action.”
– Allen Newell and Herbert Simon (1976)

Human intelligence is a kind of symbol manipulation.
Machines that manipulate symbols can be intelligent.

• The PSSH is central to classical AI.
• It loads symbols and formal symbol manipulation
with the burden of instantiating intelligence.
10
Could Symbols Really be Enough?
• Classical AI takes thoughts to be like sentences:
  – The cat is on the mat = On(Cat, Mat)
  – Shape(Mat, Flat), Colour(Mat, Brown), Tastes(Mat, Bad)
• Thought is manipulation of such symbolic
expressions. But:
  1. Gödel placed formal limits on this kind of system.
  2. Wittgenstein argued that concepts such as Game
or even Chair are not easily captured by sets of
propositions.

11
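The classical picture above can be made concrete with a short sketch: propositions like On(Cat, Mat) encoded as plain data, and "thinking" as purely formal operations over them. A minimal illustration in Python (the knowledge base, predicates, and function names are invented for this example, not from any particular AI system):

```python
# Represent each proposition as a (predicate, arg1, arg2) tuple,
# and the agent's "beliefs" as a set of such sentences.
kb = {
    ("On", "Cat", "Mat"),
    ("Shape", "Mat", "Flat"),
    ("Colour", "Mat", "Brown"),
    ("Tastes", "Mat", "Bad"),
}

def holds(kb, proposition):
    # Purely formal check: symbol matching, with no grounding.
    return proposition in kb

def things_on(kb, surface):
    # Derive answers by manipulating symbols syntactically.
    return {x for (pred, x, y) in kb if pred == "On" and y == surface}

print(holds(kb, ("On", "Cat", "Mat")))  # True
print(things_on(kb, "Mat"))             # {'Cat'}
```

Nothing in the program connects "Cat" to any actual cat; the symbols are meaningful only to us, the observers – which is exactly the worry the symbol-grounding problem raises.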
Searle’s Chinese Room
• John Searle’s Chinese Room argument uses the symbol-
grounding problem to challenge AI.
• He imagines being part of a room that
can pass the Turing Test – in Chinese.
• But it/he doesn’t understand the content of the conversation.
• The room’s syntax captures the rules of the conversation, but the
room doesn’t have the right semantics: the right meaning.
• Searle anticipates many replies to his argument.
• R&N (Russell & Norvig) appear to be satisfied by the Systems Reply…

12
Grounding Symbols
• Two possibilities for grounding symbols:
  1. An interpreter or observer does it
  2. The system does it itself
• Most AI systems take option 1. Real AI must take option 2.
• The “robot reply” appeals to a naïve version of option 2.
• It attempts to connect the formal, logico-syntactic
operations inside the room to the real world outside.

Meaningful symbol use is grounded by action in the world.
(cf. Stevan Harnad, ECS)

13
The Frame Problem
• Broad sense: the problem of knowing which facts
about the world have changed as a consequence of
an action or event.
• Narrow sense: representing all the propositions that are
unaffected…
  – assume there are no ‘side effects’…
  – then add exceptions…?
  – see non-monotonic logic
  – there are ways to solve this

14
• end

15
Imagine being the designer of a robot that has to carry out an
everyday task, such as making a cup of tea, and that works with
explicitly stored, sentence-like representations of the world.

Now, suppose the robot has to take a tea-cup from the cupboard. The present
location of the cup is represented as a sentence in its database of facts
alongside those representing innumerable other features of the ongoing
situation, such as the ambient temperature, the configuration of its arms, the
current date, the colour of the tea-pot, and so on. Having grasped the cup and
withdrawn it from the cupboard, the robot needs to update this database. The
location of the cup has clearly changed, so that's one fact that demands
revision. But which other sentences require modification? The ambient
temperature is unaffected. The location of the tea-pot is unaffected. But if it so
happens that a spoon was resting in the cup, then the spoon's new location,
inherited from its container, must also be updated.

How could the robot limit the scope of the propositions it must reconsider in
the light of its actions? In a sufficiently simple robot, this doesn't seem like
much of a problem. Surely the robot can simply examine its entire database of
propositions one-by-one and work out which require modification. But if we
imagine that our robot has near human-level intelligence, and is therefore
burdened with an enormous database of facts to examine every time it so
much as spins a motor, such a strategy starts to look computationally
intractable.
16
http://plato.stanford.edu/entries/frame-problem/
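The tea-cup scenario above can be sketched in code. A minimal illustration, assuming a toy fact database (the facts, predicates, and `move` function are all invented for this sketch, not taken from any real planning system):

```python
# A toy fact database for the tea-making robot. Facts are
# (predicate, subject, value) tuples.
facts = {
    ("location", "cup", "cupboard"),
    ("location", "spoon", "cupboard"),  # the spoon rests in the cup
    ("location", "teapot", "shelf"),
    ("in", "spoon", "cup"),
    ("colour", "teapot", "brown"),
    ("temperature", "room", "21C"),
}

def relocate(facts, obj, dest):
    # Replace obj's location fact with the new one.
    facts = {f for f in facts if not (f[0] == "location" and f[1] == obj)}
    facts.add(("location", obj, dest))
    return facts

def move(facts, obj, dest):
    # The frame problem in miniature: after one action we must decide
    # which of ALL the other facts to re-examine. Here one hand-written
    # rule (containment) covers the side effects; a robot with
    # human-scale knowledge would face an enormous database.
    facts = relocate(facts, obj, dest)
    for pred, contained, container in list(facts):
        if pred == "in" and container == obj:
            facts = relocate(facts, contained, dest)  # inherited location
    return facts

facts = move(facts, "cup", "hand")
print(("location", "cup", "hand") in facts)      # True
print(("location", "spoon", "hand") in facts)    # True: side effect
print(("colour", "teapot", "brown") in facts)    # True: unaffected
```

The sketch hides the hard part: `move` only knows about containment because we told it. Scaling this to every possible side effect of every possible action is precisely where the computational intractability described above sets in.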
Background Reading…
AIAMA2e, Chapter 26 + cited references
Boden, M. (ed.) (1990). The Philosophy of Artificial Intelligence. OUP.
  Contributors: McCulloch & Pitts, Turing, Searle, Boden, Newell &
  Simon, Marr, Dennett, Sloman, Rumelhart, Clark, Dreyfus,
  Churchland, Cussins…
Boden, M. (ed.) (1996). The Philosophy of Artificial Life. OUP.
  Contributors: Langton, Boden, Ray, Maynard Smith, McFarland,
  Wheeler, Kirsh, Clark, Godfrey-Smith, Bedau, Sober, Pattee…
Dennett, D. C. (1991). Consciousness Explained. Penguin Press.
Haugeland, J. (ed.) (1997). Mind Design II. MIT Press.
  Contributors: Turing, Haugeland, Dennett, Newell & Simon,
  Minsky, Dreyfus, Searle, Rumelhart, Churchland, Fodor, Clark,
  Brooks, van Gelder…
Steels, L. & Brooks, R. A. (eds.) (1994). The Artificial Life Route to
Artificial Intelligence: Building Embodied Situated Agents. Lawrence
Erlbaum. Contributors: Varela, Brooks, Steels, Smithers, Mataric…
Online Resources…
• Wikipedia entries:
  – Philosophy of AI
  – Problem of Other Minds
  – Frame Problem
  – Swamp Man
  – Symbol Grounding
  – Qualia
  – Chinese Room
  – Mind-Body Problem
  – Turing Test
• Great Debate: Can Computers Think?
