Talk:Chinese room
Philosophy: Logic / Mind / Analytic / Contemporary (B-class, High-importance)
To-do list for Chinese room:
This page has archives. Sections older than 90 days may be automatically archived by Lowercase sigmabot III when more than 4 sections are present.
Need to say that some people think the Chinese room is not Turing complete
See the discussion above, at #Unable to learn. To recap: I am pretty confident that Searle would say that the argument only makes sense if the room is Turing complete. But we need to research this and nail it down, because there are replies in the literature that assume it is not. I think this belongs in a footnote to the section Computers vs. machines vs. brains. ---- CharlesGillingham (talk) 21:37, 10 February 2011 (UTC)
- I found one that is unambiguous: Hanoch Ben-Yami (1993). "A Note on the Chinese Room". Synthese 95 (2): 169–72: "such a room is impossible: the man won't be able to respond correctly to questions like 'What is the time?'" Ben-Yami's critique explicitly assumes that the rule-book is fixed, i.e. there is no eraser, i.e. the room is not Turing complete. ---- CharlesGillingham (talk) 18:30, 11 December 2011 (UTC)
- The Chinese Room can't possibly be Turing complete, because Turing completeness requires an unlimited tape. The Chinese Room would be a finite-state machine, not a Turing machine. Looie496 (talk) 18:56, 11 December 2011 (UTC)
- Ah, yikes, yes of course that's true. Note however that no computer in the real world has an infinite amount of tape either, so Turing complete machines can not exist. The article sneaks past this point. We could add a few more clauses of the form "given enough memory and time" to the article where this point is a problem. Or we could dive in with a paragraph dealing with Infinitely Large machines vs. Arbitrarily Large machines. Is this a hair worth splitting? ---- CharlesGillingham (talk) 19:13, 11 December 2011 (UTC)
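(Illustrative aside for readers trained in computer science: the following minimal Python sketch shows the "arbitrarily large vs. infinitely large" distinction raised above. The tape is finite at every moment but grows on demand, which is all the formalism needs; the machine, its rule table, and the unary-increment example are invented here for illustration, not drawn from Searle or any source cited in this discussion.)

def run_turing_machine(rules, tape, state="start", head=0, blank="_", max_steps=10_000):
    # The tape is finite at every moment but grows whenever the head walks off
    # either end: "arbitrarily large", not "infinitely large".
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        if head < 0:             # grow on demand to the left
            tape.insert(0, blank)
            head = 0
        elif head == len(tape):  # grow on demand to the right
            tape.append(blank)
    raise RuntimeError("step limit reached without halting")

# Example rule table (invented): append one '1' to a block of 1s (unary increment).
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(rules, "111_"))  # -> 1111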
Looie: I think I'm starting to agree with you that the "Turing completeness" paragraph needs to be softened and tied closer to Searle's published comments. Ideally, I'd like to be able to quote Searle that the Chinese room "implements" a Turing machine or that the man in the room is "acting as" a Turing machine or something to that effect. Then we can make the point about Turing completeness in a footnote or as part of the discussion in "Redesigning the room", where it's actually relevant.
The only thing I have at this point is this: "Now when I first read [Fodor's criticism], I really didn't know whether to laugh or cry for Alan Turing, because, remember, this is Turing's definition that we're using here and what Fodor in effect is saying is that Turing doesn't know what a Turing machine is, and that's a very nervy thing to say."[1] This isn't exactly on point, and worse, it isn't even published; it's just a tape recording of something he said. ---- CharlesGillingham (talk) 22:29, 25 February 2012 (UTC)
- Forgive me, but this discussion seems to me to entirely miss the point, i.e. that the Chinese Room is a thought experiment. The practical issues of a real experiment only impact a thought experiment if they somehow change some significant element of it, not if they merely relate to practicalities. If it is factually incorrect to suggest that the Turing machine test fails, then reference to the Turing test should be eliminated, as it is insignificant to the thought experiment itself, irrespective of whether or not Searle included it in his description. A thought experiment, at every step, includes words such as "theoretically speaking". In the Chinese room it is irrelevant whether the person does or does not have an eraser, as the point is simply that a human could, "theoretically speaking", execute the instructions that a computer would by following the same program. It would simply take the human being far longer to do (so long that the person may die before they have sufficient time to answer even one question, but again this is irrelevant to the experiment). LookingGlass (talk) 13:29, 4 December 2012 (UTC)
- I don't think I fully understand your note, LookingGlass, so I don't want to put words in your mouth, but you appear to be conflating the "Turing test" with "Turing machines". They are completely different things. It is unfortunate, I guess, that they are both relevant to this subject, because it is easy to make this mistake. There is no such thing as "Turing machine test" as far as I am aware. The point in this section is whether the man-room system is a Turing-complete system, ie equivalent to a Turing machine. This section has nothing to do with the Turing test, as far as I can tell. The eraser is necessary for the system to be Turing-complete. Whether the man has an eraser is hugely important in some sense at least—if the man did not have an eraser then I don't think any computationalist would dare assert that the system understands anything let alone the Chinese language. :) This also helps us understand how wild and out in the weeds the CRA claim is, since as far as I'm aware we have no reason or even hint to imagine that there is some kind of computer more powerful than a Turing-equivalent machine, yet that is what Searle claims the brain is. To the question at hand, while I'm not aware of Searle ever mentioning Turing machines by name, I had always interpreted the language that he does use—eg "formal symbol manipulation" etc—as referring to "what computers do" ie Turing machines. That is after all the context in which Searle is writing, isn't it? I agree it would be nice to have a source in the article to back this up if there is one, but is there really much question about this? I'm not so sure the Ben-Yami paper is really saying what we're saying it's saying; it doesn't even mention Turing machines, for example. ErikHaugen (talk | contribs) 17:46, 4 December 2012 (UTC)
- My apologies ErikHaugen. Please read my remarks with the word "machine" deleted. As far as I can determine, Searle's experiment is unaffected by the details of the situation. The point at hand is that a computer program can, in theory, be executed by a human being. LookingGlass (talk) 12:11, 6 December 2012 (UTC)
- The point is relevant to the "replies" which this article categorizes as "redesigning the room". There are many such criticisms. Searle argues that any argument which requires Searle to change the room is actually a refutation of strong AI, for exactly the reason you state: it should be obvious that anyone can execute any program by hand, even a program which Strong AI claims is "conscious" or "sentient" or whatever. If, for some reason, you can't execute the program by hand and get consciousness, then computation is not sufficient for consciousness, and therefore strong AI is false.
- "Turing completeness" is a way to say this that makes sense to people who are trained in computer science. ---- CharlesGillingham (talk) 06:35, 8 January 2013 (UTC)
- Many thanks Charles. Searle's argument seems beautifully elegant to me. LookingGlass (talk) 20:56, 8 January 2013 (UTC)
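(Illustrative aside: to make the "Turing completeness" framing above concrete, here is a small Python sketch of why the eraser matters. A fixed rule-book is a pure lookup table, so it cannot answer anything that depends on history, which is the spirit of Ben-Yami's "What is the time?" objection; add a writable scratch pad and the man can carry state forward by hand. The rule-book, the scratch pad, and the questions are all invented for the example, not taken from Searle or Ben-Yami.)

# Fixed rule-book, no eraser: a pure lookup. The same question always gets the
# same answer, so a history-dependent question fails.
FIXED_RULEBOOK = {"你好": "你好!", "再见": "再见!"}

def room_without_eraser(question):
    # No writable state anywhere in the room.
    return FIXED_RULEBOOK.get(question, "???")

def room_with_eraser(question, scratch_pad):
    # The scratch pad is the "eraser": mutable state the man updates by hand.
    if question == "我刚才说了什么?":  # "What did I just say?"
        answer = scratch_pad.get("last", "???")
        scratch_pad["last"] = question
        return answer
    scratch_pad["last"] = question
    return FIXED_RULEBOOK.get(question, "???")

pad = {}
print(room_without_eraser("我刚才说了什么?"))    # ??? -- no fixed entry can depend on history
print(room_with_eraser("你好", pad))             # 你好!
print(room_with_eraser("我刚才说了什么?", pad))  # 你好 -- read back off the scratch pad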
Need to say that some people think that Searle is saying there are limits to how intelligently computers can behave
Similarly, some people also lump Searle in with Dreyfus, Penrose and others who have said that there are limits to what AI can achieve. This also will require some research, because Searle is rarely crystal clear about this. This belongs in a footnote to the section Strong AI vs. AI research. ---- CharlesGillingham (talk) 21:37, 10 February 2011 (UTC)
- He seems clear enough to me: he doesn't claim that there are limits on computer behavior, only that there are limits on what can be inferred from that behavior. Looie496 (talk) 23:24, 5 April 2011 (UTC)
- Yes, I think so too, but I have a strong feeling that there are some people who have written entire papers that were motivated by the assumption that Searle was saying that AI would never succeed in creating "human level intelligence". I think these papers are misguided, as I take it you do. Nevertheless, I think they exist, so we might want to mention them. ---- CharlesGillingham (talk) 08:19, 6 April 2011 (UTC)
- Is this the same as asking if computers can understand, or that there are limits to their understanding? What does it mean to limit intelligence, or intelligent behaviour? Myrvin (talk) 10:16, 6 April 2011 (UTC)
- There is, e.g., this paraphrase of Searle: "Adding a few lines of code cannot give intelligence to an unintelligent system. Therefore, we cannot hope to program a computer to exhibit understanding." Arbib & Hesse, The construction of reality p. 29. Myrvin (talk) 13:19, 6 April 2011 (UTC)
- I think that, even in this quote, Searle still holds that there is a distinction between "real" intelligence and "simulated" intelligence. He accepts that "simulated" intelligence is possible. So the article always needs to make a clear distinction between intelligent behavior (which Searle thinks is possible) and "real" intelligence and understanding (which he does not think is possible).
- The article covers this interpretation. The source is Russell and Norvig, the leading AI textbook.
- What the article doesn't have is a source that disagrees with this interpretation: i.e. a source that thinks that Searle is saying there are limits to how much simulated intelligent behavior that a machine can demonstrate. I don't have this source, but I'm pretty sure it exists somewhere. ---- CharlesGillingham (talk) 17:32, 6 April 2011 (UTC)
- Oops! I responded thinking that the quote came from Searle. Sorry if that was confusing. Perhaps Arbib & Hesse are the source I was looking for. Do they believe that Searle is saying there are limits to how intelligent a machine can behave? ---- CharlesGillingham (talk) 07:34, 7 April 2011 (UTC)
- See what you think CG. It's in Google books at: [2]. Myrvin (talk) 08:27, 7 April 2011 (UTC)
- Reading that quote one more time, I think that A&H do disagree with the article. They say that, according to Searle, a computer can't "exhibit understanding". Russell and Norvig disagree (I think): they say that, according to Searle, even if a computer can "exhibit" understanding, this doesn't mean that it actually understands.
- With this issue, it's really difficult to tell the difference between these two positions from out-of-context quotes. If the writer isn't fully cognizant of the issue, they will tend to write sentences that can be read either way. ---- CharlesGillingham (talk) 19:27, 12 April 2011 (UTC)
"The Real Thing"
I removed a reference to "The Real Thing" in the "Strong AI vs. AI research" section and replaced it with "human-like cognition". The original phrase was intended to refer to cognition, but in the context of the sentence it could easily be misconstrued as referring to intelligence. I felt the distinction was worth clarifying because in Searle's conception, computers do in fact have "real" intelligence under a variety of understandings of the term, but lack the capacity for awareness of the use of that intelligence. Jaydubya93 (talk) 13:44, 16 January 2014 (UTC)
- I agree with you, however, I think "a simulation of human cognition" could be construed as "human-like cognition" (as opposed to, say, Rodney Brook's "bug-like cognition"). In this reading, Searle's argument presumes that machines with "human-like cognition" are possible, so the sentence as you changed it doesn't quite work either. I changed it to explicitly mention "mind" and "consciousness" (i.e. "awareness", as you said above) so that the issue is clearer. ---- CharlesGillingham (talk) 19:59, 18 January 2014 (UTC)
- No. Searle's argument does not presume "...that machines with *human-like cognition* are possible"; it presumes that machines with "human-like cognition" WILL BE possible, if the scientific mentality changes. Luizpuodzius (talk) 19:33, 24 December 2014 (UTC)
Odd placement of citations and notes
Why does this article have citations at the beginning of paragraphs instead of at the ends? Is this some WP innovation? Myrvin (talk) 06:46, 6 July 2015 (UTC)
- The footnotes you're referring to are actually supposed to be attached to the bold title, but changes in formatting over the years have moved them down a line and to the front of the paragraph. I should move them after the first sentence. ---- CharlesGillingham (talk) 04:52, 10 September 2015 (UTC)
- Fixed ---- CharlesGillingham (talk) 05:05, 10 September 2015 (UTC)
What am I missing?
By Searle's logic, there is no way to tell if the person inside the room is actually understanding Chinese or just following a set of rules without really understanding them. Therefore there is no way to tell whether a human actually *understands* Chinese either, right? — Preceding unsigned comment added by 195.82.64.222 (talk) 09:24, 2 September 2015 (UTC)
- No, Searle claimed that it is intuitively obvious that the person in the room does not understand Chinese. He claimed that it is intuitively obvious that a person speaking Chinese in the usual way is doing something different from the person in the room. Looie496 (talk) 15:57, 2 September 2015 (UTC)
- Bravo. Yep, that seems to be the natural conclusion from Searle's writings. I don't think you're missing anything. I think the argument is more famous for its influence, rather than for any validity. --C S (talk) 02:55, 9 September 2015 (UTC)
- Yes you are right. This is known as the "other minds" reply. Searle thinks that this reply is a cop-out, because no one is asking whether humans have minds. We know we have minds. Searle knows that he himself has a mind, and he is satisfied that human minds are created by the biological machinery of neurons. This position (known as mechanism) is an adequate scientific explanation of human minds and an honest person (who isn't too religious) shouldn't have a problem with it.
- But this explanation doesn't apply to the room: there is no similar or equivalent machinery in the room. Searle doesn't see anything biological or mechanical going on in there. Just a guy following rules. There's no machine in the room and Searle is certain that the mind is a machine. Searle sees Strong AI as arguing that you can (somehow) have a mind without a machine -- that the mind is somehow NOT a machine. This runs against 500 years of philosophy and science that strongly suggest the mind is a machine. Searle doesn't buy it; he thinks the idea of a mind without the machine is speculation, wishful thinking or flat-out nonsense. The idea that the mind is composed of "computation" or "information" is (to Searle) as silly as the idea that the mind is composed of "soul" or "spirit". ---- CharlesGillingham (talk) 04:23, 10 September 2015 (UTC)
- I don't think that's right. Searle -- at least, the Searle who wrote this stuff several decades ago -- argued that it takes more than a machine to have a mind: it takes a machine with certain (not very clearly specified) causal powers. To Searle the difference between the brain and an arbitrary computer is that the brain is embodied, giving it a specific set of causal powers. Looie496 (talk) 13:03, 10 September 2015 (UTC)
- On the contrary, Searle makes it very clear that the mind is just a machine, nothing more. He makes that point in the original paper.
- Other than that, I'm not sure where you disagree. Searle's "causal powers" are biological and chemical -- they are mechanical, and, as you say "embodied" in that they are made by real physical stuff. They are "causal" in the ordinary, real world way that physical things have cause and effect. ---- CharlesGillingham (talk) 04:32, 16 September 2015 (UTC)
- I don't think Searle says per se that "the mind is just a machine, nothing more". Here is a short passage from near the end of the paper:
- "Could a machine think?"
- The answer is, obviously, yes. We are precisely such machines.
- "Yes, but could an artifact, a man-made machine, think?"
- Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question.
- "OK, but could a digital computer think?"
- If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.
- "But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?"
- This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
Where I agree and disagree with Searle
I agree with everything John Searle says in his famous Chinese Room argument article, but I disagree with him when he says that brains are machines and are digital computers.
I draw the line by saying that machines and digital computers, when debating these sorts of matters, have to be considered as made by man.
However, let's take either the F1FO ATP synthase or the flagellum as a philosophical example. Bacteria and many other kinds of cells use flagella to move around. A rotary motor at the base of the flagellum produces the torque that physically rotates it, and the F1FO ATP synthase is another rotary molecular machine of the same general kind. For decades scientists have likened both the flagellum and the ATP synthase to machines because they are composed of parts that physically move with respect to each other, with physical rotation as the end result. Both arose along with the earliest bacteria, roughly 4 billion years ago; Homo sapiens did not evolve until 200,000 years ago.
I personally agree with Searle (and disagree with John McCarthy) that thermostats cannot have beliefs, cannot have consciousness, cannot have intentionality, and cannot understand anything, because thermostats are machines, and machines are not living things. I believe that only living things can have those abilities, and I believe that machines will never have those abilities.
The exception I make is that someday machines and living matter may become one through science and engineering. In those instances the distinction between machines and living things will be much harder to pinpoint than it is today.
The F1FO ATP synthase and the flagellum are not living things; they are components of living things. Ever since scientists began studying them, they have been trying to apply that knowledge to creating nano-scale machines. For several years, synthetic biologists have been able to synthesize DNA from scratch to create new species of bacteria. At some point in the future we will have to ask ourselves whether we are thus creating machines. Synthetic biologists, and I, would say that synthetic biological creations are indeed living things, so eventually we may have to admit that such future creations are living machines. At that point, what falls into the category of 'machine' will be far more subjective than it is today.
For many years scientists have been creating systems consisting of wet biological nervous tissue connected to computerized inputs and outputs. As such technologies develop, we will likewise have to ask whether these systems are living things or machines, and the distinction will become less and less clear.
These are only two examples of cybernetic organisms, or cyborgs. Synthetic biology will eventually be applied to the latter example as well. Our use of the word 'synthetic', not to mention the scientific field itself, will eventually blur our definition of the word 'machine'.
Now let's revisit my comment, "I draw the line by saying that machines and digital computers, when debating these sorts of matters, have to be considered as made by man."
For many decades biologists have genetically engineered microorganisms to produce proteins of our choice. For almost a decade, synthetic biologists have been able to create new bacterial species with the help of naturally occurring bacteria, so for at least that long, what falls into the category of man-made has not been clear, and this trend will continue.
Natural computing, which uses biological molecules, is a science that also pertains to these questions. DNA computing is one example of natural computing.
Also in the future scientists will be able to create new proteins composed of both naturally occurring and man-made amino acids.
Several years ago the scientist Miguel Nicolelis connected the brains of two rats via wires, and trained them to entrain each other to improve at a left-versus-right lever-pushing task -- the information traveled from brain to brain via the wire. Similar experiments have been done on rhesus monkeys.
With the press of a button, scientists have recently been able to initiate and terminate flight in a beetle by stimulating certain ganglia with wirelessly controlled electrodes. They are currently trying to gain the ability to steer its flight in three dimensions.