I've always thought that Searle's argument relied on misleading and/or tautological definitions, and I liked Nils Nilsson's rebuttal:
"For the purposes that Searle has in mind, it is difficult to maintain a useful distinction between programs that multiply and programs that simulate programs that multiply. If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I’m willing to credit him with real thought.."
I also find Searle's rebuttal to the systems reply to be unconvincing.
"If he doesn't understand, then there is no way the system could understand because the system is just a part of him."
Perhaps the overall argument is both more and less convincing in the age of LLMs, which are very good at translation and other tasks but still make seemingly obvious mistakes. I wonder (though I doubt it) whether Searle might have been convinced if, by following the instructions, the operator of the room ended up creating, among other recognizable and tangible artifacts, an accurate scale model of the city of Beijing.
In any case, I'm sad that Prof. Searle is no longer with us to argue with - it sounds like he may have been mistreated by multiple parties including his university.
It all started with ELIZA. Although Weizenbaum, the author of the chatbot, always emphasized that the program performed a rather simple manipulation of its input, mostly pattern matching and rephrasing, the popular press completely overhyped its capabilities, with some serious articles debating whether it would be a good substitute for psychiatrists, etc.
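For context, the mechanism really was about that shallow. Here is a minimal sketch of the pattern-match-and-rephrase idea; the rules are invented for illustration and are not Weizenbaum's actual script:

    import re

    # ELIZA-style rules: match a pattern, echo part of the input back
    # inside a canned template. These rules are made-up examples.
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
    ]

    def respond(line: str) -> str:
        for pattern, template in RULES:
            m = pattern.search(line)
            if m:
                return template.format(m.group(1).rstrip(".!?"))
        return "Please go on."  # default when nothing matches

    print(respond("I feel anxious about the exam"))
    # -> Why do you feel anxious about the exam?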
So, many people, including Searle, wanted to push back on reading too much into what the program was doing. This was a completely justified reaction -- ELIZA simply lacked the complexity which is presumably required to implement anything resembling flexible understanding of conversation.
That was the setting. In his original (in)famous article, Searle started with a great question, which went something like: "What is required for a machine to understand anything?"
Unfortunately, instead of trying to sketch out what might be required for understanding, and what kinds of machines would have such facilities (which of course is very hard even now), he went into dazzling the readers with a "shocking" but a rather irrelevant story. This is how stage magicians operate -- they distract a member of the audience with some glaring nonsense, while stuffing their pockets with pigeons and handkerchiefs. That is what Searle did in his article -- "if a Turing Machine were implemented by a living person, the person would not understand a bit of the program that they were running! Oh my God! So shocking!" And yet this distracted just about everyone from the original question. Even now philosophers have two hundred different types of answers to Searle's article!
One could and should have explained that ELIZA could not "think" or "understand" -- which was Searle's original motivation -- but this of course doesn't imply any fundamental principle that no machine could ever think or understand. After all, many people agree that biological brains, however complex, are nevertheless "machines" governed by ordinary physics.
Searle himself was rather evasive regarding what exactly he wanted to say in this regard -- from what I understand, his position has evolved considerably over the years in response to criticism, but he avoided stating this clearly. In later years he was willing to admit that brains were machines, and that such machines could think and understand, but somehow he still believed that man-made computers could never implement a virtual brain.
RIP John Searle, and thanks for all the fish.
Long read, I'm sure it's fascinating, will get through it in time
Just Googling the author: sadly, he died last month.
It's the responses & counter-responses that are long. The actual article by Searle is only a few pages.
Meaning bootstrapped consciousness -- just ask DNA and RNA.
I don't get any of these anthropocentric arguments. Meaning predates humanity and consciousness; that's what DNA is. Meaning primitives are just state changes, the same thing as physical primitives.
Syntactic meaning exists even without an interpreter, in the same way physical "rock" structures existed before there were observers; it just picks up causal leverage when there is one.
Only a stateless universe would have no meaning. Nothing doesn't exist, meaninglessness doesn't exist; these are just abstractions we've invented.
Call it the logos if that's what you need, or call it field perturbations: reality has just been traveling up the meaning-complexity chain, but complex meaning is just a structural arrangement of meaning simples.
Stars emit photons; humans emit complex meaning. Maybe we'll be part of the causal chain that solves entropy; until then, we are the only empirically observed, random-walk write heads of maximally complex meaning in the universe.
We are super rare and special as far as we've empirically observed, but that doesn't mean we get our own weird metaphysical (if that even exists) carve-out.
There's much more meaning than can be loaded into statements, thoughts, etc. And conscious will is a post-hoc after effect.
Any computer has far less access to the meaning load we experience, since we don't compute thoughts: thoughts aren't about things, there is no content to thoughts, and there are no references, representations, symbols, grammars, or words in brains.
Searle was only at the beginning of this refutation of computers; we're much further along now.
It's just actions, syntax and space. Meaning is both an illusion and fantastically exponential. That contradiction has to be continually made correlational.
Meaning is an illusion? That's absurdly wrong; it's a performative contradiction to even say such a thing. You might not like semantic meaning, but it, like information, physically exists. Even if you're a solipsist you can't deny state change, and state change is a meaning primitive; meaning primitives are one thing that must exist.
This isn't woo, just empirical observation, and no one is capable of credibly denying state change.
tl;dr:
If a computer could have an intelligent conversation, then a person could manually execute the same program to the same effect, and since that person could do so without understanding the conversation, computers aren't sentient.
Analogously, some day I might be on life support. The life support machines won't understand what I'm saying. Therefore I won't mean it.
Wow. That was remarkably far off base.
A ridiculous argument. Turing machines don't know anything about the program they are executing. In fact, Turing machines don't "know" anything. Turing machines don't know how to fly a plane, translate a language, or play chess. The program does. And Searle puts the man in the room in the place of the Turing machine.
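To make the machine/program split concrete, here's a minimal sketch; the rule table is an invented toy, nothing from Searle. The executor loop below is completely generic, and everything task-specific lives in the table it is handed:

    # Generic Turing-machine-style executor: it "knows" nothing about the
    # task; all task knowledge is in the rule table it is given.
    # Toy table (invented): append one more 1 to a unary string, then halt.
    rules = {
        ("scan", "1"): ("1", +1, "scan"),   # skip over existing 1s
        ("scan", "_"): ("1", +1, "halt"),   # write a final 1 and stop
    }

    def run(tape: list[str], state: str = "scan", pos: int = 0) -> str:
        while state != "halt":
            symbol = tape[pos] if pos < len(tape) else "_"
            write, move, state = rules[(state, symbol)]
            if pos < len(tape):
                tape[pos] = write
            else:
                tape.append(write)
            pos += move
        return "".join(tape)

    print(run(list("111")))  # -> 1111; the loop never "knew" it was incrementing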
So what, in the analogy, would be the program? Surely it's not the printed rules, so I think you're making the "systems reply" - that the program that knows Chinese is some sort of metaphysical "system" that arises from the man using the rules - which is the first thing Searle tries to rebut.
> let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
In other words, even if you put the man in place of everything, there's still a gap between mechanically manipulating symbols and actual understanding.
People are doing things they personally do not understand, just by following the rules, all the time. One does not need to understand why celestial navigation works in order to do it, for example. Heck, most kids can learn arithmetic (and perform it in their heads) without being able to explain why it works, and many (including their teachers, sometimes) never achieve that understanding. Searle’s failure to recognize this very real possibility amounts to tacit question-begging.
Only because "actual understanding" is ambiguously defined. Meaning is an association of A with B. Our brains have a large associative array in which the symbol for the sound "dog" is associated with the image of "dog", which is associated with the behavior of "dog", which is associated with the feel of "dog", ... We associate the symbols for the word "hamburger" with the symbols for the taste of "hamburger", with ... We understand something when our past associations match current inputs and can predict future inputs.
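A crude sketch of that picture, with invented placeholder entries (not meant as a model of real cognition):

    # "Meaning as association": a symbol keyed to symbols from other
    # modalities. Entries are invented placeholders.
    associations = {
        "sound:dog": {"image:dog", "behavior:barks", "feel:fur"},
        "word:hamburger": {"taste:savory", "smell:grill", "image:hamburger"},
    }

    def match(symbol: str, current_inputs: set[str]) -> float:
        """Fraction of stored associations confirmed by current inputs."""
        stored = associations.get(symbol, set())
        return len(stored & current_inputs) / len(stored) if stored else 0.0

    print(match("sound:dog", {"image:dog", "feel:fur", "sound:meow"}))  # ~0.67

On that view, "understanding" would just be a high match score plus successful prediction of the next inputs.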
"Actual understanding" means you have a grounding for the word down to conscious experience and you have a sense of certainty about its associations. I don't understand "sweetness" because I competently use the word "sweet." I understand sweetness because I have a sense of it all the way down to the experience of sweetness AND the natural positive associations and feelings I have with it. There HAS to be some distinction between understanding all the way down to sensation and a competent or convincing deployment of that symbol without those sensations. If we think about how we "train" AI to "understand" sweetness, we're basically telling it when and when not to use that symbol in the context of other symbols (or visual inputs). We don't do this when we teach a child that word. The child has an inner experience he can associate with other tastes.
The irony here is that performing like an LLM is the very thing that Searle has the human operator do. If it is the sort of interaction that does not need intelligence, then no conclusion about the feasibility of AGI can be drawn from contemplating it. Searle’s arguments have been overtaken by technology.
Can you expand on this? The thought experiment is just about showing that there is more to having a mind than having a program. It’s not an argument about the capabilities of LLMs or AGI. Though it’s worth noting that behavioral criteria continue to lead people to overestimate the capabilities and promise of AI.
[flagged]
[flagged]
You can't attack others like this on HN, regardless of how wrong they are or you feel they are. It's not what this site is for, and destroys what it is for.
Btw, it's particularly important not to do this when your argument is the correct one, since if you happen to be right, you end up discrediting the truth by posting like this, and that ends up hurting everybody. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
> Gödel incompleteness
I agree with your comment, FWIW - I have no idea what OP is trying to demonstrate - but to suggest some context: Gödel incompleteness is commonly offered as a "proof" that computers can't be intelligent in the way humans can, because (very, very roughly) humans can "step out" of formal systems to escape the Gödel trap. Regardless of your feelings about AI, it's a silly argument; the popularizer of this line of thinking was probably Roger Penrose in "The Emperor's New Mind".
I haven't re-read Searle since college but as far as I recall he never brings it up.
What about Gödel incompleteness? Computers aren't formal systems. Turing machines have no notion of truth; their programs may. So a program can have M > N axioms, in which case one of the extra axioms (say the (N+1)-th) recognizes the truth of G ≡ ¬ Prov_S(⌜ G ⌝), because G was constructed to be true. Alternatively, construct a system that generates "truth" statements, subject to further verification. After all, some humans think that "Apollo never put men on the moon" is a true statement.
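For readers who don't know the notation, G is the usual Gödel sentence; a sketch of the standard textbook reasoning (not from this thread), assuming S is a consistent, effectively axiomatized theory containing enough arithmetic:

    % Diagonal lemma: construct G such that
    \[ S \vdash G \leftrightarrow \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner) \]
    % If S \vdash G, then S \vdash \mathrm{Prov}_S(\ulcorner G \urcorner)
    % (proofs are mechanically checkable), and by the equivalence also
    % S \vdash \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner) -- a contradiction.
    % So a consistent S does not prove G; hence \mathrm{Prov}_S(\ulcorner G \urcorner)
    % is false, and G is true in the standard model yet unprovable in S.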
As for intentionality, programs have intentionality.
Mostly agree, but the Chinese Room doesn’t prove anything about whether the algorithm understands Chinese. It’s a bait and switch.
The CRA proves that the algorithm by itself can never understand anything through symbol manipulation alone. Syntax, by itself, is insufficient to produce semantics.
It proves nothing, and has, in fact, been overtaken by events (see another of my replies [1]).
As for its alleged conclusion, I love David Chalmers’ parody “Cakes are crumbly, recipes are syntactic, syntax is not sufficient for crumbliness, so implementation of a recipe is not sufficient for making a cake.”
[1] https://news.ycombinator.com/item?id=45664226
Simulation of a recipe is not sufficient for crumbliness, and simulation is the only thing a computer can do at the end of the day. It can perform arithmetic operations & nothing else. If you know of a computer that can do more than boolean arithmetic, then I'd like to see that computer & its implementation.
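For what it's worth, "boolean arithmetic" really does cover ordinary arithmetic; a minimal sketch of integer addition built from nothing but AND, XOR, and shifts (pure symbol manipulation, non-negative integers assumed):

    # Addition from boolean operations only: XOR gives the carry-less sum,
    # AND plus a left shift propagates the carries.
    def add(a: int, b: int) -> int:
        while b:
            carry = a & b      # positions where both bits are 1
            a = a ^ b          # sum ignoring carries
            b = carry << 1     # carries move one position left
        return a

    print(add(19, 23))  # -> 42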