The Chinese Room
-
- Posts: 223
- Joined: June 9th, 2021, 12:39 am
The Chinese Room
This thought experiment, conceived by philosopher John Searle, is supposed to show that while advanced computers may appear to understand and converse in natural language, they are not capable of genuinely understanding language. This is because computers are strictly limited to the exchange of symbolic strings. The Chinese Room was meant to be an argument against artificial intelligence, but it rests on a rather simplistic view of current AI and where it's likely headed, including the advent of artificial general intelligence (AGI) and the potential for artificial consciousness.
That said, Searle is right in his suggestion that an AI has the potential to act and behave as if there were conscious awareness and understanding. This is problematic because it may convince us humans that true comprehension is going on where there is none. We should be careful, therefore, around seemingly "smart" machine minds. And so the question remains: Will philosophy unlock the puzzle that is artificial intelligence?
-
- Posts: 10339
- Joined: June 15th, 2011, 5:53 pm
Re: The Chinese Room
WanderingGaze22 wrote:Will philosophy unlock the puzzle that is artificial intelligence?
It's perfectly possible that not even the invention of genuine artificial intelligence would unlock the puzzle of artificial intelligence. It's possible that a genuinely artificially intelligent device could be made by assembling the required components and training it, while never at any stage knowing how those components are interacting to result in intelligence. The fact that artificial structures like computer programs are deterministic sometimes gives the false impression that this goes along with being predictable. But it doesn't. Determinism of the parts of a system doesn't automatically entail predictability of the large scale behaviour of a system. Conversely, randomness in the behaviour of the parts doesn't automatically entail randomness of the large scale behaviour, as the solidity of the laws of thermodynamics demonstrates.
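[Editor's note: the determinism-without-predictability point can be made concrete with a standard example that is not from the original post. The logistic map is a fully deterministic rule, yet at r = 4 its long-run behaviour is practically unpredictable: orbits from starting points that differ by one part in a trillion soon disagree completely.]

```python
def trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x), returning the whole orbit."""
    orbit, x = [], x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = trajectory(0.3)
b = trajectory(0.3 + 1e-12)  # an imperceptibly different starting point
# The rule is fully deterministic (same input always gives the same orbit),
# yet the two orbits soon disagree completely:
print(max(abs(p - q) for p, q in zip(a, b)))
```

Knowing the rule exactly is not enough to predict the system in practice, which is the sense in which determinism of the parts fails to deliver predictability of the whole.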
- LuckyR
- Moderator
- Posts: 7935
- Joined: January 18th, 2015, 1:16 am
Re: The Chinese Room
WanderingGaze22 wrote: ↑November 23rd, 2021, 4:04 am Imagine someone who knows only English alone in a room following English instructions for manipulating strings of Chinese characters, leading those outside of the room to believe that the person inside the room understands Chinese.
This thought experiment takes a Black Box issue, in this case intelligence or thought, and supposes that the audience can see into the Black Box and, lo, we are underwhelmed by what we see.
As long as our lack of understanding requires Black Box analogies (which might be forever) proposing that this or that thing resides within will remain a lame thought experiment with little if any insight into understanding properties of Real Life.
-
- Posts: 4696
- Joined: February 1st, 2017, 1:06 am
Re: The Chinese Room
"This is problematic because it may be convincing to us humans that true comprehension is going on where there is none."WanderingGaze22 wrote: ↑November 23rd, 2021, 4:04 am Imagine someone who knows only English alone in a room following English instructions for manipulating strings of Chinese characters, leading those outside of the room to believe that the person inside the room understands Chinese.
This thought experiment, conceived by Philosopher John Searle supposed to show that while advanced computers may appear to understand and converse in natural language, they are not capable of understanding language. This is because computers are strictly limited to the exchange of symbolic strings. The Chinese Room was meant to be an argument against artificial intelligence, but it’s a rather simplistic view of current AI and where it’s likely headed, including the advent of generalized, learning intelligence, (AGI) and the potential for artificial consciousness.
That said, Searle is right in his suggestion that there is the potential for an AI to act and behave as if there’s conscious awareness and understanding. This is problematic because it may be convincing to us humans that true comprehension is going on where there is none. We should be careful, therefore, around seemingly “smart” machine minds. And so the question remains: Will philosophy unlock the puzzle that is artificial intelligence?
Unfortunately, the only means we have for determining whether "true comprehension" is going on or not, in any person (or machine) other than ourselves, is that person's (or machine's) behavior.
This is the gist of the "Other Minds" response to Searle's argument. It is summarized here:
https://plato.stanford.edu/entries/chin ... heMindRepl
Searle's response, as quoted in the above article, is,
"The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn’t be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In ‘cognitive sciences’ one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects."
But that response doesn't answer the objection; it merely re-frames the question from, "Do other people or machines have conscious states?," to, "When are we warranted in imputing conscious states to other people or machines?"
But the answer is the same: when they exhibit certain qualifying behaviors.
Searle's original paper, BTW, is here:
https://rintintin.colorado.edu/~vancecd ... Searle.pdf
-
- Posts: 4696
- Joined: February 1st, 2017, 1:06 am
Re: The Chinese Room
Steve3007 wrote:The fact that artificial structures like computer programs are deterministic sometimes gives the false impression that this goes along with being predictable. But it doesn't. Determinism of the parts of a system doesn't automatically entail predictability of the large scale behaviour of a system.
I disagree. That the behavior of a system is predictable is the only ground we have for claiming that it is deterministic. If it is not predictable, such a claim is a hypothesis, a conjecture, a theoretical postulate, not an empirically observable fact.
Steve3007 wrote:Conversely, randomness in the behaviour of the parts doesn't automatically entail randomness of the large scale behaviour, as the solidity of the laws of thermodynamics demonstrates.
Same thing. The only ground we have for claiming a phenomenon is random is that it is unpredictable. But you're right that the fact that individual processes occurring in a system are unpredictable doesn't mean that their aggregate behavior is unpredictable. E.g., we can't predict when a particular radium atom will decay, but we can predict that half of a lump of radium will decay within 1600 years.
Steve3007 wrote:It's perfectly possible that not even the invention of genuine artificial intelligence would unlock the puzzle of artificial intelligence. It's possible that a genuinely artificially intelligent device could be made by assembling the required components and training it, while never at any stage knowing how those components are interacting to result in intelligence.
Agree with the first sentence there, but not the second. Systems which display consciousness, or sentience --- the biological systems we take as paradigms of those properties --- are complex adaptive systems (CASs), which are inherently unpredictable, because the number of variables involved is astronomically high. And because they're not predictable, we're not entitled to claim they are "deterministic" systems. There we have the opposite of the radium example --- we can predict that if neuron A is stimulated it will stimulate neurons B and C, but we can't predict the aggregate behavior of all 86 billion neurons.
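[Editor's note: the radium point can be sketched numerically with a toy simulation of my own, measuring time in units of the 1600-year half-life. Each atom's fate is random, yet the surviving fraction of a large sample is predictable to within a fraction of a percent.]

```python
import random

def surviving_fraction(n_atoms, elapsed_half_lives, rng):
    # Each atom independently survives t half-lives with probability (1/2)**t.
    p = 0.5 ** elapsed_half_lives
    return sum(rng.random() < p for _ in range(n_atoms)) / n_atoms

rng = random.Random(0)  # fixed seed so the sketch is reproducible
# No individual atom's decay time is predictable, but the aggregate is:
print(surviving_fraction(100_000, 1.0, rng))  # very close to 0.5
```

The per-atom randomness washes out at scale, which is exactly the thermodynamics-style point being made: randomness of the parts need not entail randomness of the whole.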
- grantcas
- New Trial Member
- Posts: 2
- Joined: November 28th, 2021, 2:27 pm
Re: The Chinese Room
The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is available at Jeff Krichmar's website at UC Irvine.
- GrayArea
- Posts: 374
- Joined: March 16th, 2021, 12:17 am
Re: The Chinese Room
grantcas wrote: ↑November 28th, 2021, 2:30 pm It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
This sounds interesting. Do you mind summarizing the theory's own description of how consciousness arises for me?
- grantcas
- New Trial Member
- Posts: 2
- Joined: November 28th, 2021, 2:27 pm
Re: The Chinese Room
- Leontiskos
- Posts: 695
- Joined: July 20th, 2021, 11:27 pm
- Favorite Philosopher: Aristotle and Aquinas
Re: The Chinese Room
GE Morton wrote: ↑November 26th, 2021, 3:00 pm I disagree. That the behavior of a system is predictable is the only ground we have for claiming that it is deterministic. If it is not predictable such a claim is a hypothesis, a conjecture, a theoretical postulate, not an empirically observable fact...
This is a strong post all around.
Steve3007 seems to have trouble with this relation between predictability and determinism. He thinks that something can be "both deterministic and fundamentally unpredictable" (link). I have never seen him answer when pressed on this topic (for example, here).
Socrates: He's like that, Hippias, not refined. He's garbage, he cares about nothing but the truth.
- Thomyum2
- Posts: 366
- Joined: June 10th, 2019, 4:21 pm
- Favorite Philosopher: Robert Pirsig + William James
Re: The Chinese Room
GE Morton wrote: ↑November 26th, 2021, 3:00 pm I disagree. That the behavior of a system is predictable is the only ground we have for claiming that it is deterministic. If it is not predictable such a claim is a hypothesis, a conjecture, a theoretical postulate, not an empirically observable fact...
Leontiskos wrote: ↑December 23rd, 2021, 1:56 am This is a strong post all around. @Steve3007 seems to have trouble with this relation between predictability and determinism. He thinks that something can be "both deterministic and fundamentally unpredictable" (link). I have never seen him answer when pressed on this topic (for example, here).
I think there is validity to both statements. From a scientific/mathematical perspective, a system can be deterministic yet unpredictable, simply because we lack sufficient precision in our observational tools or the computational capacity to predict an outcome. But something indeterminate can never be predicted, regardless of how much observational data we gather. In other words, there is a semantic distinction: the term 'predictable' describes our abilities, whereas 'deterministic' describes the nature of the system under observation - 'predictable' is a practical matter; 'deterministic' is theoretical.
But GE has a valid point too - from a philosophical perspective, since the claim that anything is or is not deterministic rests on the particular theory or model that makes those predictions, it can only be held true as long as empirical evidence continues to support it, until new or contradictory evidence renders the existing model no longer effective at predicting outcomes.
— Epictetus
- Leontiskos
- Posts: 695
- Joined: July 20th, 2021, 11:27 pm
- Favorite Philosopher: Aristotle and Aquinas
Re: The Chinese Room
Thomyum2 wrote: ↑December 23rd, 2021, 2:12 pm I think there is validity to both statements. From a scientific/mathematical perspective, a system can be deterministic yet unpredictable, simply because we lack sufficient precision in our observational tools or the computational capacity to predict an outcome...
True, something could be predictable in principle but not in practice, but here is the original quote:
GE Morton wrote: ↑November 26th, 2021, 3:00 pm I disagree. That the behavior of a system is predictable is the only ground we have for claiming that it is deterministic. If it is not predictable such a claim is a hypothesis, a conjecture, a theoretical postulate, not an empirically observable fact...
Socrates: He's like that, Hippias, not refined. He's garbage, he cares about nothing but the truth.
-
- Posts: 4696
- Joined: February 1st, 2017, 1:06 am
Re: The Chinese Room
"In principle" here can only mean, "In theory." And as you noted earlier, whether a theory is viable depends upon its predictions panning out.Leontiskos wrote: ↑December 23rd, 2021, 5:54 pm
True, something could be predictable in principle but not in practice, but here is the original quote:
- Count Lucanor
- Posts: 2318
- Joined: May 6th, 2017, 5:08 pm
- Favorite Philosopher: Umberto Eco
- Location: Panama
- Contact:
Re: The Chinese Room
WanderingGaze22 wrote: ↑November 23rd, 2021, 4:04 am The Chinese Room was meant to be an argument against artificial intelligence, but it's a rather simplistic view of current AI and where it's likely headed, including the advent of artificial general intelligence (AGI) and the potential for artificial consciousness.
Would you explain how the experiment fails to address the core claim of AI: that minds work as computers do? Even if one devised a far more complex setup than the one in Searle's thought experiment, the argument stands: the difference between syntactic and semantic rules. In what way does current AI involve semantics?
― Marcus Tullius Cicero
- Mragan1994
- New Trial Member
- Posts: 1
- Joined: June 19th, 2022, 10:34 am
Re: The Chinese Room
I will place a cat to the right of this man and ask him in English, "Where is the cat?", and he will know how to respond.
In Spanish, I can also ask, "¿Dónde está el gato?", and he will give the same response, assuming the cat is in the same place.
However, I can also ask him, "Where is the gato?", and he will not be able to respond correctly, just as the iPhone's Siri cannot combine two languages.
Someone who understands both English and Spanish would be able to connect "gato" with "cat" to form an English statement, or connect "where is the" with "¿dónde está el", and be able to respond.
This man would not be able to respond, and thus cannot understand Spanish.
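[Editor's note: the mixed-language test above can be sketched as a toy rule-following room. The sentences and canned replies here are my own illustrative stand-ins, not anything from Searle: the room matches whole strings against per-language rules, so it answers correctly in either language but has no cross-language concept of "cat".]

```python
# A lookup table standing in for the room's rulebook: pure symbol matching.
RULES = {
    "where is the cat?": "The cat is to the right of the man.",
    "¿dónde está el gato?": "El gato está a la derecha del hombre.",
}

def room(question):
    # No rule fires unless the whole normalized string matches a known form.
    return RULES.get(question.strip().lower(), "no rule matches")

print(room("Where is the cat?"))     # English rule fires
print(room("¿Dónde está el gato?"))  # Spanish rule fires
print(room("Where is the gato?"))    # mixed languages: no rule matches
```

Because the rules operate on whole strings rather than meanings, there is nothing in the table that links "gato" to "cat", which is exactly the syntactic/semantic gap the post describes.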