The Chinese Room

WanderingGaze22
Posts: 121
Joined: June 9th, 2021, 12:39 am

The Chinese Room

Post by WanderingGaze22 »

Imagine someone who knows only English, alone in a room, following English instructions for manipulating strings of Chinese characters, leading those outside the room to believe that the person inside understands Chinese.

This thought experiment, conceived by the philosopher John Searle, is supposed to show that while advanced computers may appear to understand and converse in natural language, they are not capable of understanding language, because computers are strictly limited to the exchange of symbolic strings. The Chinese Room was meant to be an argument against artificial intelligence, but it offers a rather simplistic view of current AI and where it is likely headed, including the advent of artificial general intelligence (AGI) and the potential for artificial consciousness.

That said, Searle is right to suggest that an AI could act and behave as if there were conscious awareness and understanding. This is problematic because it may convince us humans that true comprehension is going on where there is none. We should be careful, therefore, around seemingly “smart” machine minds. And so the question remains: Will philosophy unlock the puzzle that is artificial intelligence?
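
To make the purely syntactic character of the setup concrete, here is a toy sketch in Python (my own illustration, not Searle's actual rule book; the Chinese strings and the rules are invented for the example). The program matches an incoming string of symbols against a lookup table and copies out the paired response; nothing in it ever interprets what the symbols mean.

```python
# A deliberately dumb "rule book": the person in the room matches an incoming
# string of symbols against a pattern and copies out the paired response.
# The Chinese strings and rules below are invented purely for illustration;
# nothing here models real Chinese or a real conversation system.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # rule 1: if you receive this shape, hand back that shape
    "今天天气怎么样？": "今天天气很好。",   # rule 2
}

def room(incoming: str) -> str:
    """Follow the rule book mechanically; no symbol is ever interpreted."""
    return RULE_BOOK.get(incoming, "对不起，我不明白。")  # default shape for unknown input

if __name__ == "__main__":
    # To the person outside, the answers look fluent; inside, only shapes are matched.
    for note in ["你好吗？", "今天天气怎么样？"]:
        print(note, "->", room(note))
```

However cleverly a rule book like this were extended, it would still be shape-matching all the way down, which is exactly the intuition the thought experiment trades on.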
Steve3007
Posts: 10351
Joined: June 15th, 2011, 5:53 pm
Favorite Philosopher: Eratosthenes of Cyrene
Location: UK

Re: The Chinese Room

Post by Steve3007 »

WanderingGaze22 wrote: Will philosophy unlock the puzzle that is artificial intelligence?
It's perfectly possible that not even the invention of genuine artificial intelligence would unlock the puzzle of artificial intelligence. It's possible that a genuinely artificially intelligent device could be made by assembling the required components and training it, while never at any stage knowing how those components are interacting to result in intelligence. The fact that artificial structures like computer programs are deterministic sometimes gives the false impression that this goes along with being predictable. But it doesn't. Determinism of the parts of a system doesn't automatically entail predictability of the large scale behaviour of a system. Conversely, randomness in the behaviour of the parts doesn't automatically entail randomness of the large scale behaviour, as the solidity of the laws of thermodynamics demonstrates.
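
As a minimal illustration of that last point (my own example, using the textbook logistic map rather than anything specific to AI): the update rule below is fully deterministic, yet starting values that differ by one part in a trillion soon produce trajectories that bear no resemblance to each other, so knowing the rule does not, in practice, make the long-run behaviour predictable.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n). The rule is fully deterministic,
# but in the chaotic regime (r = 4) a rounding-error-sized difference in the
# starting value grows until the two runs look completely unrelated.

def trajectory(x0: float, r: float = 4.0, steps: int = 60) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000000000)   # one starting point
b = trajectory(0.200000000001)   # the "same" point, off by one part in a trillion

for n in (0, 10, 30, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}")
# By around step 50 the two deterministic runs have completely diverged.
```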
Even men with steel hearts love to see a dog on the pitch.
LuckyR
Moderator
Posts: 5729
Joined: January 18th, 2015, 1:16 am

Re: The Chinese Room

Post by LuckyR »

WanderingGaze22 wrote: November 23rd, 2021, 4:04 am
Imagine someone who knows only English, alone in a room, following English instructions for manipulating strings of Chinese characters, leading those outside the room to believe that the person inside understands Chinese.

This thought experiment, conceived by the philosopher John Searle, is supposed to show that while advanced computers may appear to understand and converse in natural language, they are not capable of understanding language, because computers are strictly limited to the exchange of symbolic strings. The Chinese Room was meant to be an argument against artificial intelligence, but it offers a rather simplistic view of current AI and where it is likely headed, including the advent of artificial general intelligence (AGI) and the potential for artificial consciousness.

That said, Searle is right to suggest that an AI could act and behave as if there were conscious awareness and understanding. This is problematic because it may convince us humans that true comprehension is going on where there is none. We should be careful, therefore, around seemingly “smart” machine minds. And so the question remains: Will philosophy unlock the puzzle that is artificial intelligence?
This thought experiment takes a Black Box issue, in this case intelligence or thought, supposes that the audience can see into the Black Box, and lo, we are underwhelmed by what we see.

As long as our lack of understanding requires Black Box analogies (which might be forever), proposing that this or that thing resides within will remain a lame thought experiment, with little if any insight into the properties of Real Life.
"As usual... it depends."
GE Morton
Posts: 2521
Joined: February 1st, 2017, 1:06 am

Re: The Chinese Room

Post by GE Morton »

WanderingGaze22 wrote: November 23rd, 2021, 4:04 am
Imagine someone who knows only English, alone in a room, following English instructions for manipulating strings of Chinese characters, leading those outside the room to believe that the person inside understands Chinese.

This thought experiment, conceived by the philosopher John Searle, is supposed to show that while advanced computers may appear to understand and converse in natural language, they are not capable of understanding language, because computers are strictly limited to the exchange of symbolic strings. The Chinese Room was meant to be an argument against artificial intelligence, but it offers a rather simplistic view of current AI and where it is likely headed, including the advent of artificial general intelligence (AGI) and the potential for artificial consciousness.

That said, Searle is right to suggest that an AI could act and behave as if there were conscious awareness and understanding. This is problematic because it may convince us humans that true comprehension is going on where there is none. We should be careful, therefore, around seemingly “smart” machine minds. And so the question remains: Will philosophy unlock the puzzle that is artificial intelligence?
"This is problematic because it may be convincing to us humans that true comprehension is going on where there is none."

Unfortunately, the only means we have for determining whether "true comprehension" is going on or not, in any person (or machine) other than ourselves, is that person's (or machine's) behavior.

This is the gist of the "Other Minds" response to Searle's argument. It is summarized here:

https://plato.stanford.edu/entries/chin ... heMindRepl

Searle's response, as quoted in the above article, is,

"The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn’t be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In ‘cognitive sciences’ one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects."

But that response doesn't answer the objection; it merely re-frames the question from "Do other people or machines have conscious states?" to "When are we warranted in imputing conscious states to other people or machines?"

But the answer is the same: when they exhibit certain qualifying behaviors.

Searle's original paper, BTW, is here:

https://rintintin.colorado.edu/~vancecd ... Searle.pdf
GE Morton
Posts: 2521
Joined: February 1st, 2017, 1:06 am

Re: The Chinese Room

Post by GE Morton »

Steve3007 wrote: November 23rd, 2021, 7:18 am
The fact that artificial structures like computer programs are deterministic sometimes gives the false impression that this goes along with being predictable. But it doesn't.
I disagree. That the behavior of a system is predictable is the only ground we have for claiming that it is deterministic. If it is not predictable, such a claim is a hypothesis, a conjecture, a theoretical postulate, not an empirically observable fact.
Conversely, randomness in the behaviour of the parts doesn't automatically entail randomness of the large scale behaviour, as the solidity of the laws of thermodynamics demonstrates.
Same thing. The only ground we have for claiming a phenomenon is random is that it is unpredictable. But you're right that the fact that individual processes occurring in a system are unpredictable doesn't mean that their aggregate behavior is unpredictable. E.g., we can't predict when a particular radium atom will decay, but we can predict that half of a lump of radium will decay in about 1,600 years.
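
A rough Monte Carlo sketch of that point (my own example; it treats each atom's decay as a standard memoryless exponential process, and the sample size is arbitrary): every simulated atom gets a random, individually unpredictable decay time, yet the fraction surviving one half-life comes out close to one half on every run.

```python
import math
import random

HALF_LIFE_YEARS = 1600.0  # approximate half-life of radium-226

def decay_time() -> float:
    """Draw one atom's (unpredictable) decay time from an exponential distribution."""
    return random.expovariate(math.log(2) / HALF_LIFE_YEARS)

n = 100_000
survivors = sum(1 for _ in range(n) if decay_time() > HALF_LIFE_YEARS)
print(f"{survivors} of {n} simulated atoms outlive one half-life (~{survivors / n:.1%})")
# Each atom's decay time is random; the aggregate fraction is reliably close to 50%.
```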
It's perfectly possible that not even the invention of genuine artificial intelligence would unlock the puzzle of artificial intelligence. It's possible that a genuinely artificially intelligent device could be made by assembling the required components and training it, while never at any stage knowing how those components are interacting to result in intelligence. The fact that artificial structures like computer programs are deterministic sometimes gives the false impression that this goes along with being predictable.
Agree with the first sentence there, but not the second. Systems which display consciousness, or sentience --- the biological systems we take as paradigms of those properties --- are complex adaptive systems (CAS's), which are inherently unpredictable, because the number of variables involved is astronomically high. But because they're not predictable we're not entitled to claim they are "deterministic" systems. There we have the opposite of the radium example --- we can predict that if neuron A is stimulated it will stimulate neurons B and C. But we can't predict the aggregate behavior of all 86 billion neurons.
grantcas
New Trial Member
Posts: 1
Joined: Yesterday, 2:27 pm

Re: The Chinese Room

Post by grantcas »

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human adult-level conscious machine? My bet is on the late Gerald Edelman's extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is available at Jeff Krichmar's website at UC Irvine.