Philosophical Zombies

Consul
Posts: 6036
Joined: February 21st, 2014, 6:32 am
Location: Germany

Re: Philosophical Zombies

Post by Consul »

Djacob7 wrote:Good question. I'll make it a bit more pronounced:
Is there anything psychological in what this zombie said: "When my baby died I experienced a feeling I've never felt before. It was horrible! I've felt pain before, but nothing like this experience."
Does the zombie know/understand what she just said?
If (events of) knowing and understanding are kinds of cognitive or intellectual experiences, then zombies know and understand nothing.
Anyway, all experiential reports of zombies, i.e. everything they say about their own experiences, are false and fictional, since they don't really have any experiences (despite their saying that they do). Moreover, it is arguable that the capacity for making experiential reports (e.g. about one's emotions) requires the possession of (phenomenal) concepts of experiences, and that the acquisition of such concepts requires phenomenal consciousness and (higher-order) introspective consciousness, which is precisely what zombies lack—such that it is unclear how they could acquire and possess concepts of experiential states and properties (qualia). Without such concepts, zombies couldn't even make fictional reports about their nonexistent experiences.
"We may philosophize well or ill, but we must philosophize." – Wilfrid Sellars
Chili
Posts: 392
Joined: September 29th, 2017, 4:59 pm

Post by Chili »

If zombies are machines, then they don't have abilities - only responses.

If the reports of experiences are taken to be evidence of actual subjective experiences, this implies dualism, or the incompleteness of mechanism in some other way.

Post by Consul »

Chili wrote:If zombies are machines, then they don't have abilities - only responses.
I don't think the having of abilities (active powers, powers to do something) requires consciousness. Of course, whether abilities of nonconscious agents are properly called mental or psychological is another question.

"Machines that we take to be experienceless appear to share many abilities with us. They appear to be able to play chess, perform calculations, detect and distinguish the colors of things, sort circles from squares, estimate distances, judge size, and so on. When we consider ourselves, we think of these abilities as mental abilities or at least as abilities that involve mental abilities. And obviously we take it that to possess a mental ability is to possess a mental property. But when we consider machines, most of us are very strongly disinclined to call the abilities they have 'mental abilities' or to think that they possess any mental properties at all. And the main reason why this is so is that we suppose that machines are experienceless."

(Strawson, Galen. Mental Reality. 2nd ed. Cambridge, MA: MIT Press, 2009. p. 146)

"I will call the properties that we and experienceless machines appear to share the 'X properties'. Here are five possible responses to the problem the X properties appear to pose.

First response. [A] We are mental beings. [B] We and experienceless beings have the X properties in common. [C] We count the X properties as mental properties in our own case. Hence [D] we must count the X properties as mental properties in the case of experienceless beings. For if [E] the X properties are correctly considered as mental properties in some cases, then [F] the X properties are correctly considered as mental properties in all cases. Hence [G] experienceless beings can have mental properties.

Second response (extends the first). Given [A] through [F], it follows [G] that experienceless beings can have mental properties. Hence [H] we should count experienceless beings as mental beings. For having a mental property is sufficient for being a mental being.

Third response (contraposes the argument in the second response). Since [not-H] experienceless beings are obviously not mental beings, it follows that [not-G] they cannot have mental properties, for having a mental property is sufficient for being a mental being. So assuming that it is indeed true that we and experienceless beings have the X properties in common, it follows that [not-E] the X properties are not mental properties after all, properly speaking.

Fourth response (rejects the reasoning shared by the second and third responses, as well as part of the reasoning in the first). [A] We are mental beings. [not-H] Experienceless beings are not mental beings. It is true that [C] we count the X properties as mental properties in our own case. But it does not follow that [D] we must count the X properties as mental properties in the case of experienceless beings. For [E] does not entail [F]. It does not follow, from the fact that [E] the X properties are correctly considered as mental properties in some cases, that [F] the X properties are correctly considered as mental properties in all cases.

Rejection of the move from [C] to [D] (and hence of the move from [E] to [F]) need not sound completely counterintuitive, although it cheerfully violates Leibniz's Law. It may naturally be said that my ability to play chess is (obviously) a mental property that I possess, whereas the computer's ability to play chess is (obviously) not a mental property that it possesses. On this view, both the machine and I can play chess, but it only counts as a mental ability in my case.

Fifth response (arguably just a variant of the fourth). It is true that [F] the X properties are mental properties tout court. But it is false that we and the experienceless beings have the X properties in common. That is, it is in fact wrong to suppose that the X properties are really properties of such a kind that the experienceless machine and I both possess them. The machine cannot really play chess at all, or tell the difference between circles and squares—not in the sense that I can. To play chess is necessarily to have (or to be disposed to have) certain experiences, in addition to making certain moves, following certain rules, and so on. Leibniz's Law is preserved."


(Strawson, Galen. Mental Reality. 2nd ed. Cambridge, MA: MIT Press, 2009. pp. 146-48)

Chili wrote:If the reports of experiences are taken to be evidence of actual subjective experiences, this implies dualism, or the incompleteness of mechanism in some other way.


No, veridical experiential reports don't imply dualism, since actual subjective experiences may well be purely physicochemical processes in the brain.
"We may philosophize well or ill, but we must philosophize." – Wilfrid Sellars

Post by Chili »

I am referring more to the definition of "ability" rather than to consciousness, and to how our intuitions of "ability" run counter to our models of cause-and-effect.

A being or machine governed purely by cause and effect can be caused to do things, but cannot up-and-cause things to happen. Thus, no abilities. You run the simulation again, you get the same results. If the system has the "ability" to surprise us, that's on us in this situation.

-- Updated October 27th, 2017, 1:19 pm to add the following --
Consul wrote: No, veridical experiential reports don't imply dualism, since actual subjective experiences may well be purely physicochemical processes in the brain.
Do you believe that vending machines may be having subjective experiences? Why or why not?

Post by Consul »

Chili wrote:I am referring more to the definition of "ability" rather than consciousness, and how our intuitions of "ability" run counter to [our] models of cause-and-effect.

A being / machine purely of cause-and-effect can be caused to do things, but cannot up-and-cause things to happen. Thus, no abilities. You run the simulation again, you get the same results. If the system has the "ability" to surprise us, that's on us in this situation.
What do you think are "our intuitions of 'ability'"?
The distinction between doings and mere happenings is tricky to define; but, intuitively speaking, I see no reason to deny that nonconscious machines can have abilities (active, causal powers)—at least those machines which are intelligent (computational) agents in the AI sense.
Chili wrote:
Consul wrote:No, veridical experiential reports don't imply dualism, since actual subjective experiences may well be purely physicochemical processes in the brain.
Do you believe that vending machines may be having subjective experiences? Why or why not?
No, I don't, because they (their CPUs) lack the requisite degree of structural and functional complexity that is (naturally) exhibited by organic brains only. (The human brain is the most complex material object in the known universe.)
"We may philosophize well or ill, but we must philosophize." – Wilfrid Sellars

Post by Chili »

Take a Newtonian machine composed of switches that only switch when they are switched. You can make it more and more complex - adding switches - until the cows come home, and yet at any particular juncture you still have just dominoes falling over. It never up-and-causes anything; it never reaches a requisite degree such that it does anything other than whatever is implied by the previous moment's set of switch settings.
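[Editor's note: the switch-machine picture above can be made concrete in a few lines of code. This is a toy illustration written for this thread, not any poster's actual model; the update rule is arbitrary, since any fixed rule makes the same point about determinism.]

```python
# A toy "Newtonian machine of switches": each moment's switch settings
# are fully determined by the previous moment's settings. However many
# switches you add, re-running it from the same initial settings
# always replays exactly the same history.

def step(switches):
    # Arbitrary deterministic rule: a switch is on next moment iff it
    # differed from its left neighbor this moment.
    n = len(switches)
    return [switches[i] != switches[i - 1] for i in range(n)]

def run(initial, steps):
    s = list(initial)
    history = [s]
    for _ in range(steps):
        s = step(s)
        history.append(s)
    return history

first = run([True, False, False, False], 10)
second = run([True, False, False, False], 10)
print(first == second)  # True: the rerun is identical, no surprises
```

Adding more switches only lengthens the lists; it never changes the fact that each state is implied by the one before it.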

Post by Consul »

Chili wrote:Take a Newtonian machine composed of switches that only switch when they are switched. You can make it more and more complex - adding switches - until the cows come home, and yet at any particular juncture you still have just dominoes falling over. It never up-and-causes anything; it never reaches a requisite degree such that it does anything other than whatever is implied by the previous moment's set of switch settings.
Do we ever do "anything other than whatever is implied by the previous moment's set of switch settings" in our brains? This is the question of causal determinism. Do you think agency or the having of abilities (active, causal powers) requires libertarian free will in the following sense?

"[E]ach of us, when we act, is a prime mover unmoved. In doing what we do, we cause certain events to happen, and nothing—or no one—causes us to cause those events to happen."

(Chisholm, Roderick M. On Metaphysics. Minneapolis, MN: University of Minnesota Press, 1989. p. 12)

-- Updated October 27th, 2017, 3:03 pm to add the following --
Consul wrote:
Chili wrote:Do you believe that vending machines may be having subjective experiences? Why or why not?
No, I don't, because they (their CPUs) lack the requisite degree of structural and functional complexity that is (naturally) exhibited by organic brains only. (The human brain is the most complex material object in the known universe.)
I'm not skeptical about the possibility of non-biological artificial intelligence (AI), but I'm skeptical about the possibility of non-biological artificial consciousness/experience. I tend to believe that to have consciousness is to have a conscious life, with there being an essential connection between consciousness and life, such that the creation of artificial experience (AE) would entail the creation of artificial life (AL).
"We may philosophize well or ill, but we must philosophize." – Wilfrid Sellars

Post by Chili »

Consul wrote:Do we ever do "anything other than whatever is implied by the previous moment's set of switch settings" in our brains? This is the question of causal determinism. Do you think agency or the having of abilities (active, causal powers) requires libertarian free will in the following sense?

"[E]ach of us, when we act, is a prime mover unmoved. In doing what we do, we cause certain events to happen, and nothing—or no one—causes us to cause those events to happen."


Free will is one of those things which seems pretty straightforward until you analyze life.

Agency is another name for this same subject.

It is a completely different topic from discussions around first-person subjective experience.

Even if one is unabashedly dualistic, with a mind that sits over matter, pulling levers, a whole set of questions enters the discussion about where the impulses and understandings that have arrived in that mind come from, and the very idea of being "unmoved" comes apart. Why does one want what one wants? Why does one understand what one understands? Much of that content can be placed outside the individual mind right off the bat, and other elements in the mind make sense, on reflection, as coming from outside.
JamesOfSeattle
Premium Member
Posts: 509
Joined: October 16th, 2015, 11:20 pm

Post by JamesOfSeattle »

Consul, I appreciate your providing references to the literature, and I assume that you are not arguing by appeal to authority. I have some serious problems with the terminology frequently used in these references, particularly with regard to the terms "state" and "feel". For example, you wrote
Arguably, experiential states or properties (qualia) are not functionally reducible and completely describable in terms of cause-effect or input-output relations.
What is an experiential state? A state is a condition in which nothing is changing. How can there be experience if nothing is changing? If a system is in a state, then that is a state prior to a given experience, or in the middle of processing, or in the state of just having had an experience. Chalmers uses similarly problematic language when he writes
At the root of all this lie two quite distinct concepts of mind. The first is the phenomenal concept of mind. This is the concept of mind as conscious experience, and of a mental state as a consciously experienced mental state.
What is a consciously experienced mental state? Is that the state immediately after an experience?

Another problem is the way many of these philosophers use the word "feel". Take Chalmers statement:
On the phenomenal concept, mind is characterized by the way it feels; on the psychological concept, mind is characterized by what it does.
But feeling is just another type of doing. To say that a pin feels sharp is not really a statement about a property of a pin. It's a statement about what happens when you touch the pin.

Now given a model of input --> [agent] --> output, I can make a case for what is a mental state. A mental state is a functional description of the state of the agent such that if the agent is presented with x, it will produce an output of y. If the agent is in a different state the output would be different (including the possibility of no output). Apparently (judging by the SEP article), most (all?) functionalist accounts also refer to "states" exclusively. The functional descriptions of these states are what Chalmers refers to as psychological states. The problem is that, assuming the input --> [agent] --> output model, a functional description of the agent does not tell the whole story. Said story also requires a functional description of the input and output. I submit that the functional description of the input is what we call qualia. More specifically, qualia are references to the meanings of symbolic signs which are inputs. Thus, at a physical level the agent may be responding to an input of a specific neurotransmitter. At the functional level, that same agent may be responding to an input of "red". At the functional level, the agent doesn't know anything about neurotransmitters. It only knows about redness.
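[Editor's note: the input --> [agent] --> output model above can be sketched as a lookup table. This is a toy sketch written for this thread, not any poster's actual proposal; the state names, stimuli, and outputs are invented for illustration.]

```python
# A toy sketch of the input -> [agent] -> output model: a "mental
# state", functionally described, is just the input-output mapping
# the agent currently supports. Same input, different state, different
# output (including no output at all).

class Agent:
    # Maps (state, stimulus) pairs to outputs; None means no output.
    RESPONSES = {
        ("alert", "red"): "stop",
        ("alert", "green"): "go",
        ("drowsy", "red"): None,     # same input, different state: no output
        ("drowsy", "green"): "go",
    }

    def __init__(self, state):
        self.state = state

    def respond(self, stimulus):
        return self.RESPONSES.get((self.state, stimulus))

# At this functional level the agent "knows" only about "red" and
# "green", not about whatever physically realizes those inputs.
print(Agent("alert").respond("red"))   # stop
print(Agent("drowsy").respond("red"))  # None
```

On this picture the physical realizer (a neurotransmitter, a voltage) never appears in the functional description; only the input's functional label ("red") does, which is the role the post assigns to qualia.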

*

Post by Consul »

JamesOfSeattle wrote:Consul, I appreciate your providing references to the literature, and I assume that you are not arguing by reference to authority. I have some serious problems with the terminology frequently used in these references, particularly with regards to the terms "state" and "feel". For example, you wrote
Arguably, experiential states or properties (qualia) are not functionally reducible and completely describable in terms of cause-effect or input-output relations.
What is an experiential state? A state is a condition in which nothing is changing. How can there be experience if nothing is changing? If a system is in a state, then that is a state prior to a given experience, or in the middle of processing, or in the state of just having had an experience.
There is a connotation to the effect that states are "unchanges", i.e. static/nondynamic; but most philosophers (of mind) use "state" in a quite general sense lacking this connotation. Nonetheless, one can draw an ontological distinction between a narrow and a broad concept of states:

1. states
1.1 narrow: only static/nondynamic ("unchanges", nonevents, nonhappenings, nonprocesses)
1.2 broad: static/nondynamic or nonstatic/dynamic (including changes, events, happenings, processes)

In the broad ontological sense of the term, events and processes can be defined as dynamic states; but if you prefer to regard states as nondynamic by definition, you may use the term "event" or "process" instead (and speak of mental/experiential events/processes rather than of mental/experiential states).

By the way, some experiences are unchanging states. For example, when I look at a uniformly red wall without moving my head, I have a static visual impression of redness.
JamesOfSeattle wrote:Chalmers uses similarly problematic language when he writes
At the root of all this lie two quite distinct concepts of mind. The first is the phenomenal concept of mind. This is the concept of mind as conscious experience, and of a mental state as a consciously experienced mental state.
What is a consciously experienced mental state? Is that the state immediately after an experience?
No, it's simply the experiential state, the experience itself.
JamesOfSeattle wrote:Another problem is the way many of these philosophers use the word "feel". Take Chalmers statement:
On the phenomenal concept, mind is characterized by the way it feels; on the psychological concept, mind is characterized by what it does.
But feeling is just another type of doing. To say that a pin feels sharp is not really a statement about a property of a pin. It's a statement about what happens when you touch the pin.
For example, Charlie Broad writes that…

"The irreducible minimum involved in mentality would seem to be the fact which we express by the phrase 'feeling somehow'[.]"

(Broad, C. D. The Mind and its Place in Nature. 1925. Reprint, Abingdon: Routledge, 2000. p. 634)

He and Chalmers use "feeling" or "feel" in the most general sense, in which it doesn't only refer to tactile feelings associated with the sense of touch, but to the intrinsic qualitative aspects of experiences in general—their "raw feels", as Edward Tolman called them, which term refers to what it feels or is like for a subject to experience something.
It is not the case that feelings in this sense are doings. They are the feeling-qualities, the qualia of subjective experience (sensation, emotion, and imagination), which are constitutive of it.
JamesOfSeattle wrote:Now given a model of input --[agent] --> output, I can make a case for what is a mental state. A mental state is a functional description of the state of the agent such that if the agent is presented with x, it will produce an output of y. If the agent is in a different state the output would be different (including the possibility of no output). Apparently, (by what I read in the SEP article), most (all?) functionalist accounts also refer to "states" exclusively. The functional descriptions of these states are what Chalmers refers to as psychological states. The problem is that, assuming the input --> [agent] --> output model, a functional description of the agent does not tell the whole story. Said story also requires a functional description of the input and output. I submit that the functional description of the input is what we call qualia. More specifically, qualia are references to the meanings of symbolic signs which are inputs. Thus, at a physical level the agent may be responding to an input of a specific neurotransmitter. At the functional level, that same agent may be responding to an input of "red". At the functional level, the agent doesn't know anything about neurotransmitters. It only knows about redness.
It is doubtless true that subjective experiences innerly reveal nothing about the neurophysiological processes underlying them. An introspective neurology practiced from the first-person point of view is impossible in principle.

Qualia are the intrinsic contents of experiences (experiential states/events). They may be said to result from (body-external or -internal) input and result in output (behavior, or inner psychological or physiological reactions), but their being is not reducible to and not exhaustively describable in terms of their causal profile or role (provided they have any, being non-epiphenomenal).

David Armstrong defines the concept of a mental state as follows:

"The concept of a mental state is primarily the concept of a state of the person apt for bringing about a certain sort of behaviour. Sacrificing all accuracy for brevity we can say that, although mind is not behaviour, it is the cause of behaviour. In the case of some mental states only there are also states of the person apt for being brought about by a certain sort of stimulus. But this latter formula is a secondary one."
(p. 82)

"[T]he mind is not to be identified with behaviour, but only with the inner principle of behaviour."
(p. 85)

"Suppose now we accept for argument's sake the view that in talking about mental states we are simply talking about states of the person apt for the bringing about of behaviour of a certain sort."
(p. 89)

"[T]he concept of a mental state is the concept of a state of the person apt for the production of certain sorts of behaviour[.]"
(p. 90)

(Armstrong, D. M. A Materialist Theory of the Mind. London: Routledge & Kegan Paul, 1968.)

(Having been a reductive materialist, Armstrong believed that all mental states are physicochemical states of the brain; so for him mental causation was just a kind of physical/chemical causation.)

Experiential states are mental states, and a functionalist definition such as Armstrong's presupposes that mental states are non-epiphenomenal. But this presupposition certainly begs the question against those who think that they are in fact epiphenomenal, i.e. causally powerless, effectless. An epiphenomenal mental state is certainly not "apt for bringing about a certain sort of behaviour."

Even if the qualitative contents of experiential states are non-epiphenomenal, the point is that there is more to them than their causal role or function. Qualia themselves as experiential/phenomenal qualities aren't (reducible to) functional properties.

"F is a functional property (or kind) just in case F can be characterized by a definition of the following form:

For something x to have F (or to be an F) =def for x to have some property P such that C(P), where C(P) is a specification of the causal work that P is supposed to do in x.

… Now we can define what it is for a property to 'realize,' or to be a 'realizer' of, a functional property:

Let F be a functional property defined by a functional definition, as above. Property Q is said to realize F, or be a realizer or a realization of F, in system x if and only if C(Q), that is, Q fits the specification C in x (which is to say, Q in fact performs the specified causal work in system x)."

(pp. 120-21)

"Schematically, we can think of reductions of this kind in the following three steps:

Step 1. Property, F, to be reduced is given a functional definition, or characterization, of the following form:
Having F = def having some property, or mechanism, P, such that C(P), where C specifies the causal task to be performed by P.

Step 2. Find the property, or mechanism, that does the causal work specified by C—that is, identify the 'realizer' (or 'realizers') of F, in the system, or population of systems, under investigation.

Step 3. Develop a theory that explains how the realizer(s) of F so identified perform the causal task C in the given system or population."

(p. 280)

(Kim, Jaegwon. Philosophy of Mind. 2nd ed. Boulder, CO: Westview, 2006.)

"Are mental properties in general functionalizable and hence functionally reducible? Or are they 'emergent' and irreducible? I believe that there is reason to think that intentional/cognitive properties are functionalizable. However, I am with those who believe that phenomenal properties are not functional properties.

I believe there are substantial and weighty reasons, and a sufficiently broad consensus among the philosophers who work in this area (19), to believe that qualia are functionally irreducible.

(19: To mention a few: Ned Block, Christopher Hill, Frank Jackson, Joseph Levine, Colin McGinn, and Brian McLaughlin.)"


(Kim, Jaegwon. Physicalism, or Something Near Enough. Princeton, NJ: Princeton University Press, 2005. pp. 26-7)
"We may philosophize well or ill, but we must philosophize." – Wilfrid Sellars
Djacob7
Posts: 35
Joined: October 23rd, 2017, 4:26 am

Post by Djacob7 »

Consul wrote: "Without such concepts, zombies couldn't even make fictional reports about their nonexistent experiences."

Wow! That pleases me, if I'm reading it right.
It seems like you're agreeing with me that P-zombies are absurd and inconceivable.

Post by JamesOfSeattle »

Consul, I accept that there can be a narrow sense and a broader sense of the idea of "state". So in the broader sense, you can be in a state of seeing a red wall, but in fact you are perceiving the red wall over and over again. You are in the state of perceiving the wall, say, 40 times per second. I suspect the problems arise when philosophers start using one sense and then switch to the other without telling you. That's why I think they should be more specific. Equating a state to a single process or event, however, is, I think, simply an abuse of language.

As for all of those hard-to-follow descriptions of "function", maybe you can help me by applying them to a specific case, namely, a lock on a door. In describing the state of the lock, would you include a reference to a key? If so, does the key have a functional role? If so, how would you describe the functional role of the key? Is the key part of the causal description of the lock? Does the key have a causal function in the absence of the lock?

You write:
Qualia are the intrinsic contents of experiences (experiential states/events). They may be said to result from (body-external or -internal) input and result in output (behavior, or inner psychological or physiological reactions), but their being is not reducible to and not exhaustively describable in terms of their causal profile or role (provided they have any, being non-epiphenomenal).
You say this as if it is a foregone conclusion that "their being is not reducible". What if qualia do not result from an event, like something you have in hand after the event, but instead are part of the description of an event? The intrinsic content of an event includes the functional description of the input. Any time such an event happens, it can be said to be an experience of or about the input. Any experience of "red" will necessarily have an input whose functional description is "red". Any such experience can be said to be a feeling of "red" (or "redness"). Any such experience can be said to be very much like other experiences of "red" (such as "red + ball") and somewhat like experiences of "blue", but not so much like, say, experiences of "anxiety" or "2". The degree to which event/experience A is "like" event/experience B will depend largely on how much the functional descriptions of the inputs and/or outputs are "like" each other.

*

Post by Djacob7 »

A P-zombie is materially identical to a normal, conscious, human (assuming consciousness exists anywhere).
Its brain is likewise identical to a human brain, down to every atom and subatomic particle... but functions differently?
You have two identical cars coming out of a factory, but one operates completely differently than the other. So differently in fact, that in one you steer with two hands, and in the other with your thoughts. Steering with thoughts is conceivable, but not being one iota different materially from a normal car isn't.

Post by Consul »

Djacob7 wrote:Consul wrote: "Without such concepts, zombies couldn't even make fictional reports about their nonexistent experiences."

Wow! That pleases me, if I'm reading it right.
It seems like you're agreeing with me that P-zombies are absurd and inconceivable.
I do believe that zombie duplicates of conscious beings are impossible, but this is not necessarily to say that they are inconceivable—unless, of course, inconceivability implies impossibility. Here I use "x is inconceivable (for us)" synonymously with "we are unable to form a logically coherent and rationally intelligible concept of x".

"Now, will our philosophical zombies possess phenomenal concepts? They will of course act as if they do. Nonetheless, suppose that they do not possess phenomenal concepts. Then it seems there must be something about the world that they are lacking which accounts for this failing. Presumably, it is their lack of consciousness. But they have everything physical required to possess such concepts. Therefore there is some feature of reality beyond the physical lurking in the vicinity. We as conscious beings in possession of phenomenal concepts, have or are acquainted with this feature. Thus on this horn of the dilemma, we lack a physicalist account of phenomenal concepts.

On the other hand, suppose that the zombies do come to possess phenomenal concepts (an extremely bizarre supposition in my view once one starts to think about how it could be true). Zombies thus know what they are talking about when they say things like ‘now I know what phenomenal red looks like’ but they are radically mistaken given that they are completely unconscious. The epistemic situation of the zombies is totally unlike our own in this case, and could not serve to explain why there is an explanatory gap.

I am inclined to think that phenomenal concepts must in some sense contain or at least directly involve the phenomenal feature which they designate. In that case, it is impossible for zombies to attain them, although evidently they could generate neural structures which physically act in precisely the same way as do the neural correlates or realizations of our own genuine phenomenal concepts."


(Seager, William. Theories of Consciousness: An Introduction and Assessment. 2nd ed. New York: Routledge, 2016. pp. 230-1)

I agree with Seager that zombies cannot acquire and possess phenomenal concepts of phenomenal qualities, because those concepts require first-personal, experiential acquaintance with what they are concepts of.

However (to argue against my own statement above), the "zombiephiles", who defend the conceivability/possibility of zombies, can reply that zombies can at least pretend to have and understand phenomenal concepts (of subjective experiences): a language module in their brain (with the requisite vocabulary) enables them to produce and utter words and sentences that seem to express phenomenal concepts and to refer to actual phenomenal qualities and the corresponding experiences containing them. We (as nonzombies) understand those apparently experience-describing words and sentences uttered by a zombie and are easily fooled into believing that a nonzombie is speaking to us; but the zombie himself, who uses psychological/phenomenological words, doesn't understand what he's saying, having no idea what he's talking about, since he is an experienceless zombie. Still, it is arguable that zombies could use our public psychological language (and its vocabulary) in this way and thereby behave linguistically as if they had and understood concepts of subjective experiences.

-- Updated October 28th, 2017, 7:08 pm to add the following --

Nevertheless, (seemingly) experience-describing/-expressing utterances made by zombies are and remain problematic:

"Zombies’ utterances. Suppose I smell roasting coffee beans and say, ‘Mm! I love that smell!’. Everyone would rightly assume I was talking about my experience. But now suppose my zombie twin produces the same utterance. He too seems to be talking about an experience, but in fact he isn't because he's just a zombie. Is he mistaken? Is he lying? Could his utterance somehow be interpreted as true, or is it totally without truth value? Nigel Thomas (1996) argues that ‘any line that zombiphiles take on these questions will get them into serious trouble’."

Source: https://plato.stanford.edu/entries/zombies/
"We may philosophize well or ill, but we must philosophize." – Wilfrid Sellars
Chili
Posts: 392
Joined: September 29th, 2017, 4:59 pm

Re: Philosophical Zombies

Post by Chili »

The simplest computer program is one where you ask it "are you conscious" and it types back "yes".
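That trivial program can be made concrete with a toy sketch (a hypothetical illustration in Python, not anyone's actual proposal):

```python
# A toy "zombie" responder: it affirms consciousness with no inner
# experience behind the answer -- just a mapping from input to output.
def respond(question: str) -> str:
    if "are you conscious" in question.strip().lower():
        return "yes"
    return "I don't understand."

print(respond("Are you conscious?"))  # prints "yes"
```

Nothing in the program distinguishes a sincere report from a canned one, which is exactly the evidential problem the zombie thought experiment raises for verbal reports of experience.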
Suppose I smell roasting coffee beans and say, ‘Mm! I love that smell!’. Everyone would rightly assume I was talking about my experience.
We all believe, at least to some extent, that our bodies and brains function according to physical cause and effect. Most of the example above can be understood in terms of such cause-and-effect chains: coffee particles wafting, the olfactory bulb being stimulated, further neurological events that we understand well, and then, at some point, a question mark. The magic happens: a subjective experience. From there, well-understood neurological events culminate in the vocal utterance about loving the smell. From the perspective of a scientist who is not just listening to the individual casually in a cafe, but also has an assortment of brain-measuring devices, most of what is seen happening could just as well be happening in a zombie, for all he knows.

"The philosopher Georges Rey once told me that he has no sentient experiences. He lost them after a bicycle accident when he was 15. Since then, he insists, he has been a zombie. I assume he is speaking tongue-in-cheek, but of course I have no way of knowing, and that is his point." - Pinker, How the Mind Works

The true hunt must be for whatever in the observer's mind decides that the coffee-smeller is conscious and not a zombie.