JamesOfSeattle wrote:
Gertie, this is very close, so let me quibble:

Okay,

Gertie wrote:
A quale is a phenomenal experience resulting from stimuli/input (eg seeing a blue sky, or the feel of a toaster 'wanting' to heat its element), which is an entity/system's description (representation?) of the associated physical processes involved

JamesOfSeattle wrote:
I would say descriptions of the meanings of the associated physical processes. The entity/system might have no access/knowledge of the physical processes.

I think this might boil down to the same thing.

Gertie wrote:
which gives meaning/purpose to those physical processes (light hitting retina, interacting with visual cortex, or toaster lever being pushed down etc) for that entity/system,

JamesOfSeattle wrote:
I prefer to say meaning is inherent in the input (semantic information), and purpose determines which meaning is picked out.

Meaning is inherent in the photons which end up hitting my eye, in the apple I'm looking at, or...?
Nitpick of the quibbles -
I can see how your quibbles could apply to emotional type responses - pressing the lever makes the toaster 'want' to get hot, or seeing an apple makes me 'want' to eat it.
But what about raw perception which doesn't inspire a physical response and isn't functionally significant? I perceive things all the time (my coffee cup in my peripheral vision now, the tree outside my window) which I don't respond to in a physical way (except in my brain) and which don't give me a functional purpose at the point the phenomenal experience arises.
I can see that this 'monitoring' of the environment overall has a purpose, but seeing or hearing something in particular which is functionally irrelevant in that moment is still an experiential state, still a quale.
And my larger point still stands: a functional description of consciousness is cool, and potentially very useful, but it shouldn't be confused with an explanation. Hence imo you can't extrapolate from 'this is how consciousness functions for known conscious creatures' to 'this is how it must be for toasters', especially when we have a known and testable alternative explanation for how toasters work which is purely physical. (We do for humans too of course, but that's part of the quandary, not an explanation.)