How to get semantics from syntax
- JamesOfSeattle
- Premium Member
- Posts: 509
- Joined: October 16th, 2015, 11:20 pm
How to get semantics from syntax
As always, any explanation in philosophy will depend on how you define terms. In this case, the definitions of "value", "meaning", and "information" will be crucial, so I will address these definitions as they come up.
The concept of "value" first emerges when life emerges. On Earth before there was life there was no reason to talk about value. After there was at least single-celled life (self-reproducing), it made sense to talk about value. Value then, by definition, refers to any situation that increases the fitness/survivability of a given life form. It is also at this stage that you can refer to a single agent or self by specifying what physically counts as part of that agent. For single cells, that's pretty much the cell membrane and everything inside. Everything not part of the agent can be considered the "environment".
One way for agents to increase their fitness is by interacting with the environment. Let's assume for a given agent there are two important molecules in the environment: glucose and galactose. For whatever reason, for this agent, glucose is a nutrient and galactose is a poison. Let's assume that the agent already generates a protein that gets inserted into the membrane. That protein has nothing to do with glucose or galactose. Now let's assume the protein can be mutated slightly so that it can bind glucose and transport it inside the cell, and an alternative mutation will cause the protein to bind and transport galactose. Obviously, in a given population, those with the glucose-binding mutation will increase while the others decrease. Thus, we can say the glucose-binding variant has value.
At this point we can talk about information. I will define information as a set of measurable data embodied in a physical system. Any given physical system might contain an immense (infinite?) amount of data. For example, a golf ball will have a specific measurable diameter, mass, number of dimples, etc. That golf ball will also have a given number of molecules, atoms, neutrons, etc., each with a potentially measurable position, velocity, etc. The point is, reference to "information" embodied in the golf ball will necessarily reference a discrete subset of all the possible data. Thus, a reference to "information" will necessarily require a specified context. This context is called a level of abstraction, because you have abstracted away (removed) those variables that are not relevant.
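The idea that "information" always references a discrete subset of a system's data, relative to a level of abstraction, can be sketched in a few lines of code. This is only an illustration; the variable names and values below are invented for the example.

```python
# A physical system modeled as a (large) set of measurable variables.
# All values here are invented for the example.
golf_ball = {
    "diameter_mm": 42.7,
    "mass_g": 45.9,
    "dimple_count": 336,
    "molecule_count": 1.2e24,  # ...and, in principle, the position and
    "surface_temp_c": 21.0,    # velocity of every molecule, and so on
}

def information_at(system, level_of_abstraction):
    """Return only the variables relevant at this level of abstraction;
    every other variable has been 'abstracted away' (removed)."""
    return {var: system[var] for var in level_of_abstraction if var in system}

# The 'golfing' context references a discrete subset of all possible data.
golfing_view = information_at(golf_ball, ["diameter_mm", "mass_g", "dimple_count"])
print(golfing_view)  # {'diameter_mm': 42.7, 'mass_g': 45.9, 'dimple_count': 336}
```

The point of the sketch: "the information in the golf ball" is not one fixed thing; it is whatever subset the chosen context selects.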
To return to the example of our single-celled agent, we can say that a molecule of glucose has information, namely a set of measurable variables (relative locations of atoms), and so does a molecule of galactose. More importantly, the information in glucose is potentially discernible from the information in galactose. This means that it is possible to measure (interact with) a subset of the information in glucose such that the result of the interaction would be different if the glucose were instead galactose. And of course, that's what the glucose-binding molecule does.
And this is where "meaning" comes in. In the context of the glucose-binding molecule, the information that gets measured provides the "meaning" that the thing being measured is in fact a glucose molecule. If it (pseudo-)feels like a glucose, that means it's a glucose. That doesn't mean that the glucose-binding molecule can't be mistaken. It just means that, as far as the glucose-binding molecule is concerned, if it feels like a glucose, it will be treated like a glucose. That "feeling" (measurement of specific variables) "means" "bring it inside".
Let's call this kind of "meaning" the Formal meaning, because the importance of the meaning depends on the specific form being measured. In the next post I will describe other kinds of meaning.
-- Updated March 3rd, 2017, 5:18 pm to add the following --
[unfortunately, I don't know how to make this a separate post instead of an update ... ah well]
Now let's consider a slightly different scenario. Let's say our agent cell no longer cares about glucose or galactose insofar as nutrients or poisons are concerned. Both are completely inert for the agent. However, the agent now lives in an environment that contains prey (food for the agent). It just so happens that the prey tend to exude glucose. In this scenario, the glucose-binding molecule no longer internalizes the glucose, but instead re-orients the agent such that it moves toward the surface where the glucose molecule bound.
In this case the "meaning" is slightly different. In the previous scenario, it makes no difference where the glucose comes from. The agent gets value from the glucose itself. In this new scenario, the agent gets value not from the thing itself (it doesn't actually use the glucose) but from the material thing that it came from. So we can call this the "material" meaning.
Now let's consider another scenario. In this case, the prey does not exude the glucose. Instead, the prey excretes protein X, and protein X just happens to be very efficient at breaking down sucrose into fructose and glucose. So again, the "glucose-binding toward-orienting" is valuable for the agent. The information in the glucose is exactly the same, but the meaning is slightly different. The meaning is that there is prey nearby which is doing something (excreting X) and thereby efficiently generating glucose. We can call this the "efficient" meaning.
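The three scenarios above can be collapsed into a toy model: the agent performs exactly the same measurement in each case, and only the context determines what that measurement "means". A minimal sketch, with all names and strings invented for the example:

```python
# The agent always performs the same measurement: does the molecule's
# shape match the binding site? (A string comparison stands in for the
# physical measurement; everything here is invented for illustration.)
def binds_glucose(molecule):
    return molecule == "glucose"

# What the measurement "means" - which response has value - depends on
# the agent's context, not on the glucose itself.
MEANINGS = {
    "formal": "bring it inside (the glucose itself is the nutrient)",
    "material": "move toward it (glucose marks the prey that exuded it)",
    "efficient": "move toward it (glucose marks the prey's enzyme activity)",
}

def respond(molecule, context):
    """Same information in the molecule, different meaning per context."""
    if binds_glucose(molecule):
        return MEANINGS[context]
    return "ignore"
```

Note that `binds_glucose` is identical in all three contexts; only the consequent action changes, which is the sense in which identical information can carry different meanings.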
Okay, twenty points for anyone who sees what I did there. I described three kinds of meaning:
1. Formal,
2. Material,
3. Efficient
What's 4?
[yes, the analogy is not perfect, but it can be made better with some massaging. The point is, Aristotle is relevant here.]
-
- Posts: 2181
- Joined: January 7th, 2015, 7:09 am
Re: How to get semantics from syntax
Ingenious as ever James, but I'll try to explain why I still see a clear blue divide between the shared Objective quantitative realm of stuff and the private Subjective qualitative realm of experiencing, and why concepts like value and meaning belong in the latter.
(Bear in mind you're dealing with a scientific numpty, what I'm challenging are your underlying premises).
JamesOfSeattle wrote:
> The concept of "value" first emerges when life emerges. On Earth before there was life there was no reason to talk about value. After there was at least single-celled life (self-reproducing), it made sense to talk about value. Value then, by definition, refers to any situation that increases the fitness/survivability of a given life form.

I agree that before life there was no value, but I disagree that the biological ability to replicate is sufficient to create value.
Value is a property of Subjective Experience; it is subjective, experiential and qualitative by its very nature. So my claim is that only with the advent of entities able to experience a quality of life, to value whether they are 'comfortable' (having their basic experiential reward system needs met) or 'suffering' (experiencing their needs not being met), does Value enter the world. My toaster doesn't value being a toaster, because it doesn't experience being a toaster; likewise single-celled organisms (as far as we can tell). Hence it takes critters like you and me, who experience qualitative states and a quality of life, to understand value and to be of inherent value.
So Value is an experienced phenomenon. And there's no indication that single cells or toasters experience qualitative states which are of value to them.
You might respond that this is simply a matter of how we define 'value'. But I'd say we have come up with terms like 'value', 'meaning', 'purpose' and 'quality of life' in order to capture something unique to the ontological properties of subjective experiencing. Using such terms as you do would simply mean we'd need to invent new words to capture those properties, to distinguish them from the way you're using them, because toasters and single cells don't have those ontological experiential properties (as far as we know).
JamesOfSeattle wrote:
> At this point we can talk about information. I will define information as a set of measurable data embodied in a physical system. Any given physical system might contain an immense (infinite?) amount of data. For example, a golf ball will have a specific measurable diameter, mass, number of dimples, etc. That golf ball will also have a given number of molecules, atoms, neutrons, etc., each with a potentially measurable position, velocity, etc. The point is, reference to "information" embodied in the golf ball will necessarily reference a discrete subset of all the possible data. Thus, a reference to "information" will necessarily require a specified context. This context is called a level of abstraction, because you have abstracted away (removed) those variables that are not relevant.

I find Information to be a tricky concept. And it is a concept imo, not a 'thing in itself'. I think we agree from what you say here that 'information' is a description of actual 'things in themselves'? So the more ways you can describe something, the more 'information' you can say it embodies, as a sort of metaphor?
JamesOfSeattle wrote:
> And this is where "meaning" comes in. In the context of the glucose-binding molecule, the information that gets measured provides the "meaning" that the thing being measured is in fact a glucose molecule. If it (pseudo-)feels like a glucose, that means it's a glucose. That doesn't mean that the glucose-binding molecule can't be mistaken. It just means that, as far as the glucose-binding molecule is concerned, if it feels like a glucose, it will be treated like a glucose. That "feeling" (measurement of specific variables) "means" "bring it inside".

I see what you're getting at, but as far as we know it doesn't have any experiential 'feeling'; it's simply matter and physics following the known rules of matter and physics. Like a computer. It doesn't 'understand' the role of glucose, it doesn't 'desire' glucose, it's simply physics in action. Like a crystal growing. Just 'syntax'.
Searle makes the point that computers can functionally do what they do without needing to experience 'understanding' of what they're doing. Encoded descriptions are fed in, the symbols are manipulated according to pre-set rules, and the 'correct' answers come out. Physical processes fully account for everything.
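Searle's point is that rule-following alone accounts for the input-output behaviour. A minimal sketch of such a "understanding-free" symbol manipulator, with an invented rule table:

```python
# A toy symbol manipulator in the spirit of Searle's Chinese Room:
# encoded symbols go in, pre-set rules are applied, the 'correct'
# symbols come out - and at no point is understanding required.
# (The rule table is invented for the example.)
RULES = {
    "squiggle": "squoggle",
    "ping": "pong",
}

def manipulate(symbol):
    # Pure syntax: look up the input symbol, emit its paired output.
    return RULES.get(symbol, "no-rule")

print(manipulate("ping"))  # pong
```

Nothing in the lookup "knows" what `ping` or `pong` refer to; the physical process of matching and emitting symbols fully accounts for the behaviour.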
Brains encode descriptions of the world in the form of biological physical interactions (the sensory systems), these systems integrate with physical memory, reward, cognitive and other systems, which interact with the physical motor systems, and behaviour comes out. So far no significant difference, physical systems following the laws of physics. Except! There's something about how physical brain systems do what they do which gives rise to this extra ontological 'something' of subjective experience.
And it's this subjective experiencing which brings subjective properties of meaning, understanding, 'qualitativity' and value into the world. Brings Mattering into the world. Because the quality of experience matters - it matters to me if I starve, it feels bad, in a way it doesn't matter if a cell doesn't get glucose, or a toaster gets broken, or a computer 'dies'. These things only matter, only have value and meaning, because of the qualitative nature of the experiencing. I can suffer, toasters can't. If I die I lose something of value because I have a quality of life; a single cell doesn't.
By saying that non-experiencing cells have value, it's you who is conferring value on something incapable of experiencing it or knowing what it is. Back when only single-celled life existed, there were no critters experiencing qualitative states and no one to confer value, so it didn't exist. If all life but single-celled organisms was wiped out tomorrow, value would no longer be experienced; nothing of value would exist (except in their potential to evolve into creatures who experience value in the form of qualitative experiential states).
- JamesOfSeattle
- Premium Member
- Posts: 509
- Joined: October 16th, 2015, 11:20 pm
Re: How to get semantics from syntax
Admittedly, what I'm calling value at the cellular level (maybe I could call it proto-value) is not exactly the same as value as you experience it, but the point of the post was that the development of the latter begins with the former and proceeds by an understandable process. My intention was to carry the discussion through to neurons, then organizations of neurons, and finish with what I think is the pertinent organization of neurons in humans. Then I realized how much more text that would need, and that the question more appropriate for this forum could be asked in a much simpler form. [See my new thread].
Nevertheless, if I am right, I think the biggest hurdle that you will face is understanding that what you are referring to as 'experiential feeling' is actually, necessarily, more than one experience, namely, the experience and then the experience of the results of the first experience (and then the results of those experiences). What I mean by this is that when you refer to the experience of seeing red, you're actually referring to things that happen after you experience red, including the short-term memory of seeing red, and any other memories that are triggered by seeing red, and any physiological effects that occur because you saw red. Memories and other feelings resulting from physiological effects are subsequent experiences, but we generally refer to all of them together as the "feeling of seeing red" because they seem like they're all happening at the same time. Obviously, the experience had by a single cell will (presumably) not involve actual memories, although in some cases (neurons) there may be vague analogs.
Hope that is helpful.
-
- Posts: 2181
- Joined: January 7th, 2015, 7:09 am
Re: How to get semantics from syntax
Research will eventually reveal which combinations of patterns of neural activity are required to give rise to subjective experience (feeling), that's a technical question and whatever the scientists tell me I'll accept, I have no 'preference'. We're on our way with that type of empirical observation, it's doable. I'm pretty sure it will be complex, but maybe can be broken down in the ways you suggest, I'm open to what the research tells us.
What that won't answer is why certain patterns of neuronal (or possibly other physical) activity give rise to subjective experience. The hard question, the explanatory gap - called that because our current scientific models simply don't include anything about subjective experience (feeling): why it arises at all, its properties, its equations, its units, its causal properties, what laws govern it - nothing. Look at the standard model - it's not there.
You're saying the potential for subjective qualities like value, purpose and desire emerge evolutionarily, which is clearly true. And all the way back at the single cell organism level, we can see evolutionarily useful physical processes happening which are analogous in retrospect to desire, purpose and value. And of course there is an evolutionary history from that single celled organism to a human being, who does have subjective experience.
At some point in that history, a critter was born which could perhaps subjectively experience (feel) the difference between light and dark, which caused a reflex response to move towards the light or somesuch (perhaps to avoid a predator casting a shadow like spiders do). There will have been something about that first experiencing critter's physical make up which was significantly different. But we still don't know why the physical difference would lead to this new property of experiencing arising. We could theoretically take that critter, put it in an fMRI machine, dissect and scan it and get a full picture of all its physical processes, but our scientific model would still not be able to explain why this new property of experiencing arose in this particular physical system. We could describe the processes, we could describe their functions, but not explain why those particular physical processes give rise to 'feeling'. That's the issue. That's what still wouldn't have been explained, no matter what terminology or framing you use.
So imo we have to say either that it's unexplained, perhaps unexplainable, using our current scientific model of how the world works, or that all experienced feeling isn't actually real (at which point we've lost the plot!).