Gertie wrote:OK, but I'm talking about subjective experience, not just physical interactions.
I'm not clear, are you saying you believe rocks and neutrinos have some kind of subjective experience? Or not?
No :) As we all know, rocks seem to just persist and react chemically, mechanically and electrically.
OK, thanks for the clarification. I'd just say that you're probably not a Panpsychist then, as I understand it. Panpsychism implies rocks would subjectively experience something, we just don't recognise it, because consciousness is a fundamental part of the universe, of all stuff.
I'll try another approach - a nested hierarchy:
1. Chemical reaction
2. Reflexes - suites of chemical reactions
3. Emotions - suites of reflexes
4. Control - of the above.
So a rock is much more reactive than a neutrino, but neither has reflexes, nor those things that emerge from reflexes. You can extrapolate easily enough from there. That's the gist, even if probably flawed. Feel free to improve on it.
Abiogenesis was an emergent, exponential leap, so microbes can be thought of as exponentially more conscious than nonliving chemicals (like rocks), just as multicellular organisms are exponentially more aware and flexible than microbes; then we have brained animals, then humans, then institutions, and so on. Each time the evolutionary advantage is strong enough to create local areas of dominance in a population, which we then label "emergence".
Aren't you just conflating consciousness here with responsiveness? To me the interesting and difficult questions about consciousness are to do with its qualitative experiential nature. Isn't that the question we're really stumped on when we ask can computers become conscious? Can they subjectively experience what it's like to be a computer? That's the question fundamental hypotheses like panpsychism try to address.
I'll address all at once. Another member had an objection here and I asked him where we would draw the line between an entity that experiences something and one that experiences nothing. He drew the line at flatworms, the simplest organisms with brains. For him, brains and a sense of experience were synonymous. A solid counter, I thought.
Still - and you knew there'd be a still :) - is that to say that brainless organisms with nerve nets experience the same - nothing - as ones without nerve nets? Does being an amoeba feel identical to being a salt crystal - with both feeling "nothing" equally? Are the "nothings" of amoebas and salt crystals really nothing, or perhaps just very small things? Maybe the difference lies in memory? For instance, the non-experiences of being in deep sleep or a coma are really experiences, just subtle ones that we don't remember. Everything about us is still present, only dormant.
I think of it as "consciousness chauvinism": our waking consciousness is so huge compared with that of most other entities that we dismiss things below a certain level of relative simplicity as "consciousness nulls". I would argue that they are not nulls, just variably small - invisibly small to our relatively huge awareness, just as atoms are invisible to our sight.
Gertie wrote:Reactions --> reflexes --> emotional responses --> control. We still largely react blindly. Only a small subset of our consciousness, as such, is "awake". Most of it is chemical reactions, reflexes, micro-emotions / sensations, with a small executive function at the top which, like any executive, takes the edifices that support it for granted and carries on as though it is all that matters :)
Well one of the things we don't understand about consciousness is whether it does actually have any executive control (mental causation). We haven't found an executive control centre in the brain, it seems rather to be a highly complex system of interacting subsystems. And it seems that the physical interactions of all those subsystems can at least in principle fully account for our behaviour, without invoking consciousness at all, no mental willing or deciding or emotions or control required.
Another mystery, which imo again points to the need for a more fundamental understanding of the nature of the relationship between 'the physical' and 'the mental'. The type of understanding which might allow us to know if a computer can meet the necessary and sufficient conditions for subjective experiencing.
Like an organisation, the brain certainly does need all those interrelated subsystems but the executive is clearly the cerebral cortex! Overrated, slow, overpaid (energy consumption), often just gets in the way and then takes all the credit :)
Gertie wrote:Are there ways of being in the world that are qualitatively better than what we think of as qualia?
Ooh interesting question! Can you expand a bit more?
Probably not, because qualia are all I know :) Still, as discussed earlier, human consciousness is novel in nature. I'm thinking of the next innovation.
Gertie wrote:Ever since humans stopped being peers to local species, forming our own exclusive colonies, we have failed to seriously consider other species. We didn't think about them in terms of responsiveness, flexibility, senses, complexity, sociability, and capacity to feel pain. Instead we assumed, conveniently, that other animals don't feel a thing, merely operating via reflex (#2 above), which we now know is profoundly blinkered.
Our careless chauvinistic attitude to the subjective experience of other species will/should be seen as shameful, genocidal. But it relies on us treating subjective experience as something special. It may have emerged/evolved as part of a process, but imo it's the unique basis for welfare considerations and moral oughts.
Agreed. It's at the heart of the thread's question because we famously have difficulty in comprehending nonhuman consciousness and awareness. In terms of animal welfare, my view is that if an animal looks like it's suffering then it probably is. With robots this assessment is less reliable because programming can mimic suffering, e.g. a chatbot saying, "You hurt my feelings". Maybe the only way to know would be for expert programmers to assess the code and discount possible mimicry?
Gertie wrote:What if RobotGreta is very convincing? Does the mask become the face, so to speak? Is the act of simulating consciousness its manifestation? I'd say "no". I think it possible that machines could follow their own evolution and, if useful, they may develop their own way of experiencing, but by then they won't be much like our machines today.
I think the more similar RobotGreta is to HumanGreta, the more likely it is to have subjective experience, that's just common sense. But I don't know what the necessary and sufficient conditions for subjective experience are, no-one does, so I don't know if RobotGreta could meet them. If she does, I agree it might well not be similar to our types of subjective experiences, maybe we wouldn't even recognise it as subjective experience at all. She'd be made of different stuff, with different processes, have different needs, so why would her 'desires', 'fears', etc be like ours?
This is maybe partly addressed above. Also, you have touched on a point I was making earlier - that there may be types of experience that we don't recognise as such. I imagine that machine "desires", at least early on, will be akin to how a car "dislikes" being driven too fast, too jerkily or too infrequently. It really dislikes having an empty tank and is not partial to water in its electrical system, and so on. I imagine that self-improving AI would "want" energy, materials and information. A bit like us, except that their connections will seemingly come via logic rather than affection.