Belindi, I really appreciate your trying to engage with me on this topic. The concepts I'm trying to communicate are not intuitive and I appreciate the practice.
Regarding your several examples of "shared digestion", I submit that those examples are of serial digestion as opposed to simultaneous digestion. A pertinent example would be if the twins shared a stomach and both benefitted from the digestion that happened there.
Belindi wrote:But [machines] can potentially share the exact same experience.
[Note: in the following I make statements based on my ideas, which ideas are not generally accepted yet. I think they will be, eventually.]
I disagree. The experiences can be extremely similar (two identical machines running identical software), but experience requires hardware, and experiences on different hardware cannot be shared. Even the twins, who share some, but not all, of the neuronal hardware, do not have the same experiences. An experience is defined (by me) as an event in which an agent is given input and produces output. [NOTE: the "agent" is not what we assign consciousness to. We assign consciousness to a larger system which includes the agent.] In the case of the twins, they share parts of the agent (via sharing part of the thalamus, at least), and so they also share some of the inputs (via the shared part of the thalamus). Also, the sequelae of the experience (qualia) from experiences of one may become available to the agent of the other. This is how one might have an idea of what the other is thinking.
But have AI machines personal and unique memories? If so, my notion of what computers are and do is very wrong.
I think it very likely your notion of what computers do is incomplete, as Steve explains above. I would also suggest that your notion of information is incomplete. Don't worry, you're in good company. Many/most people's concept of information is based on Claude Shannon's work, but I suggest his work applies only to a subset of information, namely, coded symbolic information.
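To make that point concrete: here is a minimal sketch (my own illustration, not anything from Shannon's paper itself) of the standard Shannon entropy calculation. Notice that it depends only on the frequencies of the symbols in a coded message, never on what the symbols mean to anyone. That is the sense in which it covers coded symbolic information and nothing more.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy in bits per symbol. It is a function of
    symbol frequencies alone; the meaning of the symbols plays
    no role in the calculation."""
    counts = Counter(message)
    total = len(message)
    # H = sum over symbols of p * log2(1/p)
    return sum((c / total) * math.log2(total / c) for c in counts.values())

print(shannon_entropy("aaaa"))  # → 0.0 (one symbol, no uncertainty)
print(shannon_entropy("abab"))  # → 1.0 (two equally likely symbols)
```

Two messages with the same symbol statistics get the same entropy no matter how differently a reader would understand them, which is exactly why I say Shannon's measure captures only a subset of what we ordinarily call information.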
I would have thought that computers can potentially share any information whatsoever.
Only general-purpose/universal computers (as described by Turing) can potentially share any coded information whatsoever. And such computers can simulate any analogue computer to any accuracy short of perfection, but the closer you want to get to perfection, the longer the calculations take. So the neuromorphic chips being produced by IBM are more like analogue computers, and are specifically not universal computers. The programs that run on the chips can also be run on universal computers, but they run much slower.
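The accuracy-versus-time tradeoff is easy to see in a toy case. Below is a minimal sketch (my own example, not anything specific to IBM's chips): a digital Euler-method simulation of a simple continuous (analogue) process, exponential decay. The more steps you spend, the closer you get to the true analogue value, but you never reach it exactly.

```python
import math

def simulate_decay(steps: int, t_end: float = 1.0) -> float:
    """Digitally approximate the analogue system dx/dt = -x,
    x(0) = 1, using Euler steps. More steps = more accuracy,
    but also more computation."""
    dt = t_end / steps
    x = 1.0
    for _ in range(steps):
        x += -x * dt  # discrete update approximating continuous decay
    return x

exact = math.exp(-1.0)  # the true "analogue" value at t = 1
for steps in (10, 100, 1000, 10000):
    print(steps, abs(simulate_decay(steps) - exact))
```

The error shrinks roughly in proportion to 1/steps: each extra digit of accuracy costs about ten times the work, and perfection would cost infinitely many steps. That is the sense in which a universal computer can simulate an analogue one "to any accuracy short of perfection."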
Qualia, on the other hand, cannot ever be transferred (except perhaps in the special case of those conjoined twins).
I would not attribute anything else to qualia besides the "original experience". If further experiences are added, the singular experience becomes infiltrated by memories which incorporate the quale or qualia in a concept. For instance, there is your concept of 'red', which is made of thousands of memories of red things and sometimes includes one further quale which you label 'red' and file it, too, away with your concept 'red'.
I would accept this, but then I would say if exactly one experience happened, you would never know it. For example, if only one of the "red" photo receptors in one of your eyes triggered exactly one time, you would never know it. It's only when (I'm guessing) thousands of them trigger repeatedly over a sufficiently long period of time that you notice.
I accept that machines may be made that do what the brain does in terms of qualia. When this happens the machines will in important regards be saddled with responsibilities and rights. Like us they will need to stay humanised and not revert to narrow machine-think. If the qualia-capable machines cannot be made so that they are capable of responsibility, they will have to be ethically controlled, as they will be terribly dangerous.
I think you are conflating perception and intelligence and emotion. I think those are three very distinct things.
-- Updated September 14th, 2017, 8:33 pm to add the following --
A quale seems to be far too high a level concept for that.
I think that a quale is as low level as you can get and still be aware. I think that a quale is not a concept but is a simple percept. Even simpler than a percept, if such is possible.
I'm with Belindi on this one.
I was taught that neuronal action, which is electrochemical, is all-or-nothing = discrete.
This is true, but the decision to fire is analogue.
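A toy model makes the distinction clear. The sketch below (a deliberately crude caricature, not a faithful neuron model; the threshold and leak values are arbitrary) is a leaky integrate-and-fire unit: the membrane potential accumulates graded, continuously-valued input, and only the output spike is all-or-nothing.

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron. The membrane
    potential v sums analogue (graded) input with leak, while
    the emitted spike is discrete: 1 or 0, nothing in between."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current      # analogue: decision to fire builds gradually
        if v >= threshold:
            spikes.append(1)        # discrete, all-or-nothing spike
            v = 0.0                 # reset after firing
        else:
            spikes.append(0)
    return spikes

print(integrate_and_fire([0.3, 0.3, 0.3, 0.3, 0.9, 0.1]))
# → [0, 0, 0, 1, 0, 0]
```

Four sub-threshold inputs of 0.3 gradually push the potential over threshold, so the fourth one triggers the spike: the output is binary, but whether and when it fires is determined by a continuous accumulation.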