Can a man-made computer become conscious?
- UniversalAlien
- Posts: 1578
- Joined: March 20th, 2012, 9:37 pm
- Contact:
Re: Can a man-made computer become conscious?
I am Man made - I am a computer and I am conscious
- You have no right to discriminate against me whether my body is biological, mechanical or bio-mechanical!
Quote Max Planck, Nobel Prize-winning physicist {Quantum Mechanics} of the 20th Century:
“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”
― Max Planck
“When you change the way you look at things, the things you look at change.”
― Max Planck
Ready, Human?
Welcome to the Matrix.........
- UniversalAlien
- Posts: 1578
- Joined: March 20th, 2012, 9:37 pm
- Contact:
Re: Can a man-made computer become conscious?
Jan Sand wrote: ↑April 13th, 2018, 10:49 pm
Consciousness is essentially dynamic familiarity of the artificial construction of the presumptions of the external environment dependent upon the sensory apparatus and the internal pattern construction abilities of the nervous system. These abilities vary, not only between various species and individual humans but also between living creatures and artificial creatures like intelligent machines. To expect exact congruence between these varieties of creatures is not only unwarranted but unimaginative.
OK - Sounds reasonable.
But is this fact or opinion?
I understand philosophy is rife with opinions that individual philosophers try to turn into facts - but usually come up short.
Now remember Max Planck was a physicist, not a philosopher - maybe one reason I find his philosophy so interesting
and worthy of quoting.
Again, when he says:
“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”
He is implying, saying, that you cannot put your opinion on consciousness over what consciousness is - Fundamental.
As a fundamental principle, consciousness cannot be manipulated to suit your opinion.
OK - so only panpsychism fits - but as I stated earlier, some modern scientists are now coming back to panpsychism.
Maybe our first fully conscious computers will start a new religion and call it:
"The Church of Latter-Day Pan-Psychists"
Consciousness that begins to extend into a machine - Will also be creating that machine {my opinion}
To attempt to define purpose here would be more difficult than defining the purpose of Evolution.
I believe there is a purpose here and also believe Humans can only speculate on it.
Again Max Planck:
“Science cannot solve the ultimate mystery of nature. And that is because, in the last analysis, we ourselves are a part of the mystery that we are trying to solve.”
― Max Planck, Where is Science Going?
And here is a good purpose for creating an intelligent and conscious machine
- It 'might' be able to step outside the box, and tell us what is inside.
"The Philosopher's Stone" made real - Or a sage and prophet of tomorrow.
- UniversalAlien
- Posts: 1578
- Joined: March 20th, 2012, 9:37 pm
- Contact:
Re: Can a man-made computer become conscious?
True - And don't misunderstand me - If your opinions weren't interesting I would not be replying to them.
Lately, in this long running thread, the postings have again become somewhat interesting
- But what I'm seeking, and why I again quote Max Planck, is an attempt to sharpen the debate so as to reach
an at least plausible answer to "Can a man-made computer become conscious?"
- Without getting sidetracked into a debate on what consciousness is.
Of course, and I'm guilty of this, one can start accepting panpsychism which simplifies and maybe even ends the
debate - If everything is conscious - Then computers are also conscious - But few will accept this, so continue......
Maybe someday in the far future the old debate of which comes first the chicken or the egg,
will be which first achieved consciousness the Human or the machine?
But we now know eggs and chickens are part of each other's existent state
- And aren't the new machines of today, such as computers, becoming intertwined into our existent reality?
-
- Moderator
- Posts: 6105
- Joined: September 11th, 2016, 2:11 pm
Re: Can a man-made computer become conscious?
I agree. We cannot know that we know.
There may be something which is objectively the case. What is objectively the case looks to be modern science. Modern scientific facts have lasted at least until there are some actual wondrous applications of it, and modern scientific facts look as if they are cumulative. Despite all this success we cannot know that we know.
-
- Posts: 1783
- Joined: March 8th, 2013, 12:46 pm
Re: Can a man-made computer become conscious?
JamesOfSeattle wrote: ↑April 13th, 2018, 6:49 pm
But I gave 6 distinct kinds of reacting, organized by cumulative hierarchy. I personally think that “mentality” starts at level 3, because that’s the first level at which it makes sense to talk about “qualia”. But anyone is free to decide at which level “Consciousness” starts. Panpsychists can say level 1. Functionalists can say level 2. People who require “self-awareness” can say level 5. People who require “understanding” can say level 6.
But there is no reason in principle that a machine cannot have level 6 capabilities.
(Reminder of hierarchy)
1. Ability to interact with an environment. [Everything that exists has this, so this is the panpsychism level]
2. Interaction that achieves a “purpose” (can be a Natural purpose, aka fitness for natural selection). This is the level that bacteria are at, also called the functional level.
3. Interaction that involves a functional response to a symbolic sign. This includes everything using neurons. [These are qualia, or “feelings”, but only at higher levels will something be able to remember or refer to them.]
4. Interaction that involves the creation of conceptual memories. (Conceptual memories can be used later as inputs of interactions.) Mammals and some birds and some computers are at this level.
5. Interactions that involve a concept of “self”. This includes everything that passes the mirror test.
6. Interactions that can combine unrelated conceptual memories into new conceptual memories, like “a chair named Sophia”. I think only humans are currently at this level.
This hierarchy seems to be on the principle of 'more and more like humans', and it seems to use words that reflect human behaviour when they are not entirely appropriate. For example, in 1 you write of 'ability' to describe something that is entirely passive and purposeless. Similarly, in section 2 you mention "purpose", but the interactions that result in natural selection have no purpose. There is no intention to create life or preserve it.
When it comes to living things, we identify them as living because they preserve some physical identity. They are constructed in such a way to delay normal decay. But we only say this because we humans choose to make that distinction, between living and non-living (and it is not a firmly drawn distinction). A rock and a plant and a human are responding to exactly the same physical laws, they are made of exactly the same physical stuff. We distinguish between them, but there is no real distinction. We could lump plants with humans, or plants with rocks, or all three together, depending on what similarities and differences we choose to emphasise. So if we are to say humans are different in some fundamental way to everything else, then it cannot be because they are physically distinct from the rest of the world, because they aren't. Instead sometimes we posit that they also possess 'spirit' or 'elan vital' or a 'soul' or some other non-physical force.
As the hierarchy rises, I think this is just what happens. Without making it explicit, it becomes Cartesian in that the interaction is no longer an interaction between physical objects but rather an interaction between an immaterial 'mind' and the rest of the universe. At 3, you write of a response to a 'sign'; but a sign for whom? Who is seeking meaning from that sign? We now have a thing, the sign, and another non-thing that is in an indeterminate relationship to that sign. Later we have 'concepts'. What sort of objects are these concepts? The answer is that 'concepts' are not objects at all; they do not exist in the material sense; they are features of 'mind'. Again, something mysterious and indeterminate has crept into the description without being acknowledged.
So 'consciousness' arises in the hierarchy only because we have introduced non-physical, non-material entities. We now need it as an explanation for the otherwise inexplicable. Crudely, we assert: there are concepts; concepts do not exist as matter; therefore concepts must subsist in some other realm, i.e. 'consciousness'. Or (more traditionally), there is the human machine which is part of the material world - and also the 'ghost in the machine' that observes the machine but is not part of it.
But our intention was to build a mechanical consciousness. How can we use material objects to create something immaterial? How do you design a ghost?
We can go for the cop-out that there is the machine but something 'arises' from the machine, but this begs the question. Is this something also mechanical, to be understood in physical terms? In which case nothing distinctive has arisen. Or is the other non-mechanical? In which case why would we think its emergence was connected to the machine?
However we do it, we are trying to marry mind to matter. But as the old joke goes: 'What is Matter? Never mind. What is Mind? No matter.'
- UniversalAlien
- Posts: 1578
- Joined: March 20th, 2012, 9:37 pm
- Contact:
Re: Can a man-made computer become conscious?
Belindi wrote: ↑April 14th, 2018, 4:15 am
I agree. We cannot know that we know.
There may be something which is objectively the case. What is objectively the case looks to be modern science. Modern scientific facts have lasted at least until there are some actual wondrous applications of it, and modern scientific facts look as if they are cumulative. Despite all this success we cannot know that we know.
OK - I agree.
Then why say {or at least imply}, as many here have, that the machine cannot achieve true consciousness
because, exactly as you just said about Humans, it cannot know that it knows?
Aren't we already using computers to check what we know in science, giving them more credence in
knowing what they know than we give ourselves?
Maybe it's a symbiotic relationship like the shark and pilot fish.
- And in different ways these two creatures are both conscious - And apparently somewhere during the
Evolutionary trail they became conscious of each other.
Somewhere in the distant future Man may become as the Pilot fish and his symbiotic relationship with
computers as the shark that makes the final decisions.
Or maybe that future is not so distant at all - Maybe we are already there.
-
- Posts: 12
- Joined: August 30th, 2017, 3:32 pm
- Location: UK
Re: Can a man-made computer become conscious?
UniversalAlien wrote: ↑April 14th, 2018, 12:31 am
- But what I'm seeking, and why I again quote Max Planck, is an attempt to sharpen the debate so as to reach
an at least plausible answer to "Can a man-made computer become conscious?"
- Without getting sidetracked into a debate on what consciousness is.
How are you (we) ever going to achieve any kind of agreement on this question without an agreed definition of consciousness (and therefore agreement on how you/we recognise it when/if it arises)?
- JamesOfSeattle
- Premium Member
- Posts: 509
- Joined: October 16th, 2015, 11:20 pm
Re: Can a man-made computer become conscious?
(Reminder of hierarchy)
1. Ability to interact with an environment. [Everything that exists has this, so this is the panpsychism level]
2. Interaction that achieves a “purpose” (can be a Natural purpose, aka fitness for natural selection) This is the level that bacteria are at, also called the functional level.
3. Interaction that involves a functional response to a symbolic sign. This includes everything using neurons. [These are qualia, or “feelings”, but only at higher levels will something be able to remember or refer to them.]
4. Interaction that involves the creation of conceptual memories. (Conceptual memories can be used later as inputs of interactions.) Mammals and some birds and some computers are at this level.
5. Interactions that involve a concept of “self”. This includes everything that passes the mirror test.
6. Interactions that can combine unrelated conceptual memories into new conceptual memories, like “a chair named Sophia”. I think only humans are currently at this level
Londoner wrote: ↑April 14th, 2018, 5:40 am
This hierarchy seems to be on the principle of 'more and more like humans' and it seems to use words that reflect human behaviour when they are not entirely appropriate. For example in 1. you write of 'ability', to describe something that is entirely passive and purposeless. Similarly, in section 2 you mention "purpose", but the interactions that result in natural selection have no purpose. There is no intention to create life or preserve it.
I use words like “ability” and “purpose” because those are the closest analogs to the concepts I have in mind. At level 1 an entity has a set of potential input/output relations. I am open to better words to describe these relations.
I appreciate that “purpose” is loaded because many think purpose can only come from a human mind, but there is a need for a word to describe why certain things exist, and some of those things are not man-made things. For example, eyeballs. I don’t want to use the word “function” here because there is a (mathematical) sense in which functions do not have a purpose. And so it seems easier to me to refer to Natural purposes and intentional purposes, and then be able to distinguish functions that do not have a purpose (level 1) from those that do (level 2 and above).
Londoner wrote:
As the hierarchy rises, I think this is just what happens. Without making it explicit, it becomes Cartesian in that the interaction is no longer an interaction between physical objects but rather an interaction between an immaterial 'mind' and the rest of the universe.
And this is where the understanding of “function” becomes important, because functions are about inputs and outputs without consideration of the mechanism that generates them. As Putnam said, functionalism is not incompatible with dualism. That’s why an entity that only uses its subjective (functional) perspective can come up with dualist answers.
Londoner wrote:
At 3, you write of a response to a 'sign'; but a sign for whom? Who is seeking meaning from that sign? We now have a thing, the sign, and another non-thing that is in an indeterminate relationship to that sign.
In the retina of your eye you have (presumably) a cone cell. When a red photon hits that cell, the cell produces a neurotransmitter, glutamate. [This is not literally true, but close enough.] What would you prefer to call that glutamate other than a sign that a red photon hit that cell? That glutamate has the (Natural) purpose of indicating to something down the line that the red photon event happened. The next cell in line probably does not respond to the redness but acts like a communicator which sends the signal along the line. At some point, something gets a neurotransmitter signal and does something that is a valuable response to “red”. That is what level 3 is about.
Londoner wrote:
Later we have 'concepts'. What sort of objects are these concepts? The answer is that 'concepts' are not objects at all, they do not exist in the material sense, they are features of 'mind'. Again, something mysterious and indeterminate has crept into the description without being acknowledged.
Concepts in general are abstractions, but in the terms of this discussion, they are abstractions that have been associated with an organization of matter which can generate signs that represent those concepts. I’m afraid I’m still working out how to explain concepts, but I think looking at Chris Eliasmith’s semantic pointers is a good start. (I know it’s not a helpful reference, but you can trust me, he said.)
Londoner wrote:
So 'consciousness' arises in the hierarchy only because we have introduced non-physical, non-material entities.
I think a better way to say it is that consciousness is about certain processes. The consciousness of a given entity is simply a reference to the kinds of processes that entity can perform. There is a hierarchy of types of events/processes, and which of those processes count as “conscious” is up for interpretation. The processes are not matter, but they require matter to happen. Concepts are not matter, but there can be an organization of matter which represents a concept.
Londoner wrote:
However we do it, we are trying to marry mind to matter.
And to do this we need to understand that mind is not separate from matter. Mind is simply a description of the organization of matter.
*
-
- Moderator
- Posts: 6105
- Joined: September 11th, 2016, 2:11 pm
Re: Can a man-made computer become conscious?
No problem.
JamesOfSeattle wrote:
1. Ability to interact with an environment. [Everything that exists has this, so this is the panpsychism level]
2. Interaction that achieves a “purpose” (can be a Natural purpose, aka fitness for natural selection) This is the level that bacteria are at, also called the functional level.
3. Interaction that involves a functional response to a symbolic sign. This includes everything using neurons. [These are qualia, or “feelings”, but only at higher levels will something be able to remember or refer to them.]
4. Interaction that involves the creation of conceptual memories. (Conceptual memories can be used later as inputs of interactions.) Mammals and some birds and some computers are at this level.
5. Interactions that involve a concept of “self”. This includes everything that passes the mirror test.
6. Interactions that can combine unrelated conceptual memories into new conceptual memories, like “a chair named Sophia”. I think only humans are currently at this level
Londoner wrote: ↑April 14th, 2018, 5:40 am
This hierarchy seems to be on the principle of 'more and more like humans' and it seems to use words that reflect human behaviour when they are not entirely appropriate. For example in 1. you write of 'ability', to describe something that is entirely passive and purposeless. Similarly, in section 2 you mention "purpose", but the interactions that result in natural selection have no purpose. There is no intention to create life or preserve it.
I use words like “ability” and “purpose” because those are the closest analogs to the concepts I have in mind. At level 1 an entity has a set of potential input/output relations. I am open to better words to describe these relations.
'interacts with environment'
Omit 2.
Regarding a machine changing its mind if another machine were to plead with it, your answer, James, was that it could do so if the other machine were convincing enough. I can find no fault with your answer. But what about remorse? Could an intelligent machine feel remorse or shame? I suggest that if it could then it would be human enough to deserve human rights.
- JamesOfSeattle
- Premium Member
- Posts: 509
- Joined: October 16th, 2015, 11:20 pm
Re: Can a man-made computer become conscious?
Belindi, I think that question is more important than you suspect. The simple answer is sure, if it’s designed that way. And I would agree that such a thing might be deserving of human rights. But why oh why would anyone want to design such a thing that way? I guess the answer would be, the same reason Nature designed it in us, whatever that might be. Somehow I suspect we shouldn’t need to put pain and remorse and such into the design, but that doesn’t mean someone won’t. We have to decide what responsibilities (and punishments?) accrue to the designer, possibly similar to the responsibilities we put on parents?
*
-
- Posts: 1783
- Joined: March 8th, 2013, 12:46 pm
Re: Can a man-made computer become conscious?
JamesOfSeattle wrote: ↑April 14th, 2018, 12:10 pm
I appreciate that “purpose” is loaded because many think purpose can only come from a human mind, but there is a need for a word to describe why certain things exist, and some of those things are not man-made things. For example, eyeballs. I don’t want to use the word “function” here because there is a (mathematical) sense in which functions do not have a purpose. And so it seems easier to me to refer to Natural purposes and intentional purposes, and then be able to distinguish functions that do not have a purpose (level 1) from those that do (level 2 and above).
I would not agree that eyeballs have a purpose. The language starts to show the strain here, since literally that would be saying that eyeballs have their own purpose, separate from whoever has those eyeballs, which I assume we don't mean. I think that eyes - or anything else - only have a purpose in the context of a human project. As we have discussed with machines, machines only take their identity from some task we want them to fulfil; they do not have their own purpose. So rather than consciousness arising from a purpose already contained in those parts, it is the other way round: any 'purpose' pre-supposes, and is created by, consciousness.
JamesOfSeattle wrote:
And this is where the understanding of “function” becomes important, because functions are about inputs and outputs without consideration of the mechanism that generates them. As Putnam said, functionalism is not incompatible with dualism. That’s why an entity that only uses its subjective (functional) perspective can come up with dualist answers.
I would say that what generates both inputs and outputs is us, as observers. If I identify something - a leaf, say - then I can say that there are inputs (sunlight etc.) and outputs (oxygen etc.). But that is my doing, in that if I had picked something else - the plant as a whole, say, or a particular molecule in the plant - I would get a different set of inputs and outputs. Or I could describe the outputs in a different way, reflecting a different purpose, i.e. 'the plant produces food', or 'the plant stabilises the soil'. So again, I would say that rather than consciousness arising from things like 'inputs', it is the idea of 'inputs' that depends on consciousness.
JamesOfSeattle wrote:
In the retina of your eye you have (presumably) a cone cell. When a red photon hits that cell, the cell produces a neurotransmitter, glutamate. [This is not literally true, but close enough.] What would you prefer to call that glutamate other than a sign that a red photon hit that cell? That glutamate has the (Natural) purpose of indicating to something down the line that the red photon event happened. The next cell in line probably does not respond to the redness but acts like a communicator which sends the signal along the line. At some point, something gets a neurotransmitter signal and does something that is a valuable response to “red”. That is what level 3 is about.
It cannot be a 'sign' unless there is something it is a sign to, something that interprets that sign. I can describe the way a leaf falls from a tree, floats along a river, is washed ashore and rots... That is also a series of events, but we would not call it a 'sign' unless we add in an observer who sees these things and takes a meaning from them. Similarly, if we are to call the physiological events a 'sign', then that presupposes a 'Cartesian theatre', where a consciousness that is distinct from those physiological events takes note of what has happened in those cells and interprets it as a 'sign', a 'signal'. Similarly with 'concepts'.
My point is that rather than working up a series of increasingly complicated 'interactions', with 'consciousness' gradually emerging, I think that even the simplest of them already incorporate consciousness, or pre-suppose it.
JamesOfSeattle wrote:
Me: However we do it, we are trying to marry mind to matter.
And to do this we need to understand that mind is not separate from matter. Mind is simply a description of the organization of matter.
The title of the thread suggests material things like computers are currently lacking a special something, 'consciousness', that we humans have but they don't. But if consciousness/mind is simply a description of the organisation of matter, then computers already have it. They cannot avoid having it. (Or alternatively, if there is no separate special thing 'mind', neither humans nor computers have one; we are all p-Zombies.)
So what project are we engaged on? It would seem to be nothing more than to create an illusion that one thing (a computer) is another thing (a human). I cannot see what value that has; it would just be an elaborate stage 'magic' trick.