Can a man-made computer become conscious?

Discuss any topics related to metaphysics (the philosophical study of the principles of reality) or epistemology (the philosophical study of knowledge) in this forum.

Post Number:#61  Postby wanabe » January 7th, 2009, 2:03 am

Increasing processing power does not mean becoming conscious. Given what I know of your writing style, I think you know this. Increasing processing power would be an initial step (I think that is your point) to show it is becoming more likely that a computer could be conscious.

My only question is: what are the conversions from MHz to teraflops and petaflops?
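For what it's worth, MHz and FLOPS don't convert directly: MHz counts clock cycles per second, while FLOPS counts floating-point operations per second, so the conversion also depends on how many cores there are and how many floating-point operations each core completes per cycle. A minimal sketch of the relationship (the chip figures below are invented purely for illustration):

```python
# MHz alone cannot be converted to FLOPS: you also need the core count
# and the floating-point operations completed per cycle.
# The example numbers below are illustrative, not from any real machine.

def flops(clock_mhz, cores, flops_per_cycle):
    """Peak FLOPS = clock rate (Hz) * cores * FP operations per cycle."""
    return clock_mhz * 1e6 * cores * flops_per_cycle

TERA = 1e12
PETA = 1e15

# A hypothetical 3000 MHz chip with 4 cores doing 4 ops/cycle:
peak = flops(3000, 4, 4)
print(peak / TERA)  # 0.048 teraflops of peak throughput
```

Note this gives theoretical peak only; sustained petaflops figures like those quoted for supercomputers come from benchmarks, not from clock rate alone.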

We will have to have some direct biological influence (cybernetics) to make a computer conscious and better than the human mind, and especially for it to have a soul.
Secret To Eternal Life: Live Life To The Fullest, Help All Others To Do So. Meaning Of Life Is Choice. Increase choice through direct perception. Golden rule + universality principle + promote benefits minus harm + logical consistency = morality. BeTheChange.
wanabe
Moderator
Posts: 3385
Joined: November 24th, 2008, 5:12 am
Location: UBIQUITY
Favorite Philosopher: Gandhi


Post Number:#62  Postby Belinda » January 7th, 2009, 7:06 am

OXFFF, even though AI to the human standard of sapience is possible, AI does not imply sentience, does it? Can a machine ever be a subject?
Belinda
Contributor
Posts: 13865
Joined: July 10th, 2008, 7:02 pm
Location: UK

Post Number:#63  Postby Akhenaten » January 8th, 2009, 11:29 am

We currently sit at around 2.13 petaflops of processing power, in the supercomputer at the military installation Dragon Slayer (declassified information). We need 20 petaflops for the raw computational power. Roughly 10 years.

You're correct, I don't believe it's nearly that simple, and my response had been to Mark Black. =p Concerning the assumption that making assumptions is an aspect of sentience, and not simply a by-product of the human mind: the concept of pure logic, or Stoicism, is very old.
DISCLAIMER: This document does not cover all individuals in the infinite and variable universe. It is in no way speaking on cases that are incredible, random, or at odds of more than 1 : Pi against probability.
Akhenaten
Posts: 209
Joined: August 29th, 2008, 6:22 pm

Post Number:#64  Postby mz » February 5th, 2009, 2:24 am

I admit I mostly skimmed the posts above, but I'll leave my own interjection (and apologize if it closely resembles an argument already stated).

I have to wonder whether human reason and creativity are actually functions of the soul, or just further functions of a highly advanced brain.

I sometimes like to think of god as a gigantic entity of pure imagination, with each of us bestowed with a little piece of god that allows us to imagine and innovate as well.

But I've also pondered the power of technology if this were not the case, and human reason were replicable.

I've been working on a short story about a computer called the "Super-Philosopher", named Lenora. This computer is programmed with a very basic system of digits known as the "Lenorian code", allowing it to think like a human. (I've wondered a few times whether there could be such a simple code or system, one so simple that nobody has ever considered or even thought of it.)

I have also wondered whether it would be possible to develop a system for turning language and words into plottable points. Each sentence and phrase would be plotted on a graph by the computer, and as it makes all these new lines, it could perform certain functions like "averaging" lines together to draw conclusions and whatnot.

As the computer compiles this massive database of information and works it into its internal plotting system, it begins to be able to draw huge, deeply philosophical conclusions from all the information it has collected, possibly by finding trends.

I tried to develop such a system, quickly discovered it would take far too long, and I'm a rather impatient person, hah.
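For what it's worth, the plotting idea above can be sketched in a few lines. This is only a toy illustration: the two-dimensional "meaning space" and the word coordinates are entirely made up, and real systems learn their embeddings rather than hand-assigning points.

```python
# A toy sketch of the idea: plot sentences as points, then "average"
# two points to draw a crude conclusion. The vocabulary and its
# coordinates are invented for illustration only.

# Hypothetical 2-D "meaning space": (concreteness, positivity)
word_coords = {
    "machines": (0.9, 0.0),
    "think":    (0.2, 0.3),
    "souls":    (0.1, 0.6),
    "exist":    (0.3, 0.1),
    "humans":   (0.8, 0.4),
}

def sentence_point(sentence):
    """Plot a sentence as the mean of its known words' coordinates."""
    pts = [word_coords[w] for w in sentence.lower().split() if w in word_coords]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def average_lines(p1, p2):
    """'Average' two sentence-points into a single conclusion-point."""
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)

a = sentence_point("machines think")
b = sentence_point("humans exist")
print(average_lines(a, b))
```

The hard part mz ran into is exactly the part this skips: choosing coordinates so that nearby points actually mean similar things.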

Though, my crazy philosopher's dream would be to initiate a terminus loop (a term I made up for when something in the future becomes its own cause in the past, initiated by suggestion) to create an interdimensional, infinitely powerful super-philosopher-computer, using the entire universe as its calculating field, equipped with the Lenorian code. Imagine that: an infinitely powerful, infinitely wise, omniscient being. Sounds kind of like god to me.

I would be the single most important person who ever lived, *cackles maniacally* :lol:
I live my life in a dream; the constant threat of a rude awakening keeps me on my toes.
-Mettley Zimmer
mz
Posts: 48
Joined: February 2nd, 2009, 6:15 pm
Location: lost in thought

Post Number:#65  Postby system-hater » February 6th, 2009, 10:15 pm

It is not unrealistic to imagine a world populated by mechanical organisms and robotic lifeforms. What is unrealistic is that we anthropomorphize the elements of technology. If we have reached an age when we attribute human virtues and characteristics to machines, we have obviously forgotten and altogether dismissed any and all residual logic we have not already given up to the technological systems. Granted, machines dictating societies on a mass scale is not only possible but imminent. However, I find it shameful and repugnant that we waste our time mulling over whether or not non-biological entities can be examined as artificial human beings. Have we entirely forgone our ability to reject such sophomoric possibilities? And even if they are possible, have we fallen into such a state of resignation and pacifism that we have simply accepted it as inevitable? Brothers, we are human beings; our emotions, feelings, thoughts, desires, and urges all belong to us. Do not entrust the beauty of your intellect to the evil hands of technology.
system-hater
Posts: 17
Joined: February 6th, 2009, 3:48 pm

Post Number:#66  Postby mz » February 6th, 2009, 10:31 pm

system-hater wrote:
Brothers, we are human beings, our emotions, feelings, thoughts, desires, urges, all belong to us. Do not entrust the beauty of your intellect into the evil hands of technology.


But it is in technology where we can reach perfection! Just as technology can be evil, it can also be beautiful.

Actually, I do agree with you to an extent: the human mind is a very unique and irreplicable thing.

But I still wonder what marvels could come from creating "innovating" and "creative" super-computers.

..Though I wonder, maybe it is in our flaws that actually allow us to be creative, innovative, and curious. Understandings only mortals like us could reach, but completely outside the comprehension of immortals.

Post Number:#67  Postby system-hater » February 7th, 2009, 1:46 am

mz wrote:
But it is in technology where we can reach perfection! Just as technology can be evil, it can also be beautiful. [...]

Therein lies the crisis of human character, MZ. True, we succumb to our base undertones of curiosity and therefore harbor the propensity for "pervasive creation". What we are should not be vilified or negated. However, it is important to understand that while we are inventors by nature, it is the nefariousness of the system that has influenced, and will continue to influence and fuel, our desire to create. This is dangerous, you see. When we trek a path of excessive acquisition and achievement, we overlook the problems it causes. The needs of man have always been food, water, and shelter. Technology has devalued those necessities into nothing more than triviality. Moreover, we have been given "artificial activities" to pursue and further immerse ourselves in, which manufactures our lives into exactly what we are discussing: machines. If we allow ourselves to become enamored of this artificial species, we will have relinquished our individuality completely and will have no connection whatsoever to our intrinsic nature.

Post Number:#68  Postby mz » February 7th, 2009, 1:57 am

I believe technology has advanced far beyond our civilization's ability to use it responsibly. We are simply not mature enough as a race, and I can't help but worry that we are going to obliterate ourselves before people begin to learn.

But you're right, as long as people are corruptible, and technology is in the hands of people, then technology will also be corruptible.

I don't think technology is the problem here...
it's the people!

I think I know how to solve this...
*goes out and builds a gigantic self-replicating robot army to destroy humanity*

Post Number:#69  Postby loudthoughts » February 7th, 2009, 9:04 pm

My answer to the original question (whether we can create a computer that has consciousness and a soul) is sort of YES, but more complicated than that.

First of all, I don't believe in humans having "souls". That is a very nebulous concept that has no evidence in science, knowledge, or fact. The reason we are so attracted to the idea of a soul is that we don't know why our notion of self feels separate from the physical world. I feel there is a 'me' and there is my body. The 'me' is unverifiable and ungrounded in physical reality, and is only associated with my body, or at least that is how it feels. Just because I cannot explain it fully doesn't mean I should resort to a concept that has as little basis in verifiable reality as that of the Christian God.

Second, the computers we have today could be considered to have artificial consciousness, just at a more basic level. Being aware of surroundings is not difficult. We can construct a machine that is much more aware than a human body is, using technology we have today. Being aware of your self as an entity is something quite different, and I don't think it is the factor we need to look at when talking about designing artificial beings. A tree has no self-awareness, but is considered alive. If a human is considered to have a soul, then what about a plant, or any creature with a level of consciousness in between?

Third, our general concept of "alive" is flawed. While using the term is useful in biology as a method of classification, it does not make sense at a fundamental level to call something alive or not alive.

As far as being able to reproduce goes, this qualifying factor for being alive is based on the evolutionary tendency of life on earth. But what if there were a creature that could live forever, adapting to the environment as it changed? Would it not be considered alive, even though it wouldn't have to reproduce to continue the life of its species? Or look at a mule, the sterile offspring of a donkey and a horse, and tell me it's not alive because it will never reproduce.

Our notion of alive also includes the biological idea of cells. However, this is only one type of life, and there is a HUGE variation in cells' function, nature, and composition. I believe it is possible to come up with a different technological unit that could be the underlying basis for a body with just as much functionality as bodies of living creatures on earth.

Living things can be dormant for long periods of time (few- or one-celled organisms trapped in ice for thousands of years) and then become functional when the environment allows. This happens all the time in emergency rooms when a patient's heart stops, and their bodies are brought back to functionality by CPR or defibrillator. Are these patients not alive?

Some say living creatures must be able to adapt to their environment, that they must have a will to survive. This will to live can be programmed into a computer. But no living creature is able to adapt so perfectly to its environment that it lives forever; each is instead bound by its physical constitution and genetic endowment as to how well it can survive. So some beings are able to survive longer, some shorter, and some don't make it from birth.

Another thing people think of when they consider living vs nonliving is the complexity of the organism. However, when you look back to when the first living creatures were forming on earth, 3.5 billion years ago, they started out as molecules that replicated themselves. Were those molecules, in their incredible simplicity compared to organisms, or even cells, not alive? Where along the timeline from simple to complicated can you distinguish: THERE is where it became alive and before then they were all not alive?

My fourth point is a belief I have. I believe that "life" as we incorrectly label it, is simply organization of matter. The keyboard on which I am typing has just as strong a claim to being alive as I do. I come to this conclusion because I don't believe in distinguishing living from non-living, as there is no logical distinction. This point will become especially relevant in the next century or so when we are capable of creating an artificially intelligent being. People argue about whether animals should have rights now, but when we create things that are potentially more intelligent than humans, the question will be debated a great deal more. I think we will have to expand our notion of alive, if not eliminate it and replace it with a more appropriate distinction.
loudthoughts
Posts: 13
Joined: February 2nd, 2009, 1:39 am
Location: http://www.squidoo.com/problem-with-life

Post Number:#70  Postby loudthoughts » February 7th, 2009, 10:34 pm

The idea I'm about to tell you I did not come up with. I watched a video of a neurologist who made these arguments. I would recommend you watch it too, it is very interesting:

Jeff Hawkins' talk on how brain science will change computing, on TED's website

A NOTE ON OUR CONCEPT OF INTELLIGENCE:

Our idea of intelligence has been, in the fields of psychology and neurology, mainly based on behavior. This, in my opinion, is the wrong way to look at it.

If you look at an alligator, which as a reptile has an "old" evolutionary brain, and study its behavior, you would have to conclude it is a very complex being intellectually. It has survived very well for millions of years and has complex behaviors. However, we would never consider an alligator as having anywhere close to human intelligence; indeed, compared to most other animals, alligators are rather stupid.

More relevantly, a computer could mimic, to a tee, the exact behavior that a human has, but we wouldn't necessarily consider that intelligence as it would not necessarily have understanding.

I believe our view of intelligence should shift from being based on behavior to being based on memory and prediction.

Mammals' brains are more sophisticated than reptiles' brains because mammals have what is called the cortex added on top of the "old" brain. Humans have a frontal cortex, which came about when evolution copied one cortex and added on another, giving us our complex social nature, linguistic capability, and highly advanced motor performance.

What happens is that all sensory information coming into the brain passes through the old brain and becomes compartmentalized in the newer portions that humans have. The cortex basically works on memorizing all that comes in through the senses, with great detail and distinction. Then, from moment to moment, our brain is constantly making predictions based on these memories.

Let's say someone were to move the door handle on the front door of your house just a few inches to the right while you were away. The next time you go to the door, you will immediately know that something is wrong with the door. This is not because you saw the door and went through, in your head, all the possible things about the door that could be amiss, and eventually in the long list contemplated where the handle was supposed to be. No, your brain has stored memories of entering the doorway, and as you approached the door, your brain was making predictions about what was going to happen this time based on those memories.

This way of thinking about the brain is by no means all-inclusive, but I think it is a more accurate framework when thinking about intelligence.
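The door-handle example can be sketched as a toy program: a memory that stores which events have followed which, and flags surprise when a new observation breaks the prediction. Everything below (the event names, the structure) is invented for illustration and is nothing like a real cortical model.

```python
# A toy sketch of the memory-and-prediction view: memorize sequences
# of sensory events, then flag surprise when reality disagrees with
# what memory predicts (the moved door handle).

from collections import defaultdict

class SequenceMemory:
    def __init__(self):
        # event -> set of events previously observed to follow it
        self.next_after = defaultdict(set)

    def memorize(self, events):
        """Store each adjacent pair of events in the sequence."""
        for a, b in zip(events, events[1:]):
            self.next_after[a].add(b)

    def surprised_by(self, event, following):
        """True if 'following' was never seen after 'event' before."""
        return following not in self.next_after[event]

brain = SequenceMemory()
# Years of walking through the front door:
brain.memorize(["approach door", "handle on left", "push", "inside"])

print(brain.surprised_by("approach door", "handle on left"))   # False: as predicted
print(brain.surprised_by("approach door", "handle on right"))  # True: something is wrong
```

The point of the sketch is that no list of "things that could be amiss" is ever consulted; the mismatch between prediction and input does all the work.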

NOW, to get back on topic:

Keeping this in mind, I think it is entirely reasonable to believe we will be able to create an artificially intelligent thing in the near future (meaning within say a hundred years). I think it would be a simple question of how soon we will have the technology capable of such a huge memory-based system that can then intuitively make live, constant predictions.

There is a team of biological computer scientists (I can't remember the actual title of their field) currently working with the most capable, vast computer in the world to recreate part of a rat brain. They have actually accomplished this with a very small portion of the brain, reproducing its neurological behavior when given life-like stimuli. If their research and progress continue at the current pace, they will have recreated the entire rat brain in digital form within the next decade. From there they would attempt to attach the computer brain to a robot that functions very much like a real rat, to study its behavior and nature.

This does not prove or disprove the idea that an artificial being would have consciousness or a soul, but it helps put into perspective how close we are to having to start answering our questions about the rights of the beings we create.

Post Number:#71  Postby Belinda » February 8th, 2009, 5:48 am

Loudthoughts, your account of AI, the right to be called 'alive', definitions of life, etc. is satisfyingly factual and objective.

Arguments for AI being conscious lack mention of one of the important attributes of consciousness, i.e. sentience. The ability to feel pleasure, pain, and the whole range of feelings that, with the addition of input from the cortex, stem from pleasure and pain is not at present within the capability of any artefacts. I don't think so, anyway.

When sentience is a property of an artefact, then the more empathic among us will allow that it has consciousness and accord it rights.

Animals other than the human already have rights and will soon be accorded more, especially the great apes who are similar to humans in genetic structure and in intelligence.

Post Number:#72  Postby loudthoughts » February 8th, 2009, 2:51 pm

Belinda:
Arguments for AI being conscious lack mention of one of the important attributes of consciousness i.e. sentience. The ability to feel pleasure, pain and all the range of feelings that with the addition of input from the cortex stem from pleasure and pain, is not at present within the capability of any artefacts.I don't think so, anyway.


If I were to have anesthesia injected into all areas of my skin, I would be unable to feel anything, not pain or pleasure or anything in between. However, I would still be considered a conscious being, and a sentient one. This is because I have a system of motivation instilled in my brain that tells me when something is (or feels) good or bad. Such a system could be programmed into a computer, which could result in a very simple but real feeling of sickness (malaise) or pain for the computer.

In my view, the difference between consciousness and sentience is that sentience requires feelings of good/bad, and consciousness requires only that you sense the environment, in one or more ways, and can distinguish your self from that environment.

Since it is the organization of matter, and not the matter itself, that gives rise to "living" qualities, the computers we have built thus far have living qualities, and they will become more and more conscious as we develop AI. Then they will begin to have sentience, as their programming becomes increasingly self-centered and capable of feeling good/bad through motivational systems much like those in brains.
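A minimal sketch of the kind of motivational system described above. The sensor names, drives, and thresholds are all invented for illustration; this shows only the idea of a programmed good/bad signal, not any real architecture:

```python
# A toy "motivational system": internal drives map sensor readings to
# a felt value, with negative numbers playing the role of pain or
# malaise. All sensors and thresholds here are invented.

class Motivation:
    def __init__(self):
        self.drives = {
            "battery": lambda level: level - 0.2,        # low charge feels bad
            "temperature": lambda t: -abs(t - 40) / 40,  # far from 40 feels bad
        }

    def feeling(self, sensors):
        """Sum the drives into one crude good/bad signal."""
        return sum(fn(sensors[name]) for name, fn in self.drives.items())

    def act(self, sensors):
        """Seek relief when the overall feeling is negative."""
        return "seek_relief" if self.feeling(sensors) < 0 else "carry_on"

m = Motivation()
print(m.act({"battery": 0.9, "temperature": 40}))   # comfortable state
print(m.act({"battery": 0.05, "temperature": 70}))  # "hurting" state
```

Whether such a signal would constitute feeling, rather than merely representing it, is of course exactly the question this thread is debating.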

Post Number:#73  Postby Belinda » February 8th, 2009, 7:50 pm

I think that sentience and sapience are both contents (properties) of consciousness. Obviously computers are sapient.

I too believe that once computers, robots, or any other artefacts have properties of sentience as well as sapience, then they will be conscious, although R&D has a long way to go before artefacts possess mirror neurons.

loudthoughts wrote:
If I were to have anesthesia injected into all areas of my skin, I would be unable to feel anything, not pain or pleasure, or anything in between

That is not actually the case. You could still feel the positioning of your body in space and the various positions of your limbs. You could also see, hear, and smell as well as before your skin was anaesthetised.
If, however, you had deep enough general anaesthesia, you would not be conscious.

Post Number:#74  Postby Panoptimist » February 10th, 2009, 8:32 pm

selfless wrote: Quantum computers are just as schizophrenic as our minds are. With some proper modeling, and "raising" them to recognize commonality in anomalies by comparison with past stored memory data, there may be a possibility that these types of computers could take on human-style consciousness, which believes it thinks independently and can know what it doesn't really know.


You cannot know what you do not know. Knowledge is not reabsorbed into knowledge.


And I cannot subscribe to any sort of fundamentalism. Analytic philosophy is undeniably tainted with the double insecurity possessed by those who inherit the sciences.
Panoptimist
Posts: 3
Joined: November 10th, 2008, 11:18 pm

Post Number:#75  Postby xtropx » February 11th, 2009, 5:13 pm

It's a three-part question. What is consciousness? Can you put it in a machine? And if you did, how could you ever know for sure?

Consciousness — AWARENESS — is truly in the eye of the beholder. I know I am conscious. But how do I know that you are?

This is the primary problem.

Could it be that my colleagues, my friends, all the people I see on the streets are actually just mindless automatons who merely act as if they were conscious human beings?

That would make this question moot.

Through logical analogy — I am a conscious human being, and therefore you as a human being are also likely to be conscious — I conclude I am PROBABLY not the only conscious being in a world of biological puppets. Extend the question of consciousness to other creatures, and uncertainty grows. Is a dog conscious? A turtle? A fly? An elm? A rock?

We don't have the mythical consciousness meter. All we have directly to go on is behavior.

So without even a rudimentary understanding of what consciousness is, the idea of instilling it into a machine — or understanding how a machine might evolve consciousness — becomes almost unfathomable.

Physical Limitations:

The human brain has about 10^12 neurons, and each neuron makes about 10^3 connections (synapses) with other neurons on average, for a total of 10^15 synapses. In artificial neural networks, a synapse can be simulated using a floating-point number, which requires 4 bytes of memory to represent in a computer. As a consequence, simulating 10^15 synapses requires a total of 4 * 10^15 bytes (4 million gigabytes). Let us say that to simulate the whole human brain we need 8 million gigabytes, including the auxiliary variables for storing neuron outputs and other internal brain states.

That is a lot of GB! :shock:
Note: Current high-end desktop computer memory limitations: 12GB RAM (memory) & ~ 12TB data storage.
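The arithmetic above checks out; a quick back-of-the-envelope script using the post's own figures:

```python
# Back-of-the-envelope check of the brain-simulation memory estimate.
neurons = 10**12
synapses_per_neuron = 10**3
bytes_per_synapse = 4                        # one 32-bit float per weight

synapses = neurons * synapses_per_neuron     # 10^15 synapses
weight_bytes = synapses * bytes_per_synapse  # 4 * 10^15 bytes

GB = 10**9
print(weight_bytes / GB)  # 4,000,000 GB for the synaptic weights alone
```

Doubling that for neuron outputs and other internal state gives the 8 million GB quoted above; roughly five orders of magnitude beyond the desktop figures in the note.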

If humans are ever going to figure out how to instill a computer/machine/robot with consciousness, we must first come to understand our own consciousness better.

Artificial intelligence is slightly different.

A computer becoming INTELLIGENT is far more plausible, and through EMERGENCE:

"In philosophy, systems theory and science, emergence is the way complex systems and patterns arise out of a multiplicity of relatively simple interactions. Emergence is central to the theories of integrative levels and of complex systems."


...perhaps consciousness will follow, but the problem still remains: How will we know?

Sources:
http://en.wikipedia.org/wiki/Emergence
http://www.nytimes.com/2003/11/11/scien ... r=USERLAND
http://www.channon.net/alastair/msc/adc_msc.pdf
xtropx
Posts: 9
Joined: January 28th, 2009, 3:03 pm
