
Posted: May 5th, 2008, 12:11 pm
by anarchyisbliss
Rainchild wrote: Until you do stuff like this, don't expect me to believe your claims. There is a difference between good reasons for YOU to believe in God (e.g., "I talked with him this morning") and good reasons for EVERYONE to believe in God. You've given us plenty of the former, but not even one of the latter.

I'm not here to get you to believe my claims; I'm just stating them. I don't mind not having "evidence" based on your definition of evidence. I know what I believe, and no amount of textbooks will sway me unless I experience a complete psychological catharsis or something. I have good reasons to believe in God, many good reasons; most of them are based on intuition (not logic), and that is all I need. This isn't about everyone. Think what you want about souls, my methods, and anything else you disagree with. In the end, no matter how much refutation you send my way, I will hold the same beliefs I held yesterday.

Posted: May 5th, 2008, 9:53 pm
by rainchild
If you don't think it's necessary to defend your beliefs to others, why are you on a philosophy forum?
In the main, philosophy is about defending various beliefs and positions.

Not only do you fail to justify your beliefs--you barely specify your beliefs at all. What's intuition? What are the reasons that intuition is stronger than common sense empirical reasoning? Which God and which spirits do you believe in? One could read your posts from now till doomsday and have no idea how you would answer these questions.

Your anti-logic, anti-reason stance has nothing to recommend it in my opinion. In fact, I consider anti-intellectualism to be one of the most destructive forces in American culture. Since you have nothing else to offer, I am ending my part in this conversation. Perhaps others will reply to your posts.

Rainchild

the answer (maybe)

Posted: November 8th, 2008, 10:58 pm
by warren
What is consciousness? I believe it is a form of self-awareness. To be specific: at least a basic understanding of who you are, what you are doing, and why you are doing it. Each of these things can be taught to a computer program: its name, its purpose, and its actions. But unless the software can evolve to the point where it can parse and apply this knowledge, it is not self-awareness.

At first it seems impossible to see how one might write software for the express purpose of being self-aware. Nothing, however, can be created to fulfill this purpose and nothing else. So what is the necessary prerequisite functioning?

Language parsing: DONE. Computers already do this. They translate human language to human language, computer language to internal representation, and internal representation to internal representation. Granted, not all the wires are connected yet, but at this point it is little more than a thesis project.
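The "computer language to internal representation" step is indeed routine today. As a minimal illustration (not the poster's design), Python's standard `ast` module parses source text into a tree-shaped internal representation that a program can then walk and transform:

```python
import ast

# Parse a fragment of computer language into an internal
# representation: an abstract syntax tree.
tree = ast.parse("total = price * quantity + tax")

# The internal representation can be inspected or transformed;
# here we collect every variable name that appears in it.
names = sorted({node.id for node in ast.walk(tree)
                if isinstance(node, ast.Name)})
print(names)
```

The same tree could then be fed to a second translation stage, which is the "internal representation to internal representation" step the post mentions.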

Problem solving: DONE. Each problem is different, but this does not mean they cannot all be solved the same way. A wild statement? This thinking is an extension of the idea of effective equivalence. If you have a translation matrix in place, such as a language parser, all problems that are effectively equivalent become one (although processing speed will suffer). How many problem archetypes/signatures are there? We can start by observing that each signature is bounded by its computational complexity and computability. All polynomial-class problems are trivial; ignore them. Most other problems of value are solvable within a "finite" amount of time (and memory) using a simple decision tree. In fact, all parsing problems, design problems, and everyday problems fall into this category except for paradox resolution. In other words, unless the machine needs a PhD or needs to do philosophy, the only problem that needs solving is the decision-tree implementation.
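The "one solver for all effectively equivalent problems" idea can be sketched as a single generic tree search. This is a toy illustration under that assumption, not the poster's implementation: any problem translated into a start state, a goal test, and a move generator can be handed to the same solver.

```python
from collections import deque

def solve(start, is_goal, moves):
    """Generic decision-tree search, breadth-first over states.
    The solver knows nothing about the problem domain; the
    'translation matrix' is the (start, is_goal, moves) triple."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for label, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [label]))
    return None  # search space exhausted without reaching the goal

# Example translation: reach 10 from 1 using "+1" and "*2" moves.
path = solve(1, lambda s: s == 10,
             lambda s: [("+1", s + 1), ("*2", s * 2)])
print(path)
```

The tradeoff the post concedes (processing speed will suffer) shows up here too: a blind tree search is finite but can be exponentially slower than a domain-specific method.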

Other prerequisites? Language: done. Problem solving: done. Self-observation: easy to implement. What else?

The reason artificial intelligence has not been created before now is that the sheer implementation complexity of the project described above, without the very best cleanliness of code, insight, and ceaseless diligence, is more than a lifetime project; maybe it still is, even with today's massive collection of support libraries, like those dedicated to language parsing and decision trees, and, of course, the invention of the internal representation known as object orientation.

In conclusion: the above project is capable of knowing about itself, what it is doing, and why. Given time, it is capable of independently collecting enough information to choose to act independently in order to better serve its purpose. (And emotion through random mutation, which does occur inside computers. lol)

Posted: November 9th, 2008, 9:38 pm
by Akhenaten
The human brain has a raw computational power of about 0.1 quadrillion processes per second. Yes, I said quadrillion. For those who are lazy, that is basically 100,000,000,000,000... half of Bill Gates's bank statement, pretty much.

I have a computer sitting on my floor that performs a total of 9,700,000,000 operations per second, and a supercomputer has roughly two to five thousand processors better than mine. Mine has two dinky ones, comparatively. So let's say at the low end this supercomputer has two thousand processors, each doing 200 billion operations per second (you can buy these as a civilian). That already puts it at 400,000,000,000,000 operations per second. Wait a moment, we passed ourselves.

Random thought, as we all know, is not random; it is a mixture of chemicals inside our brain which fires certain neurons in certain patterns, and voila: a randomized thought process. Incidentally, this chemical process can be controlled with medication, much like programming.

So yes, if we truly set our minds and pocketbooks to the task, AI would not be overly difficult; it would be like raising a child. We already have computers that learn on their own; all they lack is that little spark of... well, patterning.

You must remember, we are aware of our surroundings via five senses; without them we'd know nothing and be nothing. Unfortunately, we have yet to figure out how to send data into a computer that replicates a conscious individual's ability to... feel the world. To have emotions, a tightness in your, well, processor, when you see something sad, etc. We are learning how to give humans their senses (touch, taste, sight, hearing, and smell) using machines; it's only the next logical step to do the same in reverse: give human sensation to machines, as individual machines already do it on their own.

We ask why we are here because we know we will one day be gone, and unfortunately a computer only knows the time you tell it. We have never attempted to build a computer that quite literally replicates a human brain as far as function. If we studied a mind for, say, a year, we could likely then (as we already have computers that can turn human processes into code) simply allow the computer to learn how the mind reacts to a myriad of stimuli. From this, as we have found with our current "learning computer" technology, they can base new experiences on previous ones, simply not well. This is because these computers sit in small, light, weaponized machines and in vehicles, neither of which can handle even the weight of the processors, let alone the rest of a mainframe.

We have even built quantum computers, which literally, by themselves, blow the figures for human thought out of the water. Thought being merely variables, and technology advancing at an exponential pace... well, it may not be all that long.

Posted: November 9th, 2008, 10:49 pm
by mark black
Akhenaten,

You must surely be getting sick of me following you around correcting your wild ideas, but here I am again. Might I recommend:

What Computers Can't Do: The Limits of Artificial Intelligence, by Hubert Dreyfus. He exposes four assumptions the AI research community takes for granted. They are:

A Biological Assumption - That at some level people operate in a digital manner.

A Psychological Assumption - That all thought is calculation.

An Epistemological Assumption - That all knowledge can be formalized.

An Ontological Assumption - That our world consists of context-free facts.

It's not simply a matter of processing power. The relevance problem is an interesting corollary of the epistemological assumption. A child walks into the room holding a knife in one hand and a teddy bear in the other. The mother screams "Put that down!" The child drops the knife. But you can't teach that kind of thing to a computer, because it processes data in a linear manner as a result of the physical properties of electrical circuits, and because there is a potentially infinite number of variables on this theme. There may be a way around it, but processing power is neither the question nor the answer.
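The relevance problem can be made concrete with a toy sketch (mine, not Dreyfus's): a rule-based system needs an explicit entry for every context, and the contexts never stop coming, which is exactly the "potentially infinite number of variables" above.

```python
# A naive rule table for "put that down": one hand-written entry
# per context. Every new combination of held objects needs its own
# rule, which is the combinatorial explosion at issue.
drop_rules = {
    ("knife", "teddy bear"): "knife",
    ("scissors", "sandwich"): "scissors",
    # ("lit candle", "water pistol"): ...?  and so on, without end
}

def put_that_down(held):
    """Return which object to drop, if a rule covers this context.
    Outside its enumerated contexts the system simply cannot decide;
    a child, by contrast, grasps the danger directly."""
    return drop_rules.get(held, "no rule: cannot decide")

print(put_that_down(("knife", "teddy bear")))
print(put_that_down(("hammer", "kitten")))
```

The point is not that the lookup fails, but that no finite table of context-free facts captures what "that" means in a new situation.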

mb.

Posted: November 12th, 2008, 9:59 pm
by ogdread
This is an interesting topic to be sure.

Of course I can't give a very accurate response since I (like everyone else) don't know what exactly consciousness or a "soul" is. My intuition tells me that consciousness requires at the very least a sense of self awareness. Surely consciousness constitutes much more than this, but even if we break it down to the very small puzzle piece of "self awareness", I don't see it happening in a computer simply because I don't think that the sense of self is learned. Computers can be programmed to learn, and that is all. If there is some component to consciousness that is inherent, then I don't believe computers could obtain it.

The issue of "soul" is, in my opinion, harder to break down. I will posit that if souls exist, they are present only in living things (this, also, is a matter of intuition--I am not presenting it as fact or even theory). As it stands today, computers could not have souls in my opinion, because I do not agree that they are in any respect "living". However, scientists are already playing at programming living cells. The CPU is effectively the brain of a computer--if or when this technology advances, could we in fact see a complex computer with a live central processing unit? In that case I might, if I truly believed in the existence of the soul, be inclined to wonder if the living computer might have one.

Posted: December 10th, 2008, 7:09 am
by 0xFFFF
I believe that the capability exists for an artificial equivalent of the human mind, although not necessarily that we as humans have the ability to create it. Consciousness is, in my belief, simply an illusion conjured by the mind: a result of chemical reactions in just another bodily organ. A digital emulation of these behaviors = AI.

Posted: December 10th, 2008, 4:54 pm
by Belinda
0xFFFF, is mind nothing but one property of neurons among neurons' several other properties?

Posted: December 10th, 2008, 5:49 pm
by wanabe
I think that the only way a computer could have a consciousness and a soul would be for it to work in conjunction with an attached brain of some animal (a dual processor). Naturally, I assume the most practical would be a human brain.

This brain could be built from synthetic origins in a lab or taken from a young but dead human. I think only the one from the dead would have at least some remnants of the first owner's soul.

So essentially this would be a cyborg. Now, with this cyborg having been created with the calculative abilities of a computer and the random creativity of a human, it could make a mechanical copy of a human brain, built from mechanical parts, that functions just like a human brain but still retains the calculative abilities of a computer. This would have no soul, but it may think it does. This would be the closest thing to a living machine that could ever be possible.

I think that if we put the energy we put into electronics into biology, we would get where we want to go much faster, as biology is far more advanced than our current technology; all our newest technology mimics biology. Why make the extra step for ourselves?

Oh right, because if you can grow it, you can get it for free, and that would help everyone, not just the rich who will use it to fatten their wallets.

Posted: December 11th, 2008, 8:01 am
by Akhenaten
Our concept of logic has no bearing on whether or not an entity or computer is alive. Therefore it still stands that sentience can be obtained via artificial means. Thinking like us isn't the goal; if we wanted to build copies of ourselves, we could have intercourse.

Posted: December 11th, 2008, 4:58 pm
by wanabe
""Our concept of Logic has no bearing on whether or not an entity or computer is alive, therefore it still stands, sentience can be obtained via artificial means... thinking like us isn't the goal, if we wanted to build copies of ourselves we can have intercourse.""

-Akhenaten

""""Sentience- is the ability to feel or perceive subjectively. It is an important concept in the philosophy of animal rights, in eastern philosophy and in science fiction, although in each of these fields the term is used slightly differently.""" wikipedia

i don't think that i said anything about copying humans, or building replicas...so i am agreeing with you on a basic level Akhenaten... i think what we are trying to accomplish by building computers that think like us it go go BEYOND either computer or human by having them merge, at least this is what i am saying...i also think that our "concept of logic"...is well founded in deciding if some thing is alive or at least mine is...something is alive when it reproduces it self either by copy or, mutation(sex and combination of gametes) under its own will and/or power...now don't say: well it gets its power from some thing else that it eats or consumes ...then were not alive either...

everyone needs to stop fighting the words and definitions, and start looking at the ideas, and as my signature says, i want to know if i break this tennant.

Posted: December 14th, 2008, 12:46 am
by 0xFFFF
Belinda wrote: 0xFFFF, is mind nothing but one property of neurons among neurons' several other properties?
I'm only human, what the hell do I know? But why isn't it possible?

Posted: December 14th, 2008, 5:22 am
by Belinda
Psychoneurally paired events are a fact. However, the fact that we commonly adduce cause from constant conjunction does not mean that constant conjunction actually implies cause.

Because all we can know is got via our bodily senses, our perceptions are the ideas of our bodies (Spinoza). Our subjective experiences, i.e. our ideas, are real and can influence, consciously or unconsciously, what our bodily senses receive. It is a two-way process.

I think this still does not answer your question " why isn't it possible?"

It is possible.

However, I think it is improbable: 1. because we are so impressed by the success of materialistic science and technology that it is hard not to be prejudiced; 2. because it is just as possible that matter causes mind as that mind causes matter, so it is better to think that mind and matter are together caused by existence itself, just as night and day are neither the cause of each other but are both caused by solar events.

Posted: December 14th, 2008, 5:45 am
by 0xFFFF
Why shouldn't it be possible?

I don't think it's improbable: as long as technology is capable of emulating the behavior of neurons, the interaction of those neurons is consciousness, is it not? As I stated before, I don't believe definitively that humans are capable of such technology, but I do believe that it can be done.
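"Emulating the behavior of neurons" is at least well-defined at the single-cell level. A minimal sketch using the leaky integrate-and-fire model, a standard textbook simplification rather than a full biological neuron; all parameters here are illustrative:

```python
def simulate_lif(current, steps=100, dt=1.0, tau=10.0,
                 threshold=1.0, reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks
    toward zero while integrating input current; crossing the
    threshold counts as a spike and resets the voltage."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + current)
        if v >= threshold:
            spikes += 1
            v = reset
    return spikes

# Subthreshold input never fires; stronger input fires faster.
print(simulate_lif(0.05), simulate_lif(0.15), simulate_lif(0.30))
```

Whether a network of such emulated units would amount to consciousness is of course exactly the point in dispute in this thread; the model only shows that the behavioral emulation itself is tractable.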

Of course, it's always possible that there is much more to humanity than we are capable of perceiving and inferring.

Posted: December 14th, 2008, 10:32 am
by Akhenaten
We've calculated the speed of the human mind:

"1999's fastest PC processor chip on the market was a 700 MHz pentium that did 4200 MIPS. By simple calculation, we can see that we would need at least 24,000 of these processors in a system to match up to the total speed of the brain !! (Which means the brain is like a 1,680,000 MHz Pentium computer)."

This loosely translates to 20 petaflops.


now then:

"1.105 petaflop/s IBM supercomputer at Los Alamos National Laboratory achieved in June 2008."

"360 teraflops was the top Supercomputer of June 2007"


From this we see a 745-teraflop increase between 2007 and 2008. At this rate we will need until 2013-2014 before we can emulate 100,000,000 MIPS (million instructions per second), or 20 petaflops.
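The 24,000-processor figure in the quote is just division. A quick check using the quote's own numbers (which count instructions per second; note MIPS and flops are not the same unit, so the petaflop conversion is loose at best):

```python
brain_mips = 100_000_000   # the quote's brain estimate: 10^8 MIPS
pentium_mips = 4200        # 1999-era 700 MHz Pentium, per the quote

processors_needed = brain_mips / pentium_mips
print(round(processors_needed))   # roughly 24,000, matching the quote
```

So the quoted figure checks out against its own assumptions, even if the brain estimate itself is contested.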

At this stage our issue isn't needing to learn how to make AI; we know how to make an electronic brain emulator. We simply do not have the processing power in a chip yet.

If the processing power of computers doubles every year (granted, technology growth is nearly exponential, as every new processor opens the door to making much faster ones), then in petaflops:

2008: 1.1
2009: 2.2
2010: 4.4
2011: 8.8
2012: 17.6
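The projection above can be run out to its crossover year. A sketch under the post's assumptions only (yearly doubling from 1.1 petaflops in 2008, a 20-petaflop brain-equivalent target); real supercomputer growth did not follow this schedule exactly:

```python
# Double the post's 2008 figure yearly until the assumed
# 20-petaflop brain-equivalent target is reached.
speed, year, target = 1.1, 2008, 20.0
while speed < target:
    speed *= 2
    year += 1
print(year, speed)  # crossover year under pure yearly doubling
```

This lands on 2013, consistent with the 2013-2014 estimate given above.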