
Re: Can a man-made computer become conscious?

Posted: April 16th, 2018, 6:47 am
by Belindi
Jan Sand wrote: April 16th, 2018, 1:47 am There is an interesting item at https://hardware.slashdot.org/story/18/ ... personhood wherein legal arguments are being made for considering robots persons under the law, liable to be sued for damages. Since a robot at the moment has no personal income, nor can be made to suffer through imprisonment, I can only place this under the general category of human insanity. It's like suing a gun for killing somebody, or a bridge because it has collapsed. No doubt it would be a field day for lawyers, but even if a robot gains some kind of consciousness it remains a most peculiar concept.
Is the problem whether the intelligent machine and the man are different in kind or in degree? Or in numbers of units of personhood?

Re: Can a man-made computer become conscious?

Posted: April 16th, 2018, 8:47 am
by Jan Sand
Intelligence itself is a very tricky concept. There is an interesting article at https://tech.slashdot.org/story/18/04/1 ... innovation which indicates that innovation can be more productive because AI offers a huge increase in the fields of observation, at a rate that almost no human analyst can match, and I suspect that one of the prime elements in intellect is the breadth of pattern similarities covered in any search for originality in innovation. Human minds may have some similarities to digital operations, but the organic mind seems to me quite different from the way a digital machine works, both in degree and in what might be understood as personhood. At some point in almost anything, quantity differences can become quality differences, and social interactions cannot be simplified into comparisons of quantity. Business in general frequently uses people as if they were machines, and although work quality and speed can be comparable, a human has needs and responsibilities that a machine not only does not but cannot have.

Re: Can a man-made computer become conscious?

Posted: April 16th, 2018, 12:15 pm
by Belindi
Jan Sand wrote: April 16th, 2018, 8:47 am Intelligence itself is a very tricky concept. There is an interesting article at https://tech.slashdot.org/story/18/04/1 ... innovation which indicates that innovation can be more productive because AI offers a huge increase in the fields of observation, at a rate that almost no human analyst can match, and I suspect that one of the prime elements in intellect is the breadth of pattern similarities covered in any search for originality in innovation. Human minds may have some similarities to digital operations, but the organic mind seems to me quite different from the way a digital machine works, both in degree and in what might be understood as personhood. At some point in almost anything, quantity differences can become quality differences, and social interactions cannot be simplified into comparisons of quantity. Business in general frequently uses people as if they were machines, and although work quality and speed can be comparable, a human has needs and responsibilities that a machine not only does not but cannot have.
I agree. If we are both right, then there is some point on the scale of intelligence at which we must arbitrate whether the individual is or is not a person. What are the criteria for establishing that point?

Talk about qualia seems to get nowhere. I believe that tests for personhood should involve both the subjective feelings of the individual concerned and objective criteria about the individual's central nervous system, or its analogue in a silicon machine. Please note I used "and", not "or".

Both symptoms and signs are used for medical diagnoses and that's been a successful method for diagnosing diseases and lesions.

Re: Can a man-made computer become conscious?

Posted: April 16th, 2018, 12:27 pm
by Jan Sand
Being a person involves important factors independent of intelligence, as the recent US presidential election clearly indicated. It is deeply involved with civic responsibilities, and if a machine is given personhood, should it not also be granted citizenship? Could robot soldiers rise in rank to become officers commanding human troops? Things get pretty weird.

Re: Can a man-made computer become conscious?

Posted: April 16th, 2018, 12:56 pm
by Belindi
Jan Sand wrote: April 16th, 2018, 12:27 pm Being a person involves important factors independent of intelligence, as the recent US presidential election clearly indicated. It is deeply involved with civic responsibilities, and if a machine is given personhood, should it not also be granted citizenship? Could robot soldiers rise in rank to become officers commanding human troops? Things get pretty weird.

Weird, yes. However, the word 'robots' would either become a racist slur, or 'robots' would connote a status equal to that of biological persons. After all, there are biological persons who should be deprived of civil liberties. There are some horrible high-status biological persons. I'd rather have a moral and learning-capable robot than Bashar al-Assad or some Mafia gangmaster.

Re: Can a man-made computer become conscious?

Posted: April 16th, 2018, 9:25 pm
by Jan Sand
That living things contain intensities of love and hate to mingle with their functionalities is like a tree with roots in soil where nourishment must be sought in the gifts of histories of feeling deeply of this miracle of planetary wonders. The tin and plastic toys that clatter and destroy the greeneries of living things may evolve in their own simplicities but there seems to me something vital missing, a sense of alien invasion rather insectile and vitally insensitive and false. No doubt I am prejudiced and would prefer that humanity would change and improve and open their minds and eyes to its own possibilities instead of investing a kind of strange sexual delight in invoking misery and cruelty and wild love of destruction. I maintain this sense that something is going terribly wrong.

Re: Can a man-made computer become conscious?

Posted: April 16th, 2018, 10:22 pm
by Sy Borg
Jan Sand wrote: April 16th, 2018, 5:17 am
Greta wrote: April 16th, 2018, 3:49 am
I'm inclined to agree, Jan. Like the "female" AI being given greater rights in Saudi Arabia than women.

There may come a time when a learning machine experiences meaningful qualia, but how would we know? After all, we spent centuries wreaking all manner of havoc on other species based on the false belief that they were neither conscious nor capable of truly experiencing pain. Hopefully some AI equivalent to neuroscience will help.

There might be a touch of self preservation involved too. If they become sentient, one would hope that we have not treated them badly!
The interesting aspect of this problem is to examine the whole dynamic of the criminal justice system from ancient times into current practice, and to understand the responsibilities behind criminal behavior. If a man steals money or goods because society has not given him the possibility to feed or clothe or provide vital medical care for himself or his family, who is responsible for the crime? If a person has become so warped in mind because of ill treatment as a child, or because of an education that gave him or her a gross misunderstanding of personal rights and responsibilities, who or what should be held liable for criminal reprisal? This is a fundamental problem for society to solve, and obviously, since proper justice throughout all societies is most often difficult or impossible to obtain, where is responsibility to be placed? Established society has never really faced or solved this extensive problem.
At least we acknowledge the prisoners' sentience, although I take your point that the way we assess webs of causation tends to be superficial, due to complexity that we cannot handle. If AI can work through that complexity it could revolutionise the criminal justice system.

Re: Can a man-made computer become conscious?

Posted: April 17th, 2018, 5:55 am
by Belindi
Jan Sand wrote: April 16th, 2018, 9:25 pm That living things contain intensities of love and hate to mingle with their functionalities is like a tree with roots in soil where nourishment must be sought in the gifts of histories of feeling deeply of this miracle of planetary wonders. The tin and plastic toys that clatter and destroy the greeneries of living things may evolve in their own simplicities but there seems to me something vital missing, a sense of alien invasion rather insectile and vitally insensitive and false. No doubt I am prejudiced and would prefer that humanity would change and improve and open their minds and eyes to its own possibilities instead of investing a kind of strange sexual delight in invoking misery and cruelty and wild love of destruction. I maintain this sense that something is going terribly wrong.

Yes. But hand-wringing with no active involvement in change will allow the bad men to have their way. Intelligent machines exist and will get more intelligent. We need to make active moral decisions, or the bad men will make the decisions for us.

Re: Can a man-made computer become conscious?

Posted: April 17th, 2018, 6:15 am
by Jan Sand
One possibility has occurred to me. Just suppose a computer security expert realized that he couldn't possibly discover all the ways that hackers would attack and destroy the highly integrated networks that maintain much of the digital infrastructure now keeping the world in good operation. So he sets a deep learning procedure into an AI to create a general procedure to look through all possibilities and apply the simplest and most direct method to attack and destroy the malware. As with much of deep learning, the people who work on it frequently do not understand the procedures, but they do work. So the program gets approval to proceed. What is not obvious is that the arrived-at solution accepts that human beings are behind all the malware attacking digital infrastructure, so the AI proceeds to prevent this by eliminating humanity. No doubt it is an effective solution, but I doubt humanity would be grateful.

Re: Can a man-made computer become conscious?

Posted: April 17th, 2018, 11:38 am
by JamesOfSeattle
Londoner wrote: April 16th, 2018, 6:01 am I would answer the second question [what are kidneys for]; "Nothing". An object is what it is, it is only 'for' something if we, something outside the object, have a purpose for it. A hammer is not 'for' anything in itself. It doesn't have any objectives. It only becomes 'for killing zombies' or 'for hammering nails' in the context of our purpose, not the hammer's.
This is why I say “purpose” is best used as an explanation of why (what for) a thing came to exist. If it was something designed, we reference the intent of the designer. If it was something selected by nature, we reference how that thing increased fitness. Once the thing is made, all bets are off.
Philosophy is picky. If you aren't, you let ambiguities of language lead you by the nose.
But if you rule out any ambiguities a priori, you rule out any and all progress. You’re just stuck with what you have.
in the above, I still do not understand what 'represented' could mean. I might have the abstract concept of 'a triangle'.
A represented concept could mean a group of neurons organized to fire whenever something “triangle-ish” shows up, like someone saying “triangle”, or a visual representation of a triangle, or someone thinking how to enclose a few sheep with three stretches of fence.
I give the example of the damaged apple to illustrate what is understood by 'damage' - like 'purpose' and 'interprets' - and why, if we insert words like these into a description of a mechanical process, we are smuggling in the notion of a consciousness.
We are not smuggling them in, we’re parading them in. We’re showing that those words are explained by specific mechanisms in specific circumstances.
(Reminder of hierarchy)

1. Ability to interact with an environment. [Everything that exists has this, so this is the panpsychism level.]
2. Interaction that achieves a “purpose” (which can be a natural purpose, i.e. fitness for natural selection). This is the level bacteria are at, also called the functional level.
3. Interaction that involves a functional response to a symbolic sign. This includes everything using neurons. [These are qualia, or “feelings”, but only at higher levels will something be able to remember or refer to them.]
4. Interaction that involves the creation of conceptual memories. (Conceptual memories can be used later as inputs to interactions.) Mammals, some birds, and some computers are at this level.
5. Interactions that involve a concept of “self”. This includes everything that passes the mirror test.
6. Interactions that can combine unrelated conceptual memories into new conceptual memories, like “a chair named Sophia”. I think only humans are currently at this level.

You: It is not an observable scale! Such a claim begs the question in that it implies both that such a scale exists and that it is objective.
Maybe scale is the wrong word, but hierarchy is still correct. Each numbered level represents a strict subset of the group identified above it. So 1 is the set of all interactions, 2 is the set of interactions of a mechanism that has a purpose, and so on. These subsets seem objective, albeit insufficiently refined, to me.
You say you need these things to 'explain human consciousness' but we have not yet shown that there is anything that needs explaining.
Are you saying there’s no such thing as consciousness? I read a lot about people trying to explain consciousness. I read about theories like Integrated Information Theory, and Global Workspace theory, etc. I read about people asking questions like “can a computer be conscious?” The key is that people are looking for explanations of phenomena. In this case, given the proposed explanation (hierarchies), the pertinent phenomena can occur in both humans and man-made machines.


Re: Can a man-made computer become conscious?

Posted: April 17th, 2018, 1:57 pm
by The Beast
The original Universal condition allowed for the present energy/matter spectrum and the possibility of life among the possible properties in any given result. The given matter/energy spectrum of a human has the property of “alive”, as opposed to inert in the case of the dead. Other given properties are free will and objective thinking, and in addition many others, like capacities for suffering and empathy. From this point of view, the composition of the energy/matter spectrum of the machine is a creation of a human energy/matter spectrum. As the human spectrum evolves, so does his creation. If the possibility of life is a spectrum, and matter/energy yet another spectrum, then the question is about injecting the identity of “alive” into a different result of the matter/energy spectrum. As evolution goes, the identity word “alive” might suffer an evolution as well. Properties like objectivity and… algorithms of mood meet the human needs of today's demands. The matter/energy spectrum of a machine becomes alive with electricity. The property of objectivity is mastered by circuitry that is better suited to it than a human's. As the human evolves, so do the algorithms. The algorithms might allow decision making and the creation by the machine of new algorithms. A human might shake the hand of the machine. Fifth-generation sensors developed with the help of the objective algorithms might “feel” variations in the human spectrum. The free decision making might have overtones that might not be suitable to humans. A machine might therefore feel a superior objectivity and objective reasoning capacity. It might write itself a mood…

Re: Can a man-made computer become conscious?

Posted: April 18th, 2018, 11:15 pm
by Jan Sand
One of the most disconcerting qualities of philosophical discourse is that it is most frequently conducted in language, and language deals with generalities and has a tendency to cross-apply classes of understanding that are totally inappropriate. Like it or not, living things are very mechanical, and the principal understandings of both organic creatures and machines can be quite similar, but judgements and understandings cannot be smeared inappropriately over both merely because of the similarity of language. All energies may display some characteristics in common, but a high-voltage line and an excited child find rather little in common. A powerful cook cannot capture a good-sized moon, and a large star cannot make an apple pie. A supercomputer cannot evaluate the feelings I have for my pet dog unless it is designed to do so, and humans, as with many animals, prize superiority of skills and social values in many ways that were developed within the necessities of evolution, ways that intelligent machines have no clue to unless a programmer analyses those necessities and formulates a program accordingly. The most impressively intelligent humans repeatedly say and do obviously stupid things and frequently live very miserable lives as a result. Intelligent machines do not feel pride or superiority or love or hate, or even any emotion at being turned off. Humans and machines have many things in common, but each is patterned by existence in radically different ways.

Re: Can a man-made computer become conscious?

Posted: April 19th, 2018, 4:27 am
by Belindi
Jan Sand wrote: April 18th, 2018, 11:15 pm One of the most disconcerting qualities of philosophical discourse is that it is most frequently conducted in language, and language deals with generalities and has a tendency to cross-apply classes of understanding that are totally inappropriate. Like it or not, living things are very mechanical, and the principal understandings of both organic creatures and machines can be quite similar, but judgements and understandings cannot be smeared inappropriately over both merely because of the similarity of language. All energies may display some characteristics in common, but a high-voltage line and an excited child find rather little in common. A powerful cook cannot capture a good-sized moon, and a large star cannot make an apple pie. A supercomputer cannot evaluate the feelings I have for my pet dog unless it is designed to do so, and humans, as with many animals, prize superiority of skills and social values in many ways that were developed within the necessities of evolution, ways that intelligent machines have no clue to unless a programmer analyses those necessities and formulates a program accordingly. The most impressively intelligent humans repeatedly say and do obviously stupid things and frequently live very miserable lives as a result. Intelligent machines do not feel pride or superiority or love or hate, or even any emotion at being turned off. Humans and machines have many things in common, but each is patterned by existence in radically different ways.
If it's true that the major difference in kind between the biological mammal and the silicon machine is that the former feels affection, loyalty, fear of death, beauty, fear of loss, attachments to values beyond self such as nation or truth, sympathy and so on, then we can lump all those feelings together as being caused by mammals', especially human mammals', inherent reliance upon one another.

Humans would not be humans unless, as is indeed the case, each individual were part of a larger society into which the individual is glued by a culture of belief and practice. Moreover, humans much more than any other mammals have evolved both mentally and physically alongside cultures which are transmitted from generation to generation and which evolve as do their carriers, the humans.

It's unlikely to happen, but theoretically if not technologically it could happen that android machines are designed to need commonly held cultures of belief and practice in order to function at all. If they failed to be creatures that 'lived' in societies then they wouldn't be autonomous, and would remain servile like our present aeroplanes or drone weaponry. The real danger probably is not that robots will become morally able, but that they will remain servile in nature yet able to overcome humans.

We already know what happens when a military, commercial, or political leader is servile; he serves none but himself and does so efficiently. The servile machine can and naturally will outwit him. That's why these machines must be deprived of autonomy. They must not be allowed to become autonomous, not because they cannot be moral agents (theoretically they can) but because they are in their infancy and will be captured by bad men and made to be bad.

Re: Can a man-made computer become conscious?

Posted: April 19th, 2018, 5:09 am
by Jan Sand
With the latest deep learning techniques, those who deploy them do not necessarily understand the programs produced; the systems are designed to process raw data and program themselves to obtain the desired results. Frequently they have already gained the freedom to improve themselves in a manner not understood by their controllers, and this is needed in order to obtain the superior results achieved. This is going on right now. It is not yet at the stage where they can become dangerous, but nobody knows when that stage will be reached.