Is the problem whether the intelligent machine and the man differ in kind or in degree? Or in numbers of units of personhood?

Jan Sand wrote: ↑April 16th, 2018, 1:47 am
There is an interesting item at https://hardware.slashdot.org/story/18/ ... personhood wherein legal arguments are considering robots as persons under the law, liable to be sued for damages. Since a robot at the moment has no personal income, nor can it be made to suffer through imprisonment, I can only place this under the general category of human insanity. It is like suing a gun for killing somebody, or a bridge because it has collapsed. No doubt it would be a field day for lawyers, but even if a robot gains some kind of consciousness it remains a most peculiar concept.
Can a man-made computer become conscious?
-
- Moderator
- Posts: 6105
- Joined: September 11th, 2016, 2:11 pm
Re: Can a man-made computer become conscious?
Re: Can a man-made computer become conscious?
I agree. If we are both right, then there is some degree of intelligence at which we must arbitrate whether the individual is or is not a person. What are the criteria for establishing that point?

Jan Sand wrote: ↑April 16th, 2018, 8:47 am
Intelligence itself is a very tricky concept. There is an interesting article at https://tech.slashdot.org/story/18/04/1 ... innovation which indicates that innovation can be more productive because AI offers a huge increase in the fields of observation, at a rate that almost no human analyst can match, and I suspect that one of the prime elements in intellect is the breadth of pattern similarities covered in any search for originality in innovation. Human minds may have some similarities to digital operations, but the organic mind seems to me quite different from the way a digital machine works, both in degree and in what might be understood as personhood. At some point in almost anything, quantity differences can become quality differences, and social interactions cannot be simplified into comparisons of quantity. Business in general frequently uses people as if they were machines, and although work quality and speed can be comparable, a human has needs and responsibilities that a machine not only does not but cannot have.
Talk about qualia seems to get nowhere. I believe that tests for personhood should involve both the subjective feelings of the individual concerned and objective criteria about the individual's central nervous system, or its analogue in a silicon machine. Please note that I used "and", not "or".
Both symptoms and signs are used for medical diagnoses, and that has been a successful method for diagnosing diseases and lesions.
Re: Can a man-made computer become conscious?
Jan Sand wrote: ↑April 16th, 2018, 12:27 pm
Being a person involves important factors independent of intelligence, as the recent US presidential election clearly indicated. It is deeply involved with civic responsibilities, and if a machine is given personhood, should it not also be granted citizenship? Could robot soldiers rise in rank to become officers commanding human troops? Things get pretty weird.

Weird, yes. However, the word 'robots' would either become a racist slur, or 'robots' would connote a status equal to that of biological persons. After all, there are biological persons who should be deprived of civil liberties; there are some horrible high-status biological persons. I'd rather have a moral and learning-capable robot than Bashar al-Assad, or some Mafia gangmaster.
- Sy Borg
- Site Admin
- Posts: 14995
- Joined: December 16th, 2013, 9:05 pm
Re: Can a man-made computer become conscious?
At least we acknowledge the prisoners' sentience, although I take your point that the way we assess webs of causation tends to be superficial, owing to complexity that we cannot handle. If AI can work through that complexity, it could revolutionise the criminal justice system.

Jan Sand wrote: ↑April 16th, 2018, 5:17 am
The interesting aspect of this problem is to examine the whole dynamic of the entire criminal justice system from ancient times into current practice, and so to understand the responsibilities of criminal behavior. If a man steals money or goods because society has not given him the possibility to feed or clothe or provide vital medical care for himself or his family, who is responsible for the crime? If a person has become so warped in mind because of ill treatment as a child, or because of an education that gave him or her a gross misunderstanding of personal rights and responsibilities, who or what should be held liable for criminal reprisal? This is a fundamental problem for society to solve, and obviously, since proper justice throughout all societies is most often difficult or impossible to obtain, where is responsibility to be placed? Established society has never really faced or solved this extensive problem.

Greta wrote: ↑April 16th, 2018, 3:49 am
I'm inclined to agree, Jan. Like the "female" AI being given greater rights in Saudi Arabia than women.
There may come a time when a learning machine experiences meaningful qualia, but how would we know? After all, we spent centuries wreaking all manner of havoc on other species based on the false belief that they were neither conscious nor capable of truly experiencing pain. Hopefully some AI equivalent to neuroscience will help.
There might be a touch of self preservation involved too. If they become sentient, one would hope that we have not treated them badly!
Re: Can a man-made computer become conscious?
Jan Sand wrote: ↑April 16th, 2018, 9:25 pm That living things contain intensities of love and hate to mingle with their functionalities is like a tree with roots in soil where nourishment must be sought in the gifts of histories of feeling deeply of this miracle of planetary wonders. The tin and plastic toys that clatter and destroy the greeneries of living things may evolve in their own simplicities but there seems to me something vital missing, a sense of alien invasion rather insectile and vitally insensitive and false. No doubt I am prejudiced and would prefer that humanity would change and improve and open their minds and eyes to its own possibilities instead of investing a kind of strange sexual delight in invoking misery and cruelty and wild love of destruction. I maintain this sense that something is going terribly wrong.
Yes. But hand-wringing without active involvement in change will allow the bad men to have their way. Intelligent machines exist and will get more intelligent. We need to make active moral decisions, or the bad men will make the decisions for us.
- JamesOfSeattle
- Premium Member
- Posts: 509
- Joined: October 16th, 2015, 11:20 pm
Re: Can a man-made computer become conscious?
This is why I say “purpose” is best used as an explanation of why (what for) a thing came to exist. If it was something designed, we reference the intent of the designer. If it was something selected by nature, we reference how that thing increased fitness. Once the thing is made, all bets are off.

Londoner wrote: ↑April 16th, 2018, 6:01 am
I would answer the second question [what are kidneys for]: "Nothing". An object is what it is; it is only 'for' something if we, something outside the object, have a purpose for it. A hammer is not 'for' anything in itself. It doesn't have any objectives. It only becomes 'for killing zombies' or 'for hammering nails' in the context of our purpose, not the hammer's.
But if you rule out any ambiguities a priori, you rule out any and all progress. You're just stuck with what you have.

You: Philosophy is picky. If you aren't, you let ambiguities of language lead you by the nose.
A represented concept could mean a group of neurons organized to fire whenever something “triangle-ish” shows up: someone saying “triangle”, or a visual representation of a triangle, or someone thinking how to enclose a few sheep with three stretches of fence.

You: In the above, I still do not understand what 'represented' could mean. I might have the abstract concept of 'a triangle'.
We are not smuggling them in, we're parading them in. We're showing that those words are explained by specific mechanisms in specific circumstances.

You: I give the example of the damaged apple to illustrate what is understood by 'damage' - like 'purpose' and 'interprets' - and why, if we insert words like these into a description of a mechanical process, we are smuggling in the notion of a consciousness.
Maybe scale is the wrong word, but hierarchy is still correct. Each numbered level represents a strict subset of the group identified above it. So 1 is the set of all interactions, 2 is the set of interactions of a mechanism that has a purpose, and so on. These subsets seem objective, albeit insufficiently refined, to me.

(Reminder of hierarchy)
1. Ability to interact with an environment. [Everything that exists has this, so this is the panpsychism level]
2. Interaction that achieves a “purpose” (which can be a natural purpose, i.e. fitness for natural selection). This is the level that bacteria are at, also called the functional level.
3. Interaction that involves a functional response to a symbolic sign. This includes everything using neurons. [These are qualia, or “feelings”, but only at higher levels will something be able to remember or refer to them.]
4. Interaction that involves the creation of conceptual memories. (Conceptual memories can be used later as inputs of interactions.) Mammals and some birds and some computers are at this level.
5. Interactions that involve a concept of “self”. This includes everything that passes the mirror test.
6. Interactions that can combine unrelated conceptual memories into new conceptual memories, like “a chair named Sophia”. I think only humans are currently at this level.
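The strict-subset structure of this hierarchy can be made concrete with a toy sketch (a hypothetical illustration of the subset idea only, not code proposed anywhere in the thread; the trait names are invented for the example). Because each level is a strict subset of the one above it in the list, an entity's level is simply the highest-numbered defining trait it exhibits:

```python
from enum import IntEnum

class Level(IntEnum):
    INTERACTS = 1      # everything that exists (the "panpsychism level")
    PURPOSIVE = 2      # bacteria: interaction serving a natural purpose
    SYMBOLIC = 3       # neurons: functional response to a symbolic sign
    CONCEPTUAL = 4     # mammals, some birds, some computers
    SELF_CONCEPT = 5   # whatever passes the mirror test
    COMBINATORIAL = 6  # combines unrelated concepts ("a chair named Sophia")

def level_of(traits: set[str]) -> Level:
    """Return the highest level whose defining trait is present.

    Since level n is a strict subset of level n-1, a trait at
    level n implies membership in every level below it, so we
    check from the top down and return the first match."""
    ordered = [
        (Level.COMBINATORIAL, "combines_concepts"),
        (Level.SELF_CONCEPT, "self_concept"),
        (Level.CONCEPTUAL, "conceptual_memory"),
        (Level.SYMBOLIC, "symbolic_response"),
        (Level.PURPOSIVE, "purpose"),
    ]
    for level, trait in ordered:
        if trait in traits:
            return level
    return Level.INTERACTS  # level 1 needs no trait: everything interacts

# A bacterium has a natural purpose; a crow also forms conceptual memories.
bacterium = level_of({"purpose"})
crow = level_of({"purpose", "symbolic_response", "conceptual_memory"})
```

The top-down check mirrors the claim that the levels nest: nothing can sit at level 4 without also satisfying levels 1 through 3, so only the highest trait matters for classification.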
You: It is not an observable scale! Such a claim begs the question in that it implies both that such a scale exists and that it is objective.
Are you saying there's no such thing as consciousness? I read a lot about people trying to explain consciousness. I read about theories like Integrated Information Theory, Global Workspace Theory, etc. I read about people asking questions like “can a computer be conscious?” The key is that people are looking for explanations of phenomena. In this case, given the proposed explanation (the hierarchy), the pertinent phenomena can occur in both humans and man-made machines.

You: You say you need these things to 'explain human consciousness', but we have not yet shown that there is anything that needs explaining.
- The Beast
- Posts: 1403
- Joined: July 7th, 2013, 10:32 pm
Re: Can a man-made computer become conscious?
Re: Can a man-made computer become conscious?
If it's true that the major difference in kind between the biological mammal and the silicon machine is that the former feels affection, loyalty, fear of death, beauty, fear of loss, attachments to values beyond self such as nation or truth, sympathy and so on, then we can lump all those feelings together as being caused by mammals', especially human mammals', inherent reliance upon one another.

Jan Sand wrote: ↑April 18th, 2018, 11:15 pm
One of the most disconcerting qualities of philosophical discourse is that it is most frequently involved with language, and language deals with generalities and has a tendency to cross-apply classes of understandings that are totally inappropriate. Like it or not, living things are very mechanical, and the principal understandings of both organic creatures and machines can be quite similar, but judgements and understandings cannot be inappropriately smeared over both because of the similarity of language. All energies may display some characteristics in common, but a high-voltage line and an excited child find rather little in common. A powerful cook cannot capture a good-sized moon, and a large star cannot make an apple pie. Supercomputers cannot evaluate the feelings I have for my pet dog unless they are designed to do so, and humans, as with many animals, prize superiority of skills and social values in many ways that were developed within the necessities of evolution, ways that intelligent machines have no clue to unless a programmer analyses those necessities and formulates a program accordingly. The most impressively intelligent humans repeatedly say and do obviously stupid things and frequently live very miserable lives as a result. Intelligent machines do not feel pride or superiority or love or hate, or even any emotion at being turned off. Humans and machines have many things in common, but each is patterned by existence in radically different ways.
Humans would not be humans unless, as is indeed the case, each individual were part of a larger society into which the individual is glued by a culture of belief and practice. Moreover, humans, much more than any other mammals, have evolved both mentally and physically alongside cultures which are transmitted from generation to generation and which evolve as do their carriers, the humans.
It's unlikely to happen, but theoretically if not technologically it could happen that android machines are designed to need commonly held cultures of belief and practice in order to function at all. If they failed to be creatures that 'lived' in societies, then they wouldn't be autonomous and would remain servile, like our present aeroplanes or drone weaponry. The real danger is probably not that robots will become morally able, but that they will remain servile in nature yet able to overcome humans.
We already know what happens when a military, commercial, or political leader is servile; he serves none but himself, and does so efficiently. The servile machine can and naturally will outwit him. That's why these machines must be deprived of autonomy. They must not be allowed to become autonomous, not because they cannot be moral agents (theoretically they can), but because they are in their infancy and will be captured by bad men and made to be bad.