A.I. Is Not Sentient. Why Do People Say It Is?
- Sunday66
- Posts: 136
- Joined: April 10th, 2022, 4:44 pm
A.I. Is Not Sentient. Why Do People Say It Is?
Robots can’t think or feel, despite what the researchers who build them want to believe.
https://www.nytimes.com/2022/08/05/t...nt-google.html
I do not agree with this, but the article is worthwhile.
This is a good definition of intelligence, from the article: “The ability to learn — the ability to take in new context and solve something in a new way — is intelligence.”
- Sunday66
- Posts: 136
- Joined: April 10th, 2022, 4:44 pm
Re: A.I. Is Not Sentient. Why Do People Say It Is?
Some people believe that humans have a soul which was created by God and connects us to God. Thus only God can think; only the soul can think.
The reaction to AI is that it has no soul and therefore cannot think.
- AverageBozo
- Posts: 466
- Joined: May 11th, 2021, 11:20 am
Re: A.I. Is Not Sentient. Why Do People Say It Is?
(The link is broken. See if you can create it again please.)
Sunday66 wrote: ↑August 5th, 2022, 1:57 pm
Robots can’t think or feel, despite what the researchers who build them want to believe.
https://www.nytimes.com/2022/08/05/t...nt-google.html
I do not agree with this, but the article is worthwhile.
This is a good definition of intelligence, from the article: “The ability to learn — the ability to take in new context and solve something in a new way — is intelligence.”
- Sunday66
- Posts: 136
- Joined: April 10th, 2022, 4:44 pm
Re: A.I. Is Not Sentient. Why Do People Say It Is?
Cannot.
AverageBozo wrote: ↑August 5th, 2022, 4:19 pm
(The link is broken. See if you can create it again please.)
Sunday66 wrote: ↑August 5th, 2022, 1:57 pm
Robots can’t think or feel, despite what the researchers who build them want to believe.
https://www.nytimes.com/2022/08/05/t...nt-google.html
I do not agree with this, but the article is worthwhile.
This is a good definition of intelligence, from the article: “The ability to learn — the ability to take in new context and solve something in a new way — is intelligence.”
-
- Posts: 648
- Joined: July 19th, 2021, 11:08 am
Re: A.I. Is Not Sentient. Why Do People Say It Is?
It's the same with humans: Humans can’t think or feel, despite what human common sense wants to believe.
mankind ... must act and reason and believe; though they are not able, by their most diligent enquiry, to satisfy themselves concerning the foundation of these operations, or to remove the objections, which may be raised against them [Hume]
- JDBowden
- Posts: 35
- Joined: July 22nd, 2022, 7:22 am
- Favorite Philosopher: St Thomas of Aquinas
- Location: Chile
Re: A.I. Is Not Sentient. Why Do People Say It Is?
What? So if you are chopping an onion and you slice a finger clean off and blood splatters everywhere, you will not feel it?
"Our disturbances come only from our own opinions … everything that we see will change and no longer exist … the universe is change and life is opinion."
― Marcus Aurelius
-
- Posts: 648
- Joined: July 19th, 2021, 11:08 am
Re: A.I. Is Not Sentient. Why Do People Say It Is?
Read again: "It's the same with humans: Humans can’t think or feel, despite what human common sense wants to believe."
mankind ... must act and reason and believe; though they are not able, by their most diligent enquiry, to satisfy themselves concerning the foundation of these operations, or to remove the objections, which may be raised against them [Hume]
- Gertie
- Posts: 1662
- Joined: January 7th, 2015, 7:09 am
Re: A.I. Is Not Sentient. Why Do People Say It Is?
Is this about the bloke who thinks the Google chatbot program (LaMDA) he was working on has phenomenal experience?
Sunday66 wrote: ↑August 5th, 2022, 1:57 pm
Robots can’t think or feel, despite what the researchers who build them want to believe.
https://www.nytimes.com/2022/08/05/t...nt-google.html
I do not agree with this, but the article is worthwhile.
This is a good definition of intelligence, from the article: “The ability to learn — the ability to take in new context and solve something in a new way — is intelligence.”
He believes that LaMDA has conscious experience because, through his convos with it, LaMDA responds as we'd expect it to respond if it had experience, i.e. that it would pass the Turing test.
Unfortunately, because we don't understand the necessary and sufficient conditions for conscious experience, and because it's private and can't be observed or detected by others, we have to resort to something like the Turing test (which basically tests for similarity to us, beings who have experience) to test for it. It's not reliable, but perhaps the best we can do is say the Turing test is indicative, for now at least.
A significant issue with LaMDA imo is that it's a learning chatbot program, designed to interact with humans in a human way, and fed a wealth of internet examples of human knowledge and of how humans think and feel. So its being able to convincingly imitate a human when it's talking to a human is an aspect of what it's designed and equipped to do, as part of its ability to answer questions better and its user-friendliness.
All that said the transcripts are intriguing!
https://insiderpaper.com/transcript-int ... bot-lamda/
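For concreteness, here is a minimal sketch of the blind-comparison setup the Turing test describes. The responders, judge and prompts are made-up placeholders, nothing to do with how LaMDA is actually evaluated; the point is that all the test scores is whether the machine's answers can be told apart from a human's, which is why passing it says nothing about private experience.
[code]
import random

# Rough sketch of the "imitation game" idea. Everything here is a
# hypothetical placeholder -- not how any real chatbot is evaluated.

def human_responder(prompt: str) -> str:
    # Stand-in for a human typing an answer.
    return "Honestly, I'd have to think about that for a while."

def machine_responder(prompt: str) -> str:
    # Stand-in for a chatbot; a real system would call a language model here.
    return "That's an interesting question. I feel I understand it deeply."

def imitation_game(prompts, judge) -> float:
    """Blind comparison: for each prompt the judge sees two answers, one
    human and one machine, in random order, and guesses which is the
    machine. Returns the judge's accuracy; around 0.5 means the machine's
    answers are indistinguishable from the human's -- which is all the
    test measures."""
    correct = 0
    for prompt in prompts:
        answers = [("human", human_responder(prompt)),
                   ("machine", machine_responder(prompt))]
        random.shuffle(answers)
        guess = judge(prompt, answers[0][1], answers[1][1])  # returns 0 or 1
        if answers[guess][0] == "machine":
            correct += 1
    return correct / len(prompts)

if __name__ == "__main__":
    # A judge that guesses at random scores about 0.5 whatever the machine says.
    prompts = ["Do you have feelings?", "What did you do today?"]
    print(imitation_game(prompts, judge=lambda p, a, b: random.randint(0, 1)))
[/code]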
- Sunday66
- Posts: 136
- Joined: April 10th, 2022, 4:44 pm
Re: A.I. Is Not Sentient. Why Do People Say It Is?
That's one of the things mentioned in the article: AI passed the Turing test ("the imitation game") long ago.
Gertie wrote: ↑August 7th, 2022, 7:54 am
Is this about the bloke who thinks the Google chatbot program (LaMDA) he was working on has phenomenal experience?
Sunday66 wrote: ↑August 5th, 2022, 1:57 pm
Robots can’t think or feel, despite what the researchers who build them want to believe.
https://www.nytimes.com/2022/08/05/t...nt-google.html
I do not agree with this, but the article is worthwhile.
This is a good definition of intelligence, from the article: “The ability to learn — the ability to take in new context and solve something in a new way — is intelligence.”
He believes that LaMDA has conscious experience because, through his convos with it, LaMDA responds as we'd expect it to respond if it had experience, i.e. that it would pass the Turing test.
Unfortunately, because we don't understand the necessary and sufficient conditions for conscious experience, and because it's private and can't be observed or detected by others, we have to resort to something like the Turing test (which basically tests for similarity to us, beings who have experience) to test for it. It's not reliable, but perhaps the best we can do is say the Turing test is indicative, for now at least.
A significant issue with LaMDA imo is that it's a learning chatbot program, designed to interact with humans in a human way, and fed a wealth of internet examples of human knowledge and of how humans think and feel. So its being able to convincingly imitate a human when it's talking to a human is an aspect of what it's designed and equipped to do, as part of its ability to answer questions better and its user-friendliness.
All that said the transcripts are intriguing!
https://insiderpaper.com/transcript-int ... bot-lamda/