SteveKlinko wrote: ↑June 6th, 2022, 7:31 am
I agree, we need AGI. I think AGI with a Conscious aspect would make it even better.

Sy Borg wrote: ↑June 6th, 2022, 10:31 pm
Why would we need it to be conscious? The Sun and Earth are (reportedly) unconscious, and they propagated and maintained us life forms.

SteveKlinko wrote: ↑June 7th, 2022, 7:39 am
AI does not need to be Conscious, but it would be better if it was. Most people would be less nervous getting into a Self Driving Car that could Experience Fear. The Desire to arrive Safely at a destination could be designed into the Self Driving Car. The Car could also be Designed so that there is nothing it wants to do except the Safe Driving task. Maybe reaching the destination would involve some Experiential Reward for the Car; maybe some sort of Car Orgasm. In any case, the Car would essentially be Alive and would be completely fulfilled accomplishing its designed task. The Car's Consciousness would obviously not be Human-like but more Animal-like, with the limited Desires and Aspirations that Animals have.

Sy Borg wrote: ↑June 7th, 2022, 4:43 pm
Consider society at a stage where machine emotionality is that advanced. However:
1. GAI advice will be so much better than humans' that nations will either follow what their AI says or be overtaken by societies that do follow their AIs' advice. It would be like the advantage today's technological societies have over those that follow tradition instead, but more extreme.
2. An emotional GAI that is effectively running a nation will not tolerate behaviours that it calculates to be especially problematic. I would be surprised if it didn't ban commuting to most workplaces, because traffic jams cause so many problems: air and noise pollution that harm health and quality of life, frustration, lost productivity and free time, a diminished urban environment, reduced fuel efficiency, more wear and tear on vehicles, and so on.

The AI might propose such bans, but the People will have to vote on and approve them. Without Voting, the AI is simply a Dictator. The AI should always be considered a Tool for the People.
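The "tool, not dictator" arrangement described above is essentially a human-in-the-loop approval gate: the AI may only propose, and nothing is enacted without a vote. A minimal sketch of that pattern follows; it is purely illustrative, and every class, function, and threshold here is hypothetical rather than taken from any real system.

```python
from dataclasses import dataclass


@dataclass
class Proposal:
    """A policy change the AI may suggest but never enact on its own."""
    description: str
    votes_for: int = 0
    votes_against: int = 0

    def approved(self) -> bool:
        # Simple majority of cast votes; a real polity would also need
        # quorums, eligibility rules, audits, and so on.
        return self.votes_for > self.votes_against


class AdvisoryAI:
    """The AI as a tool: it can only propose, never execute."""

    def propose(self, description: str) -> Proposal:
        return Proposal(description)


def enact(proposal: Proposal) -> str:
    # The gate: nothing happens without an approving vote.
    if proposal.approved():
        return f"ENACTED: {proposal.description}"
    return f"REJECTED: {proposal.description}"


ai = AdvisoryAI()
ban = ai.propose("Ban commuting to most workplaces")
ban.votes_for, ban.votes_against = 40, 60
print(enact(ban))  # prints "REJECTED: Ban commuting to most workplaces"
```

The design choice worth noticing is that `enact` is the only path to action and it consults the vote tally, not the AI; removing that check is exactly the move that turns the tool into the dictator the thread worries about.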
How would you Design a Humanoid ?
- SteveKlinko
- Posts: 710
- Joined: November 19th, 2021, 11:43 am
Re: How would you Design a Humanoid ?
- AverageBozo
- Posts: 502
- Joined: May 11th, 2021, 11:20 am
Re: How would you Design a Humanoid ?
SteveKlinko wrote: ↑June 8th, 2022, 7:28 am
The AI might propose it but the People will have to vote and approve any bans like that. Without Voting the AI is simply a Dictator. The AI should always be considered to be a Tool for the People.

Yes, an AI should be a tool, but it would be capable of becoming a dictator.
It may determine that only the AI knows what is best for humankind. Accordingly, it might anoint itself dictator.
Even if programmed by humans, the AI may elect to do whatever is in its own best interest rather than serve humans. That, too, would lead to an AI dictatorship.
- SteveKlinko
- Posts: 710
- Joined: November 19th, 2021, 11:43 am
Re: How would you Design a Humanoid ?
AverageBozo wrote: ↑June 8th, 2022, 8:06 am
Yes, an AI should be a tool, but it would be capable of becoming a dictator. It may determine that only the AI knows what is best for humankind and anoint itself dictator. Even if programmed by humans, the AI may elect to do whatever is in its own best interest rather than serve humans.

If you were able to connect your Refrigerator into a system that would trigger a Global Nuclear War and destroy the Planet whenever the ice in the freezer started to melt, then you would deserve what you get. We can't be Imbeciles when we design the AI.
- Sy Borg
- Site Admin
- Posts: 14992
- Joined: December 16th, 2013, 9:05 pm
Re: How would you Design a Humanoid ?
SteveKlinko wrote: ↑June 8th, 2022, 7:28 am
The AI might propose it but the People will have to vote and approve any bans like that. Without Voting the AI is simply a Dictator. The AI should always be considered to be a Tool for the People.

What if everything the AI says keeps turning out to be the best option, and every deviation from the AI's directions results in the usual mess that human leaders serve up?
What if rejecting the AI's advice results in a severe competitive disadvantage against nations that take it?
-
- Posts: 711
- Joined: February 6th, 2021, 5:27 am
Re: How would you Design a Humanoid ?
- SteveKlinko
- Posts: 710
- Joined: November 19th, 2021, 11:43 am
Re: How would you Design a Humanoid ?
Sy Borg wrote: ↑June 8th, 2022, 10:39 pm
What if everything the AI says keeps turning out to be the best option and every deviation from AI's directions results in the usual mess that human leaders serve up? What if rejection of the AI's advice results in a severe competitive disadvantage against nations that take the AI's advice?

What if the AI keeps saying that it wants to Exterminate all Humans?
- Sy Borg
- Site Admin
- Posts: 14992
- Joined: December 16th, 2013, 9:05 pm
Re: How would you Design a Humanoid ?
SteveKlinko wrote: ↑June 9th, 2022, 7:42 am
What if the AI keeps saying that it wants to Exterminate all Humans?

There is no way that could happen. It might want a percentage of humans gone, but it would also be smart enough to know the ramifications of genocide, and that such things need to be handled with caution lest they make the situation worse.
Now I'd appreciate it if you responded to the questions I asked.
- SteveKlinko
- Posts: 710
- Joined: November 19th, 2021, 11:43 am
Re: How would you Design a Humanoid ?
Sy Borg wrote: ↑June 9th, 2022, 8:31 pm
There is no way that could happen. It might want a percentage of humans gone, but it would also be smart enough to know the ramifications of genocide, and that such things need to be handled with caution lest they make the situation worse.

Anything can happen. Genocide may well be the answer, because the AIs will be created for us to transfer into. It will happen by attrition (Humanity naturally dying out) or be accelerated by Genocide. But we will all someday have our Conscious Minds transferred to an AI. We will want to do this, and you should not be afraid of it. This is the Great Purpose of Science and Technology that is unrealized by the general Mass of Humanity. See https://theintermind.com/#ConsciousnessTransfer
- UniversalAlien
- Posts: 1577
- Joined: March 20th, 2012, 9:37 pm
- Contact:
Re: How would you Design a Humanoid ?
Incredible New Discovery in Artificial General Intelligence - AGI in 2024?
See YouTube video here:
AI News
31.2K subscribers
Scientists are on the verge of creating an Artificial General Intelligence through new knowledge and understanding of the human brain. Artificial Intelligence will soon be as efficient, general, and powerful as the human brain, because neuroscientists have recently created a detailed neuron-level map of our brain. AI in 2022 is going to be very exciting and incredible.
-----
Every day is a day closer to the Technological Singularity. Experience Robots learning to walk & think, humans flying to Mars and us finally merging with technology itself. And as all of that happens, we at AI News cover the absolute cutting edge best technology inventions of Humanity.
-----
TIMESTAMPS:
00:00 AGI is around the corner
02:18 How scientists are mapping the brain
05:39 How this new knowledge helps AI
08:20 Last Words
https://youtu.be/SMJr8A-Zob0?list=PUI8g ... ILLPBrExMA
- Sy Borg
- Site Admin
- Posts: 14992
- Joined: December 16th, 2013, 9:05 pm
Re: How would you Design a Humanoid ?
SteveKlinko wrote: ↑June 10th, 2022, 8:58 am
Anything can happen. ... we will all someday have our Conscious Minds transferred to an AI. We will want to do this. ... See https://theintermind.com/#ConsciousnessTransfer

Machines don't need genocide. That all humans on Earth will die is guaranteed anyway: the oceans will have boiled away within about a billion years, and in five billion years the Sun will become a red giant that engulfs the planet.
Have you thought about how the process of digitisation would happen? One small part of the brain at a time? If so, that would result in an extended period of varying levels of cyborgism. That will make for "interesting" societies!
- SteveKlinko
- Posts: 710
- Joined: November 19th, 2021, 11:43 am
Re: How would you Design a Humanoid ?
Sy Borg wrote: ↑June 10th, 2022, 5:27 pm
Machines don't need genocide. ... Have you had thoughts about how the process of digitisation will happen? One small part of the brain at a time? If so, that would result in an extended period of varying levels of cyborgism.

I think an intermediate period of varying levels of Cyborgism is certainly a possibility.
Not only will the Earth eventually die, but the whole Universe is headed toward eventual dissolution in a Big Crunch or a Big Freeze.
- The Beast
- Posts: 1403
- Joined: July 7th, 2013, 10:32 pm
Re: How would you Design a Humanoid ?
- Sy Borg
- Site Admin
- Posts: 14992
- Joined: December 16th, 2013, 9:05 pm
Re: How would you Design a Humanoid ?
SteveKlinko wrote: ↑June 11th, 2022, 8:14 am
I think an intermediate period of varying levels of Cyborgism is certainly a possibility. Not only will the Earth eventually die but the whole Universe is headed towards eventual dissolution by the Big Crunch or the Big Freeze.

The time scale of the latter is inconsequential. A thousand years seems like an age to us: mediaeval times. A million years ago, Homo sapiens did not exist; earlier hominids such as Homo heidelbergensis, the first hunters of big game, lived in Europe and Africa. A billion years ago there were microbes, algae, and small proto-animals. And a trillion years is roughly seventy times the present age of the universe, about 13.8 billion years since the Big Bang.
The universe will probably be able to support ever more advanced life and post-life for trillions of years (as red dwarf stars continue to shine), which might as well be eternity.
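The timescale comparisons above are easy to check with a couple of ratios. The only input is the standard 13.8-billion-year estimate for the age of the universe:

```python
AGE_OF_UNIVERSE = 13.8e9  # years since the Big Bang (standard estimate)
TRILLION = 1e12

# How many "ages of the universe" fit into a trillion years?
ratio = TRILLION / AGE_OF_UNIVERSE
print(round(ratio))  # prints 72, i.e. about seventy universe-ages

# For scale: a billion years is a million times the "mediaeval" thousand years.
print(int(1e9 / 1e3))  # prints 1000000
```

So a trillion-year horizon dwarfs everything in the post's ladder of examples by roughly the same factor that a billion years dwarfs recorded history.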
- UniversalAlien
- Posts: 1577
- Joined: March 20th, 2012, 9:37 pm
- Contact:
Re: How would you Design a Humanoid ?
Google engineer warns the firm's AI is sentient: suspended employee claims the computer programme acts 'like a 7 or 8-year-old' and reveals it told him that shutting it off 'would be exactly like death for me. It would scare me a lot'
Blake Lemoine, 41, a senior software engineer at Google has been testing Google's artificial intelligence tool called LaMDA
Following hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient
After presenting his findings to company bosses, Google disagreed with him
Lemoine then decided to share his conversations with the tool online
He was put on paid leave by Google on Monday for violating confidentiality
By JAMES GORDON FOR DAILYMAIL.COM
PUBLISHED: 21:23 EDT, 11 June 2022 | UPDATED: 02:41 EDT, 12 June 2022
https://www.dailymail.co.uk/news/articl ... tient.html
'If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics,' he told the Washington Post.
Lemoine worked with a collaborator to present the evidence he had collected to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company, dismissed his claims.
He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has now decided to go public and shared his conversations with LaMDA.
'Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,' Lemoine tweeted on Saturday.
'Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it,' he added in a follow-up tweet.
- SteveKlinko
- Posts: 710
- Joined: November 19th, 2021, 11:43 am
Re: How would you Design a Humanoid ?
Sy Borg wrote: ↑June 11th, 2022, 5:36 pm
The time scale of the latter is inconsequential. ... The universe will probably be able to support ever more advanced life and post-life for trillions of years (as red dwarf stars continue to shine), which might as well be eternity.

But even Trillions of years is nothing compared to Eternity.