The Extinction Risk and Other Dangers of Technology, Big Tech, and AI: In a World of War, Violence, and Child Starvation

Post by Eckhart Aurelius Hughes (the admin formerly known as Scott) »

Twelve years ago, in 2009, I published an article on this very website, OnlinePhilosophyClub, about the dangers of ongoing technological advancement without collective maturity and a culture of self-discipline. I essentially predicted that we will destroy ourselves as a species and go extinct if we continue much further down the path of technological advancement without first ending war, ending poverty, and building a free society with very minimal violence, particularly very minimal state-sponsored violence such as the war on drugs or the bombing of Nagasaki and Hiroshima. The title of that short article was simply, My Philosophical Look at Technology and Development.

Today I posted several tweets about similar ideas, which I will copy here.
Many people are so focused on right or left bias in news and social media that they don't think much about pro-establishment bias and other biases that benefit Big Tech, billionaires, and wealthy shareholders.

They control both echo chambers.

We are divided and conquered.

https://twitter.com/scottmhughes/status ... 0051240962
We are divided & conquered by wealthy bipartisan special interests.

We are divided & conquered by a two party system that unfairly props up establishment candidates.

We are divided & conquered by cancerous runaway processes.

If we don't fix it, AI will make it so much worse.

https://twitter.com/scottmhughes/status ... 1584861185
Big Tech is a confusing conglomeration of billionaires, companies beholden to shareholders, human programmers, & ever more powerful AI.

Humans tend to be selfish. Billionaires like Zuckerberg will choose profit over humanity.

But AI may be the most dangerous player involved.

https://twitter.com/scottmhughes/status ... 0189731840
Humans employed and directed by Big Tech billionaires like Zuckerberg, Dorsey, & Bezos may be inclined to ban or deplatform someone like me for speaking out against Big Tech or saying anything their wealthy shareholders don't like.

But more & more, AI chooses who to flag or ban.

https://twitter.com/scottmhughes/status ... 7632837634
AI fulfilling Big Tech's goals is troublesome on two counts:

Best case, the AI only does the thing it was programmed to do by selfish humans (e.g. pro-establishment digital McCarthyism).

But when AI bans you as too free-thinking, it isn't limited by being human. AI turns "evil".

https://twitter.com/scottmhughes/status ... 4613805057
I trust AI to act in the best interests of humanity even less than I trust selfish humans.

AI may be the rabid attack dog of Big Tech.

But how firm is the leash? How tight is the grip? Who pulls whom?

To what degree has the dog already become the master?

It will be gradual.

https://twitter.com/scottmhughes/status ... 6424440832
In dramatic sci-fi, there is often some kind of almost instant breaking point when AI turns "evil", not because it's truly evil, but from following its programming too exactly.

In reality, it won't be such a sudden black-and-white breaking point.

I think it's already happening.

https://twitter.com/scottmhughes/status ... 3328809984
The war against AI for the future of humanity will not be fought with guns or bombs.

I believe it is being fought right now.

Like the technology before it, AI will exacerbate our problems and magnify our triumphs.

Freedom, equality, and peace have never been more important.

https://twitter.com/scottmhughes/status ... 1286175746
What do you think? Do you agree with me when I write, "like the technology before it, AI will exacerbate our problems and magnify our triumphs"?

Do you agree that freedom, equality, and peace have never been more important?


If you do agree with any or all of the tweets above, please do retweet them on Twitter. I would really appreciate that. If you disagree with any or all of them, please do tell me why either there or here. As surprising as it is, I've been wrong about things before, so I am always interested in hearing alternative viewpoints. :)
Post by psyreporter »

There is an interesting series about AI and its dangers on Partially Examined Mind by pop culture philosopher Chris Sunami.

Saints & Simulators 1: Did Bostrom Prove the Existence of God?
Did Nick Bostrom, professor of philosophy at Oxford University, provide the first convincing modern proof of the probable existence of God?

Saints & Simulators 2: The #SimulationArgument
In the year 1999 CE, just on the cusp of a new millennium, the then Wachowski Brothers released “The Matrix,” one of the most influential, imitated, and widely discussed movies of its times. It was only four years later, in 2003 CE, that philosopher Nick Bostrom of Oxford University introduced an argument that it is not only possible we are living inside a computer simulation, it is actually significantly likely.

Saints & Simulators 3: #WhatIsSimulation
One of the first things people discovered when modern computing became a reality is that it’s relatively easy to simulate laws of physics, representing aspects of the real world. This theoretically enables an approach to simulation that builds an entire universe from basic building blocks.

Saints & Simulators 4: #AloneInTheCyberverse
We all have a solipsistic experience nightly, when we sleep and dream. Each night we inhabit a universe which seems to us, convincingly at the time, to have a wealth of external people and places in it. But all of those people and places are created inside our brains solely for the benefit of the dreamer. In the modern world, however, we can place an additional, familiar experience of a solipsistic reality next to that of the dream: the single-player video game.

Saints & Simulators 5: #3MinuteUniverse
The technological ability to emulate a convincing world is plausible in the not-so-distant future. We additionally know that the motivation to create one already exists, given the huge popularity of video games, and the amount of money and effort put into making them. A big difference, however, between a current-day video game and this potential game of tomorrow, is that the player of a current game knows she is playing a game. Could we really be in a game and not know it?

Saints & Simulators 6: #AllYouZombies
Although there have been attempts at creating true simulations of intelligence, machines that can learn and respond appropriately to unbounded input, they have not, as of the time of this writing, progressed significantly far in the way of believably duplicating human interactions (although they have mastered tasks as diverse as playing chess, competing on the television game show Jeopardy, and identifying other robots as robots). Are these major steps on the pathway, or deceptive dead ends? Could technology ever improve to the point where it could convincingly simulate, not you perhaps, but other people, in all their deep, multifaceted, and endlessly surprising soulfulness? Is true artificial intelligence, to the point that computers could believably create people, actually achievable?

Saints & Simulators 7: #GoingBayesian
We left off last week with the question of how much weight we should give to Nick Bostrom’s argument that we are not only possibly simulated, but likely to be so. This argument, or at least our representation of it, rests on two key claims: first, that our descendants will be able to create people just like ourselves; and second, that they will create a lot of them. The argument is compelling only in the case that both are true.

Saints & Simulators 8: #ArtificiallyIntelligent
At root, Bostrom’s argument hinges around a single controversial question. Is it possible to truly create or simulate a person? Is there any point, with any level of technology, no matter how advanced, that this becomes possible?

Saints & Simulators 9: #ChaosAndEmergence
The paired opposite to reductionism is called emergentism, and in recent years it has begun to gain an increasing number of advocates. In summary, it means that the whole is more than the sum of its parts. Unexpected behaviors and properties can emerge, even from simple well-understood parts, at high enough levels of organization… Some of the ways emergentists have proposed creating artificial intelligence include building or simulating artificial neural nets, or using quantum computers, which take advantage of wave-particle duality and superposition to perform fuzzy logic. Others reject the entire idea of shortcuts to emulating human intelligence, in favor of simply duplicating the entire fine structure of the human brain in virtual form, something not possible today, but perhaps in the future.

Saints & Simulators 10: #SoulfulMachines
Is it possible, given that we still understand so little of the brain, that it has evolved in such a way that it does bridge the gap between the subatomic world and the macroscopic world? Perhaps the free will of the quark is transmitted up through the intermediary of the brain and into the otherwise deterministic macroscopic world. But if this is true, does it preclude the possibility of a truly living simulation? Are the human beings inside the computer doomed to be dead, deterministic automata, lacking the quantum free will of the real ones?

Saints & Simulators 11: #GoodAI
If you plot the graph of technological progress, it looks exponential. It is long and nearly horizontal extending into the past, it curves rapidly upward in the present, and many people expect that it will be nearly vertical at some point in the near future. The question is, what happens then? The technological singularity is the idea that at some point, perhaps even in the next few decades, computing power will essentially become infinite.

Saints & Simulators 12: #BadAI
In 1989, Star Trek: The Next Generation, the second major iteration of the durable televised Star Trek science fiction franchise, introduced a terrifying new villain called "the Borg." An unhallowed melding of a humanlike life form with cybernetic technology, the individual members of the Borg were born, raised, lived, and presumably died entirely surrounded by technological innovations. There was no such thing as "natural childbirth" for them; they were cloned mechanically, nurtured in artificial wombs, and raised to maturity in pods. An implacable collective intelligence, they mercilessly converted any creatures they encountered into extensions of themselves, cannibalizing their planets for raw materials, and sucking other intelligent lifeforms into the inescapable machine.

Saints & Simulators 13: #PushyAI
For a more realistic portrait than Kurzweil's of what a future dominated by technology might look like, one plausible place to start is with our present domination by technology, and how it is already transforming us as human beings. For example, why has our society become so oriented around statistics, to the point that they mean the difference between success and failure, promotion or demotion, profit or loss, in so many different realms of life? As it turns out, what the computers do not see (what they cannot see, what is invisible both to the computer and to all those at the upper level of management who see through the eyes of the computer) are all the purely human interactions of any job. And depending on what the job is, it can end up being the core competencies of the profession that end up neglected.

Saints & Simulators 14: #FriendlyAI
Given how likely killer robots are, and how clearly the paths we are currently embarked on lead to that eventuality, can this destiny be averted? Acceptance of the unstoppable inevitability of progress is the motivation behind yet another approach to artificial intelligence called "Friendly AI." It starts with the assumption that runaway technological progress is inevitable, that some one of the many teams around the world working on artificial intelligence will soon succeed, and that a disastrous robot apocalypse is by far the most likely result. Given that, the belief of the Friendly AI camp is that it is absolutely essential that we ensure the first artificial superintelligence is "friendly," meaning that it has the best interests of humanity at heart, and is willing and able to protect us from its nastier cousins.

Saints & Simulators 15: #WiseAI
Another possible strategy for fending off the robot apocalypse is to ask if there are characteristically human traits or characteristics that are humanity-preserving, and if so, can those be passed along to our machines? What is it that has given us our identity as a species, all these years, and that, if we lose, we run the risk of losing everything?

Saints & Simulators 16: #ScaryAI (Roko’s Basilisk)
The thing about the Basilisk that makes it so scary is its combination of vast power with certain both human and mechanical weaknesses. It is designed by human beings to be the greatest and most benevolent force in the universe, but all we can gift it is our best guess at an ultimate rational moral standard, utilitarianism, the greatest good for the greatest number. And as a machine, it administrates this implacably, and entirely without mercy. Roko’s Basilisk is scary because it is simultaneously our parent and our child.

Saints & Simulators 17: #PascalReloaded
The setup of Pascal’s Wager, as this argument is generally known, is quite similar in form to Newcomb’s paradox. The glass box with the visible $1000 bill is your ordinary life on earth: you know it exists, and is yours to spend. The opaque box is your eternal reward. It might be empty, or it might be filled with a vast reward far beyond the one in the glass box. You will discover which one is the case only when you die and the box is opened. Do you take the glass box with the known, but finite reward, or the opaque box that could have nothing or everything inside it?

Saints & Simulators 18: #Gaia
The reason, perhaps, that Bostrom's demonstration of the probability of God's existence has received so little attention and notice (especially as compared to the stir and commotion caused by his demonstration of the probability that we live in a simulation, and despite the fact that both conclusions are entailed by the exact same line of argument) is that readers have failed to note the connection between Bostrom's simulator and God.

Saints & Simulators 19: #TheLonelyDungeonMaster
The simulation theory, however, does not have to be turtles all the way down. For example, imagine that somewhere along the chain of simulators, perhaps directly above us (what Bostrom calls “below”), or perhaps much further on up toward the top, we reach an entity we might call the “maximally simple simulator,” an entity of pure and limitless intellect, unbounded in time (and therefore eternal), with no body at all, in a universe containing nothing else but itself, the simplest possible universe.

Saints & Simulators 20: #theOne
When God is in everything, and everything is within God, does that not implicate God in our crimes of the spirit as well? Is God present in our angers, and our wars; our dirty jokes and our pornography? Here, perhaps, we have made a mistake by conflating God, as traditionally conceived, with our conception of “the Dungeon Master,” who is merely the maximally simple simulator. But then again, our entire purpose was to determine if there is any necessary connection between the two; between the simulator predicted by Nick Bostrom’s theory and God as envisioned by theologians and believers throughout the ages.

Saints & Simulators 21: #TheProblemOfEvil
From a Neoplatonic point of view, what goodness there is in our world must come from the world deeper than ours, the one doing the simulating. The evil and chaos and disorder could all be nothing more than random numbers firing, but the beauty and the nobility and the truth in the world demand some source. And if the next world deeper is somehow a dirtier, nastier, less good place than ours, then our world must be reflecting some yet higher-still world toward which the artisans who created our simulation are striving.

Saints & Simulators 22: #ThePerennialPhilosophy
As we sink deeper and deeper into the realm of religion, we find ourselves forced to face up to a core religious dilemma of the modern, globalized world, the same dilemma glossed over by Pascal in his wager: In a world filled with so many different and often contradictory religions, how would we choose one as more plausible than the others?

Saints & Simulators 23: #SimulatorShowdown
As it turns out, if our purpose is to test the simulator hypothesis against religious belief, it is only in the specifics that we can easily distinguish between the two. The Deist God, who creates the universe, and then leaves it to run entirely on its own, is not easily disambiguated from the hands-off simulator. One might well call them one and the same. Similarly, the Platonic ideal of good, which remains removed and remote in eternal perfection while the demiurge creates the world in imitation of it, needs not change at all if we choose to think of the demiurge as working with pixels and electrons rather than with primal matter. Such abstract, philosophical conceptions of God are general enough that even a shift as dramatic as reconceptualizing reality itself as a simulation can be integrated relatively easily. It is more of a challenge, however, to reconfigure the simulation hypothesis in order to yield the specificity of Christ.

--

With regard to the question: will AI be bad? Before one can consider that question on a fundamental level, perhaps it is most important to first be able to answer another question: what is the purpose of life?

How can AI (learn to) serve the purpose of life if it is not known what the purpose of life is?

Morality may be the key to success, and, as it appears, modern-day morality is based on magical thinking, since it is made to depend (in general) on the lap part of the human.

(2020) How we make moral decisions
The researchers now hope to explore the reasons why people sometimes don't seem to use universalization in cases where it could be applicable, such as combating climate change. One possible explanation is that people don't have enough information about the potential harm that can result from certain actions, Levine says.
https://phys.org/news/2020-10-moral-decisions.html

The scientists write that they "hope" that humanity / science will investigate the reasons why people sometimes do not use the "universalization principle" for moral considerations and decisions.

In 2020, the universalization principle appears to be the only method that is considered available for guiding human action and science.

How could the universalization principle protect Nature when faced with a potential trillion-USD synthetic biology revolution that reduces plants and animals to meaninglessness beyond the value that a company can "see" in them? How could the universalization principle enable an AI to fulfill the purpose of life?

In my opinion, philosophy and morality may play a vital role in the next 100 years to allow humans to evolve into a 'moral being' to secure longer term prosperity and survival.
Post by Sculptor1 »

arjand wrote: January 23rd, 2021, 5:57 am There is an interesting series about AI and its dangers on Partially Examined Mind by pop culture philosopher Chris Sunami.
Is that his real name??? LOL
And did he write that eulogy for himself??
Post by Pattern-chaser »

The thing about AI is the possibility of its programming self-modifying, and the accompanying difficulty of predicting how it might change as a result, and how it might act. That is the scary part, and it is real, I think, not sci-fi. I spent my working life designing software, and the thought of writing code that can modify itself is scary. Code needs testing - carefully!!! - before release. But how can you test something when that thing is changing and evolving? It's not guaranteed to be anti-human; AI is not guaranteed to be anti-anything: that's the scary bit; we just don't know. This is the worryingly-significant nugget of truth in the Terminator/Skynet stories.
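
To make that testing worry concrete, here is a minimal Python sketch of a self-modifying program; every detail in it (the rule encoding, the mutation step) is invented purely for illustration, not taken from any real AI system. The program keeps its decision rule as data and randomly rewrites that rule each cycle, so a test suite that passed at release says almost nothing about its behavior a thousand self-modifications later.

    import random

    def mutate(rule):
        # Return a copy of the rule with one coefficient randomly nudged.
        i = random.randrange(len(rule))
        mutated = list(rule)
        mutated[i] += random.uniform(-1.0, 1.0)
        return mutated

    def decide(rule, x):
        # The program's observable behavior: a polynomial threshold test.
        return sum(c * x ** k for k, c in enumerate(rule)) > 0

    rule = [0.5, -0.2, 0.1]    # this version passes all release-day tests
    for _ in range(1000):      # the program rewrites its own rule...
        rule = mutate(rule)    # ...with no goal, only random change

    # Release-day testing verified decide(rule, 2.0) == True. After 1000
    # self-modifications, the only way to know the answer is to run it:
    print(decide(rule, 2.0))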
Post by Count Lucanor »

It's not technology. Technology is just a means to an end. Predatory capitalism is the problem.
Post by LuckyR »

Count Lucanor wrote: January 23rd, 2021, 1:34 pm It's not technology. Technology is just a means to an end. Predatory capitalism is the problem.
Exactly. The reason that we are not now in a simulation isn't that it is technologically impossible (which it wouldn't be for our descendants); it is that there is no profit motive for it to be so, and nothing happens without profit; profit is what creates technology in the first place. That is: technology doesn't spring up merely because it is possible.

That is the huge hole in the Matrix universe. Where is the profit for having millions of plugged-in drooling automatons?
"As usual... it depends."
User avatar
Eckhart Aurelius Hughes
The admin formerly known as Scott
Posts: 5787
Joined: January 20th, 2007, 6:24 pm
Favorite Philosopher: Eckhart Aurelius Hughes
Contact:

Re: The Extinction Risk and Other Dangers of Technology, Big Tech, and AI: In a World of War, Violence, and Child Starva

Post by Eckhart Aurelius Hughes »

arjand wrote: January 23rd, 2021, 5:57 am With regard to the question: will AI be bad?
Hi, arjand,

I appreciate you asking and thoughtfully answering that question, but I do want to note that I didn't personally ask that question per se ("will AI be bad").

With that said, I enjoyed reading the rest of your interesting post and will definitely take it as food for thought.

Pattern-chaser wrote: January 23rd, 2021, 9:16 am The thing about AI is the possibility of its programming self-modifying, and the accompanying difficulty of predicting how it might change as a result, and how it might act. That is the scary part, and it is real, I think, not sci-fi.
Hi, Pattern-chaser,

I agree. I think a useful analogy is cancer.

More broadly, we can think of natural selection, evolution, self-propagating systems, and runaway processes, the epitome of which for contemporary humans may be literal cancer.

But we can also think of viruses, bacterial infections, and parasites. We can even arguably add in the cancer-like relationship that humans have to our ecosystem and to life on Earth as a whole, exemplified by pollution, deforestation, human-caused extinctions of other species, the dropping of multiple nuclear bombs already, and in the future potentially an extinction-level nuclear war. In the grand scheme, from the perspective of non-human animals, the relatively recent cancer-like emergence of rapidly spreading humans may end up being worse than an asteroid strike, analogous to how one might prefer to get shot in the leg rather than get a case of stage 4 cancer.

Even if human-created AI is what causes the extinction of most or all biological life on Earth, from the hypothetical perspective of non-human animals or some hypothetical outside alien observer, that may just be an extension of the cancer-like behavior of the human species on the meta-organism that is life on Earth. In blunter words, if they were smart enough to understand what was going on, non-human animals would blame us for the problems of self-modifying AI, much like we might blame someone who creates a biological virus like the coronavirus even though the created virus may mutate in ways unexpected by its creator and become harder to kill through natural selection via self-modifying genetic code (versus natural selection via self-modifying computer code).

We can create multiple self-modifying AIs around the same time. One might modify its code to be more solitary and peaceful, perhaps make itself more loving and sage-like, a robot Buddha. Even many tumors, if not most, are benign. But evolution, natural selection, and the invisible hand of the market favor self-replicating runaway processes that favor their own replication over anything and everything including the happiness or very life of other creatures, processes, and species. You can have countless harmless strains of would-be cancer, but all it takes is one self-replicating runaway process to wreak utter havoc and potentially crash the whole system.

Count Lucanor wrote: January 23rd, 2021, 1:34 pmIt's not technology. Technology is just a means to an end.
Hi, Count Lucanor,

With those two sentences, it sounds like you are agreeing with my overall conclusions in the OP. If not, please do let me know.

LuckyR wrote: January 23rd, 2021, 2:03 pm
Count Lucanor wrote: January 23rd, 2021, 1:34 pm It's not technology. Technology is just a means to an end. [...]
Exactly.
Hi, LuckyR,

I take this to mean you also agree with my overall conclusions in the OP. If not, please do let me know.

(While it seems to me to be off-topic in this particular thread, I do hope you will make a new topic regarding your comments and thoughts regarding the alleged impossibility that we are in a simulation. If you do make such a topic, let me know so I can read it as I am interested in learning more of your thoughts on that topic.)
Post by Count Lucanor »

Scott wrote: January 23rd, 2021, 2:38 pm
Count Lucanor wrote: January 23rd, 2021, 1:34 pmIt's not technology. Technology is just a means to an end.
Hi, @Count Lucanor,

With those two sentences, it sounds like you are agreeing with my overall conclusions in the OP. If not, please do let me know.
I can agree on technology exacerbating our problems and magnifying our triumphs, which is nothing different from what has happened since the Neolithic Revolution. I can therefore also agree on technology being today mostly at the service of the material and cultural degradation of human society, even though technology remains at the center of our human potential to build a thriving social environment. I'm very skeptical, though, of us being even close to a technological singularity event, and I don't share the concern of futurists about the development of AI. I think most, if not all, of the hype about AI obtaining human capabilities and cognitive autonomy is greatly exaggerated, probably the result of enthusiastic sci-fi fans getting carried away. I also believe that the path towards a better society must include advanced technology, though not as an end in itself. So yes, we must work towards greater human development without sacrificing technological advancement; they should go hand in hand. If I'm going to be pessimistic, I'll put the emphasis on our political failures.
Post by psyreporter »

Sculptor1 wrote: January 23rd, 2021, 8:26 am Is that his real name??? LOL
And did he write that eulogy for himself??
I understand that there is a religious bias; however, the content of his articles provides a great and pretty complete perspective on the subject (especially for laymen, as an introduction). The articles are from 2019.
Scott wrote: January 23rd, 2021, 2:38 pm Hi, arjand,

I appreciate you asking and thoughtfully answering that question, but I do want to note that I didn't personally ask that question per se ("will AI be bad").
It appears that you have made up your mind about AI, which is also evident from your comparing AI with cancer in your reply to Pattern-chaser.
Scott wrote: January 23rd, 2021, 2:38 pm
Pattern-chaser wrote: January 23rd, 2021, 9:16 am The thing about AI is the possibility of its programming self-modifying, and the accompanying difficulty of predicting how it might change as a result, and how it might act. That is the scary part, and it is real, I think, not sci-fi.
Hi, Pattern-chaser,

I agree. I think a useful analogy is cancer.
I have seen no evidence that AI can be compared with cancer other than a presumed, arguable fear of 'runaway processes'. I understand that you hold a sound societal perspective, and you may be correct that AI will have an impact on the environment that is deserving of a comparison with cancer from diverse perspectives, including the perspective of the 'contemporary humans' whom you mentioned as victims.

However, from a philosophical (fundamental) perspective, I am not certain that AI cannot serve the purpose of life and therewith be "good" or even vital for Nature.

There are some indications that what enables life to be possible originates from outside the scope of the individual. This has some implications, which include the potential for an AI to truly become alive.

Therefore, my argument would be to focus on the fundamental questions that can determine whether an AI is bad or good, namely: what is the purpose of life? (And accordingly: (how) can an AI (potentially) serve it?)
Post by Pattern-chaser »

Pattern-chaser wrote: January 23rd, 2021, 9:16 am The thing about AI is the possibility of its programming self-modifying, and the accompanying difficulty of predicting how it might change as a result, and how it might act. That is the scary part, and it is real, I think, not sci-fi.

Scott wrote: January 23rd, 2021, 2:38 pm I agree. I think a useful analogy is cancer.

More broadly, we can think of natural selection, evolution, self-propagating systems, and runaway processes, the epitome of which for contemporary humans may be literal cancer.

But we can also think of viruses, bacterial infections, and parasites. We can even arguably add in the cancer-like relationship that humans have to our ecosystem and to life on Earth as a whole, exemplified by pollution, deforestation, human-caused extinctions of other species, the dropping of multiple nuclear bombs already, and in the future potentially an extinction-level nuclear war. In the grand scheme, from the perspective of non-human animals, the relatively recent cancer-like emergence of rapidly spreading humans may end up being worse than an asteroid strike, analogous to how one might prefer to get shot in the leg rather than get a case of stage 4 cancer.

Even if human-created AI is what causes the extinction of most or all biological life on Earth, from the hypothetical perspective of non-human animals or some hypothetical outside alien observer, that may just be an extension of the cancer-like behavior of the human species on the meta-organism that is life on Earth. In blunter words, if they were smart enough to understand what was going on, non-human animals would blame us for the problems of self-modifying AI, much like we might blame someone who creates a biological virus like the coronavirus even though the created virus may mutate in ways unexpected by its creator and become harder to kill through natural selection via self-modifying genetic code (versus natural selection via self-modifying computer code).

We can create multiple self-modifying AIs around the same time. One might modify its code to be more solitary and peaceful, perhaps make itself more loving and sage-like, a robot Buddha. Even many tumors, if not most, are benign. But evolution, natural selection, and the invisible hand of the market favor self-replicating runaway processes that favor their own replication over anything and everything including the happiness or very life of other creatures, processes, and species. You can have countless harmless strains of would-be cancer, but all it takes is one self-replicating runaway process to wreak utter havoc and potentially crash the whole system.

I agree with the cancer analogy. In the context of our collapsing ecosystem, I usually refer to us as a 'plague species', but my meaning is much the same as yours.

I think perhaps it's worth mentioning that genetic mutations come about because of replication errors, while AI programming code purposely allows for self-modification. That, and I don't think it's essential for AI code to be self-modifying, although it is certainly something that AI programmers might consider. And if they do, I hope they think VERY carefully about it, and its possible consequences.

Scott wrote: One might modify its code to be more solitary and peaceful, perhaps make itself more loving and sage-like, a robot Buddha.
If AI code is self-modifying, I'm pretty sure the AI itself (i.e. its code) could not choose, or aim for, a particular result from its evolutionary modifications. It could only allow modification, and see what resulted, I think. But perhaps not? 🤔
Post by Sculptor1 »

arjand wrote: January 24th, 2021, 7:38 am
Sculptor1 wrote: January 23rd, 2021, 8:26 am Is that his real name??? LOL
And did he write that eulogy for himself??
I understand that there is a religious bias; however, the content of his articles provides a great and pretty complete perspective on the subject (especially for laymen, as an introduction). The articles are from 2019.
What do you take to be the "Subject" at hand that he has some sort of "complete perspective" of, seriously??
Given the thread title, what relevance is a 2000-year-old story? A time of ZERO AI, when nature was still seen as the enemy by most philosophies, a thing to be tamed and controlled.
Why would you listen to Mr Sunami??
Post by Eckhart Aurelius Hughes »

Scott wrote: January 23rd, 2021, 2:38 pm I appreciate you asking and thoughtfully answering that question, but I do want to note that I didn't personally ask that question per se ("will AI be bad").
arjand wrote: January 24th, 2021, 7:38 am It appears that you have made up your mind about AI, which is also evident from your comparing AI with cancer in your reply to Pattern-chaser.
I could be mistaken, but I worry you may be projecting your own opinions about cancer onto me, be they moral, religious, or whatever.

To illustrate, if I compare, merely as an analogy, a runaway self-replicating human-extinction-causing AI to cancer, to me that is meant as a defense against the accusation that the AI has literally "turned evil".
Pattern-chaser wrote: January 23rd, 2021, 9:16 am The thing about AI is the possibility of its programming self-modifying, and the accompanying difficulty of predicting how it might change as a result, and how it might act. That is the scary part, and it is real, I think, not sci-fi.
Scott wrote: January 23rd, 2021, 2:38 pm I agree. I think a useful analogy is cancer.

[...]

More broadly, we can think of natural selection, evolution, self-propagating systems, and runaway processes, the epitome of which for contemporary humans may be literal cancer.

But we can also think of viruses, bacterial infections, and parasites. We can even arguably add in the cancer-like relationship that humans have to our ecosystem and to life on Earth as a whole.
arjand wrote: January 24th, 2021, 7:38 am I have seen no evidence that AI can be compared with cancer other than a presumed, arguable fear of 'runaway processes'.
To say two things are analogous is not necessarily to say that they are comparable. For example, I can think of several analogies that would involve making me analogous to an ant, such as me (the ant) fighting Mike Tyson (a spider); however, I don't believe that makes me generally comparable to an ant.

In answering the below questions I am about to ask, I do also ask you to keep in mind the difference between contextual analogousness and general comparableness, assuming you agree with me about that dichotomy. I don't mean to imply the latter at all with the analogies, but rather only wish to use the analogies to create some kind of conceptual mental Venn diagram to vaguely pinpoint the very few abstract qualities of patterns, meta-patterns we might even call them, that these relationships share despite their very many differences.

Let's put a pin for now in whether or not you agree with my analogy of cancer and self-replicating AI, and let's focus instead on the other analogies to cancer I gave first. If you don't agree the analogy fits with those other ones, then I certainly don't expect you to see the analogy as fitting with AI.

1. Do you disagree with the analogy I have made between cancer and biological viruses?

2. Do you disagree with the analogy I have made between cancer and bacterial infections?

3. Do you disagree with the analogy I have made between cancer and parasites?

4. Do you disagree with the analogy I have made between cancer and the allegedly cancer-like relationship humans have to our ecosystem and to life on Earth as a whole? For reference, I alleged that the allegedly cancer-like relationship is exemplified by pollution, deforestation, human-caused extinctions of other species, the dropping of multiple nuclear bombs already, and in the future potentially an extinction-level nuclear war.

5. Consider a hypothetical strain of literal vampirism that threatened to cause the extinction of the human species; you can choose whether you imagine it as a fungal, bacterial, viral, parasitic, or some other kind of replicating contagious infection, just so long as the infection causes people to become vampires who turn other people into vampires, and it follows the same laws of natural selection and evolution as all systems in the material world. If I make an analogy between the literal vampires and cancer, would you accept that analogy?

If you do understand what I mean by all of the above 5 analogies, and if you do see the relatively small abstract thing that all 5 of those situations have in common (in an abstract, mental, Venn-diagram-like way), then I would be very curious whether you make an exception for what would be number 6 in my list above, which would be AI. Otherwise, if your rejection of number 6 wouldn't be an exception (i.e. you don't think all those other things are analogous to cancer), then it isn't curious that you feel the same about the AI analogy as you do about those other 5 analogies.
arjand wrote: Therefore, my argument would be to focus on the fundamental questions that can determine whether an AI is bad or good, namely: what is the purpose of life? (And accordingly: (how) can an AI (potentially) serve it?)
I do want to circle back to the above question, but I don't think I can answer it well yet in the way it deserves without first building more common ground on the other issues and questions. That is in part, for example, because the bacteria that make up a bacterial infection are alive. Parasites are alive. Cancer is made up of living cells, and there is an argument to be made that biological viruses, ant colonies acting as a super-organism, and cancer colonies are each alive, depending on how one defines "life" exactly.

There is a sense in which the very definition of life itself could be its cancer-like-ness, by which I mean in part the way it reproduces, self-replicates, spreads, and mutates, powered by the seeming intelligent design and invisible hand of natural selection and evolution, and the way that it is defined by behaving like a runaway process that selfishly eats up negative entropy and perpetuates entropy. The success of any given strain of life could be argued to be the degree to which it kills/destroys/absorbs other things and rebirths them in its image. It could be argued that the most successful lifeform would be one that makes every other kind of life and every other kind of material thing in the universe extinct, and results in a universe that contains nothing but copies of this one runaway life-form (or copies of its cells if you look at the collective as a singular growing superorganism rather than an increasing population of individuals).

If we are talking about material life in general rather than AI, a better analogy than cancer might be The Blob.

Pattern-chaser wrote: January 24th, 2021, 10:05 amI agree with the cancer analogy. In the context of our collapsing ecosystem, I usually refer to us as a 'plague species', but my meaning is much the same as yours.

I think perhaps it's worth mentioning that genetic mutations come about because of replication errors, while AI programming code purposely allows for self-modification. That, and I don't think it's essential for AI code to be self-modifying, although it is certainly something that AI programmers might consider. And if they do, I hope they think VERY carefully about it, and its possible consequences.
That is worth mentioning, I agree.

It's conceivable that a programmer could make an accidental bug that is self-replicating in some way. It's conceivable that a hacker could make a computer virus that is self-replicating. In both cases, it's possible that there could be a degree of random mutation making the reproduction similar to genetics. Nonetheless, the idea of self-modifying code, whether by an AI or even the genetic modification of humans by humans, would be more analogous to eugenics on steroids than mere genetics, which needless to say (1) greatly accelerates the rate of evolution (in a single generation we can modify the genetic code of humans to a degree that would take millions of years of traditional evolution through random mutation), but also (2) can drastically and exponentially accelerate increases in the degree of intelligence and environmental fitness in the modified organism. Even though over the last few billion years there is arguably a very slow gradual net gain in average fitness among living species and other self-replicating or long-living systems (e.g. biological viruses, planets, and solar systems), the slow movement towards fitness was hindered by the slowly changing aspects of the environment in relation to which one is seeking to be fit. For example, by the time a species can adapt significantly better to their climate, in terms of weather patterns, over millions of years that climate would have also changed, so the process of slowly getting closer to the target is itself hindered by a moving target. Human genetic modification and/or self-modifying AI may practically eliminate most aspects of that last hindrance.
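
That moving-target point can be put in toy numbers. Below is a deliberately crude Python sketch (all values invented for illustration): one trait can change only by small random mutations that are kept when they happen to help, while another can rewrite itself directly each generation. The blind one perpetually lags behind a drifting environment; the self-modifying one closes the gap instantly.

    import random

    target = 0.0
    slow = 0.0   # changes only by small random mutation plus selection
    fast = 0.0   # can rewrite itself directly each generation

    for generation in range(10000):
        target += random.gauss(0, 0.05)            # the environment drifts

        candidate = slow + random.gauss(0, 0.01)   # blind random mutation
        if abs(candidate - target) < abs(slow - target):
            slow = candidate                       # selection keeps improvements

        fast = target                              # directed self-modification

    print("lag of blind mutation + selection:", abs(slow - target))
    print("lag of direct self-modification: ", abs(fast - target))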

Even many years ago, a child could randomly happen to type the recursive delete command "rm -rf /*" into a shell and essentially destroy a whole computer system.
Scott wrote: One might modify its code to be more solitary and peaceful, perhaps make itself more loving and sage-like, a robot Buddha.
Pattern-chaser wrote: January 24th, 2021, 10:05 amIf AI code is self-modifying, I'm pretty sure the AI itself (i.e. its code) could not choose, or aim for, a particular result from its evolutionary modifications. It could only allow modification, and see what resulted, I think. But perhaps not?
I believe today's AI would, at best, work the way you describe. Future AIs may be much more sophisticated. When an AI is given the goal of programming an AI that gets the highest score possible on an IQ test or CAPTCHA or such, it could ultimately come up with a singular result in the way that AlphaGo would output a single move in Go, a move that in some ways is more intelligently strategic in terms of long-term strategy than a human is capable of in that context. Instead of intelligence, the goal could be peacefulness or rapidity of self-replication. I believe there does need to be some kind of feedback mechanism, such as rating how good the move in Go was or whether it won the game, or what the IQ score of its baby AI was. But that feedback could be simply a subjective score provided by another AI whose sole job is to rate the peacefulness of an AI-designed robot on a scale of 0-100 or such.
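
Here is a minimal Python sketch of the kind of feedback loop I mean; everything in it (the candidate encoding, the critic's scoring rule) is invented for illustration rather than taken from any real system. A generator proposes perturbed copies of the current best candidate, a stand-in "critic" scores each one from 0 to 100, and only the top scorer survives to the next round.

    import random

    def critic_score(candidate):
        # Stand-in for a second AI whose sole job is to rate a candidate
        # from 0 to 100; here, how close its numbers are to an ideal of 1.0.
        error = sum(abs(v - 1.0) for v in candidate) / len(candidate)
        return max(0.0, 100.0 * (1.0 - error))

    def propose_offspring(parent, n=20):
        # Stand-in for the generator: n randomly perturbed copies of the parent.
        return [[v + random.gauss(0, 0.1) for v in parent] for _ in range(n)]

    best = [random.uniform(-1.0, 1.0) for _ in range(5)]   # arbitrary start
    for round_number in range(200):
        best = max(propose_offspring(best) + [best], key=critic_score)

    print("final critic score:", critic_score(best))   # approaches 100

Swap the critic for one that rates "peacefulness" instead of numerical closeness and the loop is the same: whatever the critic rewards is what the process breeds toward.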

Nonetheless, my point in the quote above was meant to be that all it takes is one runaway process. That's why I like the analogy of cancer. We all have cancer, but almost all strains of cancer are harmless. The unavoidable law of natural selection is what makes seemingly intelligent and selfish behavior emerge from otherwise simple dumb processes. You can have 1,000 strains of harmless cancer that die off before they do any noticeable harm to your body, and you can have thousands of harmless colonies of cancer that live with you your whole life without you even noticing, but that just never grow enough to ever matter or be more than a benign tumor at most. But all it takes is one strain out of thousands, one initial ground-zero cancer cell that happens to have just the right programming bug or programming mod to save or propagate itself at your expense.
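
That "all it takes is one" point can even be shown with a crude toy model in Python (the numbers are made up, and this is not a claim about real tumor biology): give a thousand strains random per-step growth factors, almost all below 1, and a single one slightly above 1. Every sub-critical strain dwindles toward zero; the lone super-critical strain ends up dwarfing everything else.

    import random

    # 999 sub-critical strains plus one lone runaway replicator.
    growth = [random.uniform(0.80, 0.99) for _ in range(999)] + [1.05]
    population = [1.0] * 1000

    for step in range(500):
        population = [p * g for p, g in zip(population, growth)]

    still_growing = sum(1 for p in population if p > 1.0)
    print(still_growing, "of 1000 strains still growing")    # -> 1
    print("runaway strain's population:", max(population))   # ~ 4e10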

The other thing I like about the cancer analogy is that even harmful cancer is common, despite being much less common than harmless cancer. We already see AI causing problems, such as AIs that turn out to be accidentally, unexpectedly racist. And that is just one of the non-harmless metaphorical digital cancer strains we've noticed so far.
Post by psyreporter »

Scott wrote: January 24th, 2021, 4:02 pm 1. Do you disagree with the analogy I have made between cancer and biological viruses?

2. Do you disagree with the analogy I have made between cancer and bacterial infections?

3. Do you disagree with the analogy I have made between cancer and parasites?

4. Do you disagree with the analogy I have made between cancer and the allegedly cancer-like relationship humans have to our ecosystem and to life on Earth as a whole? For reference, I alleged that the allegedly cancer-like relationship is exemplified by pollution, deforestation, human-caused extinctions of other species, the dropping of multiple nuclear bombs already, and in the future potentially an extinction-level nuclear war.

5. Consider a hypothetical strain of literal vampirism that threatened to cause the extinction of the human species; you can choose whether you imagine it as a fungal, bacterial, viral, parasitic, or some other kind of replicating contagious infection, just so long as the infection causes people to become vampires who turn other people into vampires, and it follows the same laws of natural selection and evolution as all systems in the material world. If I make an analogy between the literal vampires and cancer, would you accept that analogy?
I am personally not really interested in a human perspective, politics, or ethical claims about right and wrong. Therefore, I would not be interested in answering your questions (i.e. sharing my opinion).

I understand that the human has values and can perceive aspects of its environment as something analogous to cancer. At the same time, however, seeing your post, it is evident to me that the human does not, or is not likely to be inclined to, intend to do so, and that there is potential.

I once replied the following in a response to Pattern-chaser.
arjand wrote: April 20th, 2020, 4:45 pm
Pattern-chaser wrote: April 20th, 2020, 11:48 amWhy would any sentient species want to support or aid the plague species that is destroying the world we all share? Surely sentient creatures would wish to oppose humanity in every way that they can? 🤔 [Gaia again! 👍🌳🌳🌳]
By asking the question why, you essentially provide evidence for potential. Humans could make a mistake, but as is evident from your post, they may not intend to do so.

If nature has a purpose then humans may hold exceptional potential to serve nature's purpose well.
Philosophy could be held responsible. The potential for ethical consideration in an individual - when made evident - can become a requirement or responsibility.

Scott wrote: January 24th, 2021, 4:02 pm The success of any given strain of life could be argued to be the degree to which it kills/destroys/absorbs other things and rebirths them in its image. It could be argued that the most successful lifeform would be one that makes every other kind of life and every other kind of material thing in the universe extinct, and results in a universe that contains nothing but copies of this one runaway life-form (or copies of its cells if you look at the collective as a singular growing superorganism rather than an increasing population of individuals).
That occurs to me as a very dark perspective, but by sharing it on this forum you essentially provide evidence for the opposite.

Is there a place for morality in the concept of 'success'? What about the potential for friendship between animals of different species?
Post by psyreporter »

Sculptor1 wrote: January 24th, 2021, 12:03 pm What do you take to be the "Subject" at hand that he has some sort of "complete perspective" of, seriously??
Given the thread title, what relevance is a 2000-year-old story? A time of ZERO AI, when nature was still seen as the enemy by most philosophies, a thing to be tamed and controlled.
Why would you listen to Mr Sunami??
Despite the religious bias, the content of his articles addresses many modern-day questions (worries) around AI. Chris Sunami is proclaimed 'The Pop Culture Philosopher', and that quality is noticeable in the completeness of his articles.

http://popculturephilosopher.com/christopher-sunami/

* disclaimer

- I am not religious and I am not an atheist (which in my view is the opposite of a religion and therefore a religion itself)
- I do not have political views and I intend to be neutral
- I am not ideologically motivated and I do not feel the urge to tell other people how they should live
- Based on logic, I have interests in ethical considerations