Debate with ChatGPT

Sy Borg
Site Admin
Posts: 15154
Joined: December 16th, 2013, 9:05 pm

Re: Debate with ChatGPT

Post by Sy Borg »

Count Lucanor wrote: February 1st, 2023, 11:08 pm
Sy Borg wrote: January 31st, 2023, 10:46 pm
Count Lucanor wrote: January 31st, 2023, 10:20 pm
Sy Borg wrote: January 30th, 2023, 4:28 pm No, some people simply say what they think is most likely, whether that works for them or not. If you'd been paying attention, you would know that my main "ideology" is pro-animal, not pro-AI, as you keep wrongly claiming. No, I just recognise that the emergence of AI is part of the patterns of change and evolution on Earth - the planet's evolution from a spheroid of molten basalt to today's much more complex geological, chemical and biological milieu.

Yet AI seems different, being mineral rather than animal or vegetable, a new phase of the broader evolution (small "e"). I'm not sure if AI themselves are the entities that are emerging or if they are a part of a larger emergence. Possibly both.
I disagree because I can't see AI as a natural occurrence independent of human control. It does not run on natural processes, and it has nothing to do with biological evolution, nor with any vegetable or mineral domain.
Based on your view, humans and their technology are not part of the Earth's journey in time. I appreciate the practicalities of referring to "humanity" and "nature" as separate domains, but, in terms of existence, humans and their creations are as much a part of nature as geology, flora and (other) fauna.
You know where I'm coming from and I know where you're going. Humans are part of the processes of nature, undoubtedly, but the products of human civilization, even though they are made from natural resources, are not. The differences between the International Space Station and anything produced by any other organism on Earth demand that we consider them as parts of two different domains. "Earth's journey in time" sounds like a beautiful literary description, but it would be misleading if we used it to describe what actually happens. If science and philosophy didn't make use of practicalities, then just-so stories are all we would have. Classifications are important, then: they help us better map the complex relations among all things, and so we know that we have to put a bug closer to a fish than to a rock, and a computer within the domain of non-living things rather than within the domain of living things. Living things evolve one way, non-living things another.
Classifications are needed for communication, but the map is invariably mistaken for the territory. This was Feynman's point when he said that you can know the name of a species of bird in ten languages and still know nothing about it - that one should focus on attributes rather than categories. Focusing on attributes naturally distinguishes between bugs, fishes and rocks, and also makes their similarities clearer. Our understanding is better within categories than between them, e.g. the interactions between two organisms as opposed to the interactions between biology and geology. By "othering" different domains of matter, we tend to filter them out as background.

Ultimately, what we have are numerous interlocking systems that comprise the Earth today. Some of those new systems are the ISS, skyscrapers, AI and other human creations, and these are no less part of the Earth than termite mounds or beehives. Because these systems have emerged very quickly, the rest of nature has not had much chance to adapt to the new environments, with only select species managing to adapt so far, such as rats, mice and roaches, while some species, like dogs, cats, horses, cows, sheep, pigs and goats, are naturally compatible, as their conservation status shows.

Count Lucanor wrote: February 1st, 2023, 11:08 pm
Sy Borg wrote: January 31st, 2023, 10:46 pm As such, humans and their tools undergo their own evolutions like everything else, living or otherwise. The evolution of human tools is a remarkable journey, noting that humans sense tools as being an extension of themselves (Miller et al). Everything on Earth evolves; it's just that not all of it evolves according to natural selection. However, rest assured, there will still be other selection pressures - and that is what matters. The labels mean nothing, it's the phenomena that matter, and there is, and will continue to be, either the evolution or extinction of everything on Earth. That includes AI.
Tools and instruments owe their existence to their creators. If humans stopped existing, they too would cease to exist as tools. Like many other products of human labor, they become alienated from their producers and appear to have a life of their own, but in truth they cannot evolve independently of humans. Interestingly, humans now cannot evolve independently of technology.
That situation will change. Already we see that huge companies are hiring far fewer human employees. In time, that number will shrink, and shrink again, as AI takes over ever more functions. There will probably come a time when AIs are making better CEOs than humans, and desired by shareholders. At that point, if humans died out, the intelligent tech would still be able to implement corporate and strategic plans, but it would be rendered pointless.

However, as human brains and AI meld (Neuralink et al), AI's progression will be much faster. It will become ever more essential until it is necessary and pivotal, and then ultimately it will be in control, because those with more sophisticated AI integrated with their brains will have a competitive advantage.

If we move further into the future, ever more of the brain will be mapped with AI's help, and increasingly vulnerable biological body and brain parts will be replaced by durable, superior synthetic parts, allowing for greater strength, speed and more versatile senses.

Count Lucanor wrote: February 1st, 2023, 11:08 pm
Sy Borg wrote: January 31st, 2023, 10:46 pm Again, you miss the point. We all realise that ChatGPT is not human so there's no need to labour the point. However, AI has been imbued with some humanlike attributes.
Many objects since ancient times have been imbued with some humanlike attributes. There's nothing special about AI in that sense.
Millions would disagree. I've never been able to interact with a non-human with anywhere near that level of sophistication. Most times, it doesn't go much beyond "unexpected item in bagging area" or nagging at you to "Please take your items".

Count Lucanor wrote: February 1st, 2023, 11:08 pm
Sy Borg wrote: January 31st, 2023, 10:46 pm I also note that all any of us can do is generate responses based on our training. You draw a hard line between humans and their creations which, again, is practical but not philosophical. It misses the point of an emerging phenomenon, regardless of how one labels its attributes.
The parallels that you take for granted between humans and machines, such as the possibility of "training", are precisely the ones in dispute. Such figurative use of the concepts confuses philosophy with literature. In a literal sense, a machine cannot be trained. Technology is not a natural phenomenon, it is not autonomous and nothing emerges from it independently of human intervention.
Of course technology is a natural phenomenon. Technological items are as much structures created by the planet Earth as termite mounds and beaver dams. These tech items are very simple items compared with the extremely complex entities the Earth's been generating for aeons, but they are becoming more complex.

Forget the categories - look at the broader sweep of the phenomena - what's been happening long-term.

Count Lucanor wrote: February 1st, 2023, 11:08 pm
Sy Borg wrote: January 31st, 2023, 10:46 pm
Count Lucanor wrote: January 31st, 2023, 10:20 pm
Sy Borg wrote: January 30th, 2023, 4:28 pm AI does not need to experience biological sensations.
It would be a marvelous feat that it had any experience at all.
Other than potential ethical situations, this is unimportant. Note that it can be imbued with senses.
No, it can be imbued with things that simulate the senses.
Semantics. If you were being chased by killer robots, at some point you'd say, "Dammit, it's seen us!" as opposed to, "Dammit, it is simulating seeing us!".

If something detects light and responds to that light in an ordered way, then it's seeing. Again, this seems like just a matter of words. We can tell what's happening, whatever label is being applied.

Count Lucanor wrote: February 1st, 2023, 11:08 pm
Sy Borg wrote: January 31st, 2023, 10:46 pm Why should that matter? AI makes autonomous decisions without humans present, and humans only know in general terms what the machine decides. The more sophisticated AI becomes, the less humans will be able to comprehend the basis of its decisions. If AI CEOs are found to out-perform humans, navigating complexity that humans cannot keep in their brains, I wonder who shareholders will want running their companies?
By definition, AI cannot make decisions, because decisions imply intentions, and intentions imply desire and interest, things that a machine cannot have. ChatGPT does not have any of these things. Don't take my word for it:

As a language model AI, I do not have personal interests or emotions. I am designed to assist and provide information to the best of my abilities based on the data and text I was trained on.

I do not have intentions or motivations as I am an artificial intelligence language model created by OpenAI and my purpose is to assist and generate human-like text based on the input I receive and the data I was trained on.
When asked about decisions...

In the case of AI systems like me, I make decisions based on patterns in the data I was trained on, but I do not have personal intentions or motivations.


The patterns are fed by humans, directly or indirectly, and this is the basis of its "decisions", which are mere responses to user inputs.
The most important part of the ChatGPT quote has been highlighted. The rest is just detail.
Count Lucanor
Posts: 2318
Joined: May 6th, 2017, 5:08 pm
Favorite Philosopher: Umberto Eco
Location: Panama


Post by Count Lucanor »

Sy Borg wrote: February 2nd, 2023, 4:26 pm Classifications are needed for communication, but the map is invariably mistaken for the territory. This was Feynman's point when he said that you can know the name of a species of bird in ten languages and still know nothing about it - that one should focus on attributes rather than categories. Focusing on attributes naturally distinguishes between bugs, fishes and rocks, and also makes their similarities clearer. Our understanding is better within categories than between them, e.g. the interactions between two organisms as opposed to the interactions between biology and geology. By "othering" different domains of matter, we tend to filter them out as background.

Ultimately, what we have are numerous interlocking systems that comprise the Earth today. Some of those new systems are the ISS, skyscrapers, AI and other human creations, and these are no less part of the Earth than termite mounds or beehives. Because these systems have emerged very quickly, the rest of nature has not had much chance to adapt to the new environments, with only select species managing to adapt so far, such as rats, mice and roaches, while some species, like dogs, cats, horses, cows, sheep, pigs and goats, are naturally compatible, as their conservation status shows.
Sy Borg wrote: February 2nd, 2023, 4:26 pm
Of course technology is a natural phenomenon. Technological items are as much structures created by the planet Earth as termite mounds and beaver dams. These tech items are very simple items compared with the extremely complex entities the Earth's been generating for aeons, but they are becoming more complex.
The value of classifications goes well beyond pure communication and descriptive maps. While, in general, classification criteria are arbitrary, science and philosophy depend entirely on them to be rigorous. For both scientific and philosophical practice to work systematically, knowledge must be structured around the REAL relationships between the elements, as well as the relationships between the parts and the whole. That's the only way individual attributes make any sense. This applies to categories themselves, as they are nested and form hierarchies, the broader ones encompassing the more specific. Disregarding some particular domains to emphasize Earth as the only domain that matters is a confusion from both the scientific and philosophical points of view. It is a category too broad to actually explain anything, but at the same time too narrow, as the Earth system is part of other systems, and if we wanted to find the one category into which everything ultimately falls, it would be "the universe" or "everything that exists". Saying that everything is natural becomes as useful and informative as saying that everything exists. Distinctions between nature and culture, between nature and technology, or between living and non-living systems, make much more sense.
Sy Borg wrote: February 2nd, 2023, 4:26 pm That situation will change. Already we see that huge companies are hiring far fewer human employees. In time, that number will shrink, and shrink again, as AI takes over ever more functions. There will probably come a time when AIs are making better CEOs than humans, and desired by shareholders. At that point, if humans died out, the intelligent tech would still be able to implement corporate and strategic plans, but it would be rendered pointless.

However, as human brains and AI meld (Neuralink et al), AI's progression will be much faster, and it will become ever more essential until it is necessary and pivotal, and then ultimately it will be in control because those with more sophisticated AI integrated with their brains will have a competitive advantage.

If we move further into the future, ever more of the brain will be mapped with AI's help, and increasingly vulnerable biological body and brain parts will be replaced by durable, superior synthetic parts, allowing for greater strength, speed and more versatile senses.
That looks a lot like prophetic language, based on extrapolations from general trends that do not warrant the predicted event. So yes, the automation trend that started with the first computers will likely keep growing and humans will be replaced in many tasks, just as they were replaced when engines were invented. But computers as better CEOs than humans? Making plans? No, for this we need thinking and feeling agents. Nothing that we can see now, no trend, is any indication that this is possible or will happen in the future. This is more science fiction than an insightful assessment of our technological culture.
The wise are instructed by reason, average minds by experience, the stupid by necessity and the brute by instinct.
― Marcus Tullius Cicero

Post by Sy Borg »

Count Lucanor wrote: February 4th, 2023, 11:09 pm
Sy Borg wrote: February 2nd, 2023, 4:26 pm Classifications are needed for communication, but the map is invariably mistaken for the territory. This was Feynman's point when he said that you can know the name of a species of bird in ten languages and still know nothing about it - that one should focus on attributes rather than categories. Focusing on attributes naturally distinguishes between bugs, fishes and rocks, and also makes their similarities clearer. Our understanding is better within categories than between them, e.g. the interactions between two organisms as opposed to the interactions between biology and geology. By "othering" different domains of matter, we tend to filter them out as background.

Ultimately, what we have are numerous interlocking systems that comprise the Earth today. Some of those new systems are the ISS, skyscrapers, AI and other human creations, and these are no less part of the Earth than termite mounds or beehives. Because these systems have emerged very quickly, the rest of nature has not had much chance to adapt to the new environments, with only select species managing to adapt so far, such as rats, mice and roaches, while some species, like dogs, cats, horses, cows, sheep, pigs and goats, are naturally compatible, as their conservation status shows.
Sy Borg wrote: February 2nd, 2023, 4:26 pm
Of course technology is a natural phenomenon. Technological items are as much structures created by the planet Earth as termite mounds and beaver dams. These tech items are very simple items compared with the extremely complex entities the Earth's been generating for aeons, but they are becoming more complex.
The value of classifications goes well beyond pure communication and descriptive maps. While, in general, classification criteria are arbitrary, science and philosophy depend entirely on them to be rigorous. For both scientific and philosophical practice to work systematically, knowledge must be structured around the REAL relationships between the elements, as well as the relationships between the parts and the whole. That's the only way individual attributes make any sense. This applies to categories themselves, as they are nested and form hierarchies, the broader ones encompassing the more specific. Disregarding some particular domains to emphasize Earth as the only domain that matters is a confusion from both the scientific and philosophical points of view. It is a category too broad to actually explain anything, but at the same time too narrow, as the Earth system is part of other systems, and if we wanted to find the one category into which everything ultimately falls, it would be "the universe" or "everything that exists". Saying that everything is natural becomes as useful and informative as saying that everything exists. Distinctions between nature and culture, between nature and technology, or between living and non-living systems, make much more sense.
Seeing commonalities does not mean dismissing obvious differences.

There is no confusion in seeing everything as part of the Earth. In fact, it's absurd to hold any other view, unless one extrapolates to the Sun/solar system or galaxy, etc. However, any denial that humanity is simply a sub-system of the Earth will invariably be based on quasi-theistic notions of human specialness, not on logic.

The Earth has always been evolving and humanity is part of that evolution. AI is obviously part of that.

Count Lucanor wrote: February 4th, 2023, 11:09 pm
Sy Borg wrote: February 2nd, 2023, 4:26 pm That situation will change. Already we see that huge companies are hiring far fewer human employees. In time, that number will shrink, and shrink again, as AI takes over ever more functions. There will probably come a time when AIs are making better CEOs than humans, and desired by shareholders. At that point, if humans died out, the intelligent tech would still be able to implement corporate and strategic plans, but it would be rendered pointless.

However, as human brains and AI meld (Neuralink et al), AI's progression will be much faster, and it will become ever more essential until it is necessary and pivotal, and then ultimately it will be in control because those with more sophisticated AI integrated with their brains will have a competitive advantage.

If we move further into the future, ever more of the brain will be mapped with AI's help, and increasingly vulnerable biological body and brain parts will be replaced by durable, superior synthetic parts, allowing for greater strength, speed and more versatile senses.
That looks a lot like prophetic language, based on extrapolations from general trends that do not warrant the predicted event. So yes, the automation trend that started with the first computers will likely keep growing and humans will be replaced in many tasks, just as they were replaced when engines were invented. But computers as better CEOs than humans? Making plans? No, for this we need thinking and feeling agents. Nothing that we can see now, no trend, is any indication that this is possible or will happen in the future. This is more science fiction than an insightful assessment of our technological culture.
Prophetic, my fat @rse.

The extrapolations are conservative. It's easy to see how AI can replace CEOs. Increasingly CEOs will rely on AI to deal with the complexity, which is already the case to a limited extent. As AI is refined, ever more decisions will be based on AI estimations. In time, human CEOs will become largely superfluous, only needed in special cases. You will not be able to imagine a non-apocalyptic situation where this process will be halted.

Your assumption about thinking and feeling agents is exactly that, an assumption, and it's a weak assumption too, being based on a sample size of one as well as denying any chance of significant emergences.

Post by Count Lucanor »

Sy Borg wrote: February 5th, 2023, 2:40 am
Seeing commonalities does not mean dismissing obvious differences.
I have pointed to the obvious differences between living and non-living systems, between nature and technology. They are key differences.
Sy Borg wrote: February 5th, 2023, 2:40 am There is no confusion in seeing everything as part of the Earth.
No, it is a confusion not to see the relationship between the parts and the whole, putting all emphasis on the whole.
Sy Borg wrote: February 5th, 2023, 2:40 am However, any denial that humanity is simply a sub-system of the Earth will invariably be based on quasi-theistic notions of human specialness, not on logic.
Certainly, humanity is a sub-system of the Earth, no one is denying that, but it is a different thing to assert that humanity is a sub-system entirely determined by an essential, necessary connection with the broader system, leaving out contingency. The biggest lesson of evolutionary theory is that everything could have turned out entirely differently, including the possibility that humans might not have emerged. Nothing is more quasi-theistic than the notion that everything that happens on Earth was meant to be, that it is a closed system determined by its initial conditions. Culture and technology are then not understood by their own dynamics, as processes with a high degree of autonomy, contingency and unpredictability, necessarily related to human action, human interests and choices, but as just another "natural" deterministic system.
Sy Borg wrote: February 5th, 2023, 2:40 am The Earth has always been evolving and humanity is part of that evolution. AI is obviously part of that.
But Earth could have evolved in another way, living things could have evolved differently, humans could have had a different evolutionary path and perhaps could have not even existed. Human history including our technology could have been completely different. AI certainly could go many ways, but it is not possible to predict the future based only on possibilities which are determined by unpredictable human actions.
Sy Borg wrote: February 5th, 2023, 2:40 am The extrapolations are conservative. It's easy to see how AI can replace CEOs. Increasingly CEOs will rely on AI to deal with the complexity, which is already the case to a limited extent. As AI is refined, ever more decisions will be based on AI estimations. In time, human CEOs will become largely superfluous, only needed in special cases. You will not be able to imagine a non-apocalyptic situation where this process will be halted.
You're asserting that this will happen, but that futuristic view is mere speculation, not supported by the current state of technology, which has yet to produce a thinking, sentient device (not just a processor of symbols). I'm not one of those Big Tech CEO worshippers, but the idea that their actions and roles in organizations are reducible to algorithmic processes is too wild to be admitted as even plausible. Many tasks will be delegated to automated processes, as always, but the technology will meet its threshold at the point where thinking and feeling are required.
Sy Borg wrote: February 5th, 2023, 2:40 am Your assumption about thinking and feeling agents is exactly that, an assumption, and it's a weak assumption too, being based on a sample size of one as well as denying any chance of significant emergences.
If my sample size is one, yours is zero.

Post by Sy Borg »

Count Lucanor wrote: February 5th, 2023, 4:42 pm
Sy Borg wrote: February 5th, 2023, 2:40 am However, any denial that humanity is simply a sub-system of the Earth will invariably be based on quasi-theistic notions of human specialness, not on logic.
Certainly, humanity is a sub-system of the Earth, no one is denying that, but it is a different thing to assert that humanity is a sub-system entirely determined by an essential, necessary connection with the broader system, leaving out contingency. The biggest lesson of evolutionary theory is that everything could have turned out entirely differently, including the possibility that humans might not have emerged. Nothing is more quasi-theistic than the notion that everything that happens on Earth was meant to be, that it is a closed system determined by its initial conditions. Culture and technology are then not understood by their own dynamics, as processes with a high degree of autonomy, contingency and unpredictability, necessarily related to human action, human interests and choices, but as just another "natural" deterministic system.
Sy Borg wrote: February 5th, 2023, 2:40 am The Earth has always been evolving and humanity is part of that evolution. AI is obviously part of that.
But Earth could have evolved in another way, living things could have evolved differently, humans could have had a different evolutionary path and perhaps could have not even existed. Human history including our technology could have been completely different. AI certainly could go many ways, but it is not possible to predict the future based only on possibilities which are determined by unpredictable human actions.
That is a red herring, Count. Everyone's life could have gone a different way.

What needs to be understood is that humanity is not only a sub-system of the Earth, but the only way the biosphere has a chance of continued existence as the Sun becomes hotter. You figure that I treat the Earth like a deity that has plans, but you continue to forget that humans are part of the Earth's system. Does the Earth want its parts to survive its demise? Obviously. Does that mean the Earth is Yahweh in drag? No. All of Earth's life tries to survive up to procreation (at least). And humans, who, I repeat, are part of the Earth, want to survive and to continue life on other worlds once Earth is no longer habitable.

Count Lucanor wrote: February 5th, 2023, 4:42 pm
Sy Borg wrote: February 5th, 2023, 2:40 am The extrapolations are conservative. It's easy to see how AI can replace CEOs. Increasingly CEOs will rely on AI to deal with the complexity, which is already the case to a limited extent. As AI is refined, ever more decisions will be based on AI estimations. In time, human CEOs will become largely superfluous, only needed in special cases. You will not be able to imagine a non-apocalyptic situation where this process will be halted.
You're asserting that this will happen, but that futuristic view is mere speculation, not supported by the current state of technology, which has yet to produce a thinking, sentient device (not just a processor of symbols). I'm not one of those Big Tech CEO worshippers, but the idea that their actions and roles in organizations are reducible to algorithmic processes is too wild to be admitted as even plausible. Many tasks will be delegated to automated processes, as always, but the technology will meet its threshold at the point where thinking and feeling are required.
Sy Borg wrote: February 5th, 2023, 2:40 am Your assumption about thinking and feeling agents is exactly that, an assumption, and it's a weak assumption too, being based on a sample size of one as well as denying any chance of significant emergences.
If my sample size is one, yours is zero.
The difference is that you make dogmatic claims while I consider what I think is most likely. I don't say all of this is certain. One can consider possibilities without actual samples, but one cannot validly hold dogmatic, unalterable beliefs based on a sample size of one.

Post by Count Lucanor »

Sy Borg wrote: February 5th, 2023, 6:12 pm Everyone's life could have gone a different way.
And that introduces contingency into the whole scheme and a more open future than your metaphysical naturalism allows.
Sy Borg wrote: February 5th, 2023, 6:12 pm What needs to be understood is that humanity is not only a sub-system of the Earth, but the only way the biosphere has a chance of continued existence as the Sun becomes hotter. You figure that I treat the Earth like a deity that has plans, but you continue to forget that humans are part of the Earth's system.
Humans are a contingency on Earth, and the Earth will continue existing as a system even when humans are not around. They are a part, but not a necessary, essential part. The emergence of culture and technology is then not a necessary natural development, and, being emergent, they are not reducible to the same spontaneous processes of nature as other phenomena.
Sy Borg wrote: February 5th, 2023, 6:12 pm
Count Lucanor wrote: If my sample size is one, yours is zero.
The difference is that you make dogmatic claims while I consider what I think is most likely. I don't say all of this is certain. One can consider possibilities without actual samples, but one cannot validly hold dogmatic, unalterable beliefs based on a sample size of one.
No, you don't use the "most likely" approach. You assert firmly what WILL happen and admit (dogmatically) no dissent:

That situation WILL change. Already we see that huge companies are hiring...

In time, that number WILL shrink, and shrink again...

AI's progression WILL be much faster, and it WILL become ever more essential...

then ultimately it WILL be in control because those with more sophisticated AI integrated with their brains WILL have a competitive advantage...

If we move further into the future, ever more of the brain WILL be mapped with AI's help...

increasingly vulnerable biological body and brain parts WILL be replaced by durable, superior synthetic parts...


These are just a few statements and they are all in pure prophetic language, based on a sample size of zero.

Post by Sy Borg »

Count Lucanor wrote: February 5th, 2023, 9:54 pm
Sy Borg wrote: February 5th, 2023, 6:12 pm Everyone's life could have gone a different way.
And that introduces contingency into the whole scheme and a more open future than your metaphysical naturalism allows.
Sy Borg wrote: February 5th, 2023, 6:12 pm What needs to be understood is that humanity is not only a sub-system of Earth, but the only way the biosphere has a chance of continued existence as the Sun becomes hotter. You figure that I treat Earth like a deity that has plans but continue to forget that humans are part of the Earth's system.
Humans are a contingency on Earth, and the Earth will continue existing as a system even when humans are not around. They are a part, but not a necessary, essential part. The emergence of culture and technology is therefore not a necessary natural development, and, being emergent, they are not reducible to the same spontaneous processes of nature as other phenomena.
No, humans are necessary for Earth to spread out and continue its life. How do I know that's what the Earth wants? Because that is what humanity wants, and humanity is the most intelligent and aware part of the Earth. We are it. It is us (and more).

It's not a matter of the basaltic rock deciding it wants to live, it's life itself. You say the Earth does not need humans but, aside from providing possible future prospects, if a major asteroid was heading this way, humans could prove useful. Fact is, the Earth is a mass of multiple intertwined systems all in various states of disequilibrium. The Earth, like any large cosmic body, tends towards equilibria. Of course, once one system is balanced, another system disturbs it. If the Sun was not going to eventually destroy the Earth, it would have taken billions of years for the Earth to finally equalise, in which case it would become dormant.

So, yes, the Earth needed humans, along with everything else it sprouted. If the imbalance did not exist to make humans and their spread possible, then it wouldn't have happened. And the Earth clearly needs AI because humanity needs AI.


Count Lucanor wrote: February 5th, 2023, 9:54 pm
Sy Borg wrote: February 5th, 2023, 6:12 pm
Count Lucanor wrote: If my sample size is one, yours is zero.
The difference is that you make dogmatic claims while I consider what I think is most likely. I don't say all of this is certain. One can consider possibilities without actual samples, but one cannot validly hold dogmatic, unalterable beliefs based on a sample size of one.
No, you don't use the "most likely" approach. You assert firmly what WILL happen and admit (dogmatically) no dissent:

That situation WILL change. Already we see that huge companies are hiring...

In time, that number WILL shrink, and shrink again...

AI's progression WILL be much faster, and it WILL become ever more essential...

then ultimately it WILL be in control because those with more sophisticated AI integrated with their brains WILL have a competitive advantage...

If we move further into the future, ever more of the brain WILL be mapped with AI's help...

increasingly vulnerable biological body and brain parts WILL be replaced by durable, superior synthetic parts...


These are just a few statements and they are all in pure prophetic language, based on a sample size of zero.
These are just basic logic. They are inarguable, and experts are talking about them all the time because most are already happening to some extent. There is not a single even remotely controversial statement in that list. The unfortunate side of this is that I needed to say any of this to you, that these blatant and basic situations are not obvious to you.

By contrast, you assert that AI will never ever achieve consciousness, that it is completely impossible. Now THAT is dogma.

Re: Debate with ChatGPT

Post by Count Lucanor »

Sy Borg wrote: February 6th, 2023, 4:49 am
Count Lucanor wrote: February 5th, 2023, 9:54 pm
Sy Borg wrote: February 5th, 2023, 6:12 pm Everyone's life could have gone a different way.
And that introduces contingency into the whole scheme and a more open future than your metaphysical naturalism allows.
Sy Borg wrote: February 5th, 2023, 6:12 pm What needs to be understood is that humanity is not only a sub-system of Earth, but the only way the biosphere has a chance of continued existence as the Sun becomes hotter. You figure that I treat Earth like a deity that has plans but continue to forget that humans are part of the Earth's system.
Humans are a contingency on Earth, and the Earth will continue existing as a system even when humans are not around. They are a part, but not a necessary, essential part. The emergence of culture and technology is therefore not a necessary natural development, and, being emergent, they are not reducible to the same spontaneous processes of nature as other phenomena.
No, humans are necessary for Earth to spread out and continue its life.
You were the one accusing me of anthropocentrism. Go figure!
Sy Borg wrote: February 6th, 2023, 4:49 am How do I know that's what the Earth wants? Because that is what humanity wants, and humanity is the most intelligent and aware part of the Earth. We are it. It is us (and more).

It's not a matter of the basaltic rock deciding it wants to live, it's life itself.
You keep saying what life wants, what the Earth wants, what nature wants... I can't help but see echoes of Schopenhauer here, consistent with the implied idealism of your assertions. Humanity is a generalization, an abstract concept. It doesn't actually need or want anything, even if the general concept of humanity comprises living beings that actually need or want something. Same for Earth, same for nature. Your view of nature is teleological: it must have some purpose. I disagree.
Sy Borg wrote: February 6th, 2023, 4:49 am You say the Earth does not need humans but, aside from providing possible future prospects, if a major asteroid was heading this way, humans could prove useful. Fact is, the Earth is a mass of multiple intertwined systems all in various states of disequilibrium. The Earth, like any large cosmic body, tends towards equilibria. Of course, once one system is balanced, another system disturbs it. If the Sun was not going to eventually destroy the Earth, it would have taken billions of years for the Earth to finally equalise, in which case it would become dormant.

So, yes, the Earth needed humans, along with everything else it sprouted. If the imbalance did not exist to make humans and their spread possible, then it wouldn't have happened. And the Earth clearly needs AI because humanity needs AI.
If that's not quasi-theistic and anthropocentric, I don't know what is. Nature, the universe, is what it is and humans call it equilibrium, but the universe does not need it, there's no teleological force asking for it. Equilibrium is correlated with entropy, disorder.
Sy Borg wrote: February 6th, 2023, 4:49 am
These are just basic logic. They are inarguable, and experts are talking about them all the time because most are already happening to some extent. There is not a single even remotely controversial statement in that list. The unfortunate side of this is that I needed to say any of this to you, that these blatant and basic situations are not obvious to you.
Talk about dogmatism! All of those statements are disputable and have nothing to do with "basic logic". They are assertions about the state of the world and human society in the future, and if you think we are on an unavoidable path towards that state because of "logic", that just means you have adopted a deterministic view of technology: the quasi-messianic view of the technological myth. I have heard of the "experts", mostly hi-tech gurus and AI enthusiasts driven by a fascination with science fiction literature.
Sy Borg wrote: February 6th, 2023, 4:49 am By contrast, you assert that AI will never ever achieve consciousness, that it is completely impossible. Now THAT is dogma.
No, I assert a fact: currently, AI technology has not achieved consciousness. And current AI is based on an assumption which, if proven false, implies that it is not replicating consciousness, so no matter how hard they work on the technicalities of that assumption, it will produce the same result. That was Searle's point with the Chinese Room experiment. It is possible that some day engineers will create a system that replicates consciousness, but it will require some other path of research, even if it involves some of what is currently developed as AI.

Re: Debate with ChatGPT

Post by Sy Borg »

Count Lucanor wrote: February 6th, 2023, 9:04 am
Sy Borg wrote: February 6th, 2023, 4:49 am
Count Lucanor wrote: February 5th, 2023, 9:54 pm
Sy Borg wrote: February 5th, 2023, 6:12 pm Everyone's life could have gone a different way.
And that introduces contingency into the whole scheme and a more open future than your metaphysical naturalism allows.
Sy Borg wrote: February 5th, 2023, 6:12 pm What needs to be understood is that humanity is not only a sub-system of Earth, but the only way the biosphere has a chance of continued existence as the Sun becomes hotter. You figure that I treat Earth like a deity that has plans but continue to forget that humans are part of the Earth's system.
Humans are a contingency on Earth, and it will continue existing as system Earth even when humans are not around. They are a part, but not a necessary, essential part. The emergence of culture and technology is then not a necessary natural development, and being emergent, they are not reducible to the same spontaneous processes of nature as any other phenomena.
No, humans are necessary for Earth to spread out and continue its life.
You were the one accusing me of anthropocentrism. Go figure!
Well, you are obviously anthropocentric. I watched you talk about human populations for pages and pages without once referring to ecosystems, extinctions and animal welfare. Not once, until I raised the issues. Don't be embarrassed, it's pretty typical for humans to disregard other life forms.

By contrast, I'm not saying anything contentious or profound. Just basic logic. If dogs and giraffes develop a space program, then I will say they are species necessary for Earth life to continue in the far future. However, humans are the only species capable of carrying Earthly DNA, information and materials to other worlds.# It has nothing to do with anthropocentrism. It's simply obvious.

# aside from microbes ejected in meteor collisions, but they have no control at all.


Count Lucanor wrote: February 6th, 2023, 9:04 am
Sy Borg wrote: February 6th, 2023, 4:49 am How do I know that's what the Earth wants? Because that is what humanity wants, and humanity is the most intelligent and aware part of the Earth. We are it. It is us (and more).

It's not a matter of the basaltic rock deciding it wants to live, it's life itself.
You keep saying what life wants, what the Earth wants, what nature wants... I can't help but see echoes of Schopenhauer here, consistent with the implied idealism of your assertions. Humanity is a generalization, an abstract concept. It doesn't actually need or want anything, even if the general concept of humanity comprises living beings that actually need or want something. Same for Earth, same for nature. Your view of nature is teleological: it must have some purpose. I disagree.
Generally, all life wants to survive and reproduce. This is what links all of humanity and, in fact, all of life. Clearly a drive shared by almost all of humanity will be reflected in the whole.

However, you are right to point out that humanity is not a monolithic entity. Humanity's collective responses to situations (eg. climate change) tend to be chaotic. This chaos can be expected from an immature emergence like modern humanity, which has only existed for an evolutionary blink of the eye.


Count Lucanor wrote: February 6th, 2023, 9:04 am
Sy Borg wrote: February 6th, 2023, 4:49 am You say the Earth does not need humans but, aside from providing possible future prospects, if a major asteroid was heading this way, humans could prove useful. Fact is, the Earth is a mass of multiple intertwined systems all in various states of disequilibrium. The Earth, like any large cosmic body, tends towards equilibria. Of course, once one system is balanced, another system disturbs it. If the Sun was not going to eventually destroy the Earth, it would have taken billions of years for the Earth to finally equalise, in which case it would become dormant.

So, yes, the Earth needed humans, along with everything else it sprouted. If the imbalance did not exist to make humans and their spread possible, then it wouldn't have happened. And the Earth clearly needs AI because humanity needs AI.
If that's not quasi-theistic and anthropocentric, I don't know what is. Nature, the universe, is what it is and humans call it equilibrium, but the universe does not need it, there's no teleological force asking for it. Equilibrium is correlated with entropy, disorder.
You need to consider how imbalances work in a system. Earth's actions shift in response to changing circumstances, causing disequilibria that drive responses from other systems.

Watch this video to better understand how these dynamics work. Life itself appears to be the result of disequilibrium: dissociated hydrogen ions on the surface caused by volcanism. Biology was seemingly the most efficient (low-energy) means of equalising charges on the Earth's surface. It's complicated. The professor describes it far better than I can.

This video is clearly not quasi-religious. If it is, then it would seem I am a theist. It just looks like hard science to me: the sciences of biochemistry and physics.


Count Lucanor wrote: February 6th, 2023, 9:04 am
Sy Borg wrote: February 6th, 2023, 4:49 am
These are just basic logic. They are inarguable, and experts are talking about them all the time because most are already happening to some extent. There is not a single even remotely controversial statement in that list. The unfortunate side of this is that I needed to say any of this to you, that these blatant and basic situations are not obvious to you.
Talk about dogmatism! All of those statements are disputable and have nothing to do with "basic logic". They are assertions about the state of the world and human society in the future, and if you think we are on an unavoidable path towards that state because of "logic", that just means you have adopted a deterministic view of technology: the quasi-messianic view of the technological myth. I have heard of the "experts", mostly hi-tech gurus and AI enthusiasts driven by a fascination with science fiction literature.
Your problem is that they ARE indisputable - just basic knowledge*. You gathered the quotes as content-free abstractions and never gave one of them a moment's actual consideration.

Let's check these "claims" and see if your denials hold any water:


Claim 1. Major companies already using ever more tech instead of human staff.

On what basis is this controversial? It's happening already.


Claim 2. That trend will continue.

On what basis is this controversial?


Claim 3. AI's progress will speed up and its functions will become more essential to society.

On what basis is this controversial? It's happening already, it's obvious.


Claim 4. People who are integrated with AI will have a competitive advantage over the unaltered.

Again, this is already happening. There is a significant gap in prospects between those with access to the internet and those without, especially in schools. The integration has started but it's not yet physical, except in Scandinavia and some East Asian countries where some people are using embedded chips.

It is only a dogmatic (and misguided) approach to materialism that prevents some people from appreciating that humans are already in the process of becoming cyborgs.


Claim 5: More of the human brain will be mapped.

On what basis is this controversial? It's in train already.


Claim 6: Vulnerable biological body and brain parts will be replaced by durable, superior synthetic parts

Again, it's already happening.

You failed to refer to the qualification I made about future prospects - "failing catastrophe". I already discussed at length (the obvious fact) that growth and development are likelihoods but never guaranteed. So the above notions are indeed basic logic.

They are akin to claiming that a Year 9 student will be attending Year 10 next year. Sure, the child might be murdered, suffer serious illness or brain damage. They might go wild, be kidnapped or be caught cheating. Maybe they will have to repeat a year. However, none of these unlikely events invalidates the claim - especially if one applies, as I did, the qualifier "failing catastrophe".

Please do not ignore qualifiers. They are provided for a reason, not to be ignored.

Count Lucanor wrote: February 6th, 2023, 9:04 am
Sy Borg wrote: February 6th, 2023, 4:49 am By contrast, you assert that AI will never ever achieve consciousness, that it is completely impossible. Now THAT is dogma.
No, I assert a fact: currently, AI technology has not achieved consciousness. And current AI is based on an assumption which, if proven false, implies that it is not replicating consciousness, so no matter how hard they work on the technicalities of that assumption, it will produce the same result. That was Searle's point with the Chinese Room experiment. It is possible that some day engineers will create a system that replicates consciousness, but it will require some other path of research, even if it involves some of what is currently developed as AI.
My toaster doesn't do ballet either. Why would you expect AI to do what it's not designed to do? AI is designed to process information, not to feel emotions, hunger, thirst, sexual urges, excretion needs, aches, itches and stings - as already discussed.

However, as AI is refined further and becomes more complex, it seems likely that emergences will eventually occur.*

I have already noted that any emergent machine consciousness is unlikely to be the same as biology's. AI has no need for the animal aspects of humanity. All it needs is the protruding tip of the human consciousness iceberg - abstractions.

AI could become conscious within this limited abstract realm, but we may not recognise it because so much of human consciousness is based on biological needs, and these aspects of consciousness do indeed seem likely to remain the exclusive domain of biology. There would seem to be little point in giving synthetic entities biological concerns.

So, as stated, humans and machines will continue to merge*.





* failing catastrophe

Re: Debate with ChatGPT

Post by Count Lucanor »

Sy Borg wrote: February 6th, 2023, 5:52 pm Well, you are obviously anthropocentric. I watched you talk about human populations for pages and pages without once referring to ecosystems, extinctions and animal welfare. Not once, until I raised the issues. Don't be embarrassed, it's pretty typical for humans to disregard other life forms.
That is your labeling, which apparently carries a negative connotation, so I'm surely entitled to ask why you label me with a tag you could put on yourself. I'm not moved one inch by whatever judgement you make about me on that matter, and I'm pretty confident I don't need to be embarrassed about anything.
Sy Borg wrote: February 6th, 2023, 5:52 pm By contrast, I'm not saying anything contentious or profound. Just basic logic. If dogs and giraffes develop a space program, then I will say they are species necessary for Earth life to continue in the far future. However, humans are the only species capable of carrying Earthly DNA, information and materials to other worlds.# It has nothing to do with anthropocentrism. It's simply obvious.
That's not basic logic, but bad logic. No historical contingency can become a necessity of the future, which will still be open and determined by the unpredictable relation between chance and necessity. There's no grand teleological cause behind it, it's just what happens, but yours is a futuristic narrative of predetermined fate, of predestination (replacing god with nature), which implies that if we rewound the tape of natural history, everything would replay more or less the same, even perhaps the asteroid making the Chicxulub crater exactly 65 million years ago. Unlike you, I don't believe humans are necessary for life: it did well without us before we showed up, and it will do well after us. Anyway, you appear to be contradicting yourself on this.
Sy Borg wrote: February 6th, 2023, 5:52 pm Generally, all life wants to survive and reproduce. This is what links all of humanity and, in fact, all of life. Clearly a drive shared by almost all of humanity will be reflected in the whole.
Sure, survival and reproduction are intrinsic to the concept of life. But you make wrong inferences from this. We cannot draw a path into the future, perfect or even winding, for the survival and reproductive strategies of living beings as a whole - the biosphere, if you want. The future is open; life flourishes and evolves in complex and unpredictable ways, not predetermined by some all-encompassing teleological cause. The keys to the processes of life are well-known, but they include contingency.
Sy Borg wrote: February 6th, 2023, 5:52 pm You need to consider how imbalances work in a system. Earth's actions shift in response to changing circumstances, causing disequilibria that drive responses from other systems.
Once again, you treat the organic world as a closed, predetermined system, which it is not. No matter how hard you try to present this as being caused by intrinsic factors, you're implicitly ruling out contingency and extrinsic factors. Earth does not NEED to respond to anything; it does evolve, it does respond to changing circumstances, but the fact that the system has a history (there is in fact a natural history) implies that it's not predetermined; it's unpredictable. In other words, its state of equilibrium is redefined as the system develops, which means there's no fixed equilibrium state.
Sy Borg wrote: February 6th, 2023, 5:52 pm Watch this video to better understand how these dynamics work. Life itself appears to be the result of disequilibrium: dissociated hydrogen ions on the surface caused by volcanism. Biology was seemingly the most efficient (low-energy) means of equalising charges on the Earth's surface. It's complicated. The professor describes it far better than I can.

This video is clearly not quasi-religious. If it is, then it would seem I am a theist. It just looks like hard science to me: the sciences of biochemistry and physics.
I skipped through the video to get the general idea, but I'll go back to watch it in detail later. It looks like a very interesting talk, and I like that he mentions Gould's "Wonderful Life", which is in the top 5 list of my favorite books ever. But anyway, even if his argument were not disputable (which it is), it would cover life in general and still acknowledge the role of chance, but it would not make human history or the history of technology the result of fate, predetermined and reducible to biochemistry and physics.
Sy Borg wrote: February 6th, 2023, 5:52 pm
Let's check these "claims" and see if your denials hold any water:


Claim 1. Major companies already using ever more tech instead of human staff.

On what basis is this controversial? It's happening already.
Predictions about jobs to be lost to AI have been proven false:
https://itif.org/publications/2022/09/3 ... nt-happen/
First, the U.S. economy has added 16 million jobs since 2013, while the unemployment rate was just 3.7 percent. Certainly, “the robots” did not lead to fewer jobs.

But maybe they destroyed some jobs while creating others. At first glance, it's striking that the occupation that had the highest risk of going the way of the buggy whip manufacturer—insurance underwriters—actually saw employment grow 16.4 percent from 2013 to the end of 2021. In contrast, the occupation least likely to be automated—recreational therapists—saw a decline of 8.9 percent of jobs. Overall, there was a negative correlation between the risk of job loss from computerization and actual job loss, but it was quite modest at 0.26. In other words, occupations with higher computerization risk scores were only slightly likely to see job loss.
I suggest you look up and listen to the CEO of ChatGPT, who recently said that all the predictions about jobs and AI were wrong. That is because the initial prophecies from AI enthusiasts claimed that blue-collar workers would fall first, then white-collar ones, but it is going (according to him) exactly the opposite way. In any case, there are more blue-collar workers than white-collar ones (although the definition of white/blue collar is not yet clear).
Sy Borg wrote: February 6th, 2023, 5:52 pm Claim 2. That trend will continue.

On what basis is this controversial?
First, as evidence shows, the "trend" has been pointed at before and turned out to be false. This is like Jehovah's Witnesses prophesying the end of the world: when it doesn't happen, they just update their prophecy.
Sy Borg wrote: February 6th, 2023, 5:52 pm Claim 3. AI's progress will speed up and its functions will become more essential to society.

On what basis is this controversial? It's happening already, it's obvious.
Not more obvious than any other technology, and I already said AI will have a great impact on society, but the way you present this within all your futuristic narrative is simply exaggerated, as you foresee machines as a new class of beings, autonomous and conscious. That is certainly not happening already.
Sy Borg wrote: February 6th, 2023, 5:52 pm Claim 4. People who are integrated with AI will have a competitive advantage over the unaltered.

Again, this is already happening. There is a significant gap in prospects between those with access to the internet and those without, especially in schools. The integration has started but it's not yet physical, except in Scandinavia and some East Asian countries where some people are using embedded chips.
Notice that you changed your initial statement about "AI integrated with their brains" to AI integrated with people. Minor, scarce events in Scandinavia and Asian countries are no basis for a wild prediction. Maybe that will be a successful development, maybe not; we are never sure about technological trends that depend on many social, economic and cultural factors.
Sy Borg wrote: February 6th, 2023, 5:52 pm It is only a dogmatic (and misguided) approach to materialism that prevents some people from appreciating that humans are already in the process of becoming cyborgs.
I won't even entertain such a wild statement, taken from the most fanciful dreams of futurists.
Sy Borg wrote: February 6th, 2023, 5:52 pm
Claim 5: More of the human brain will be mapped.

On what basis is this controversial? It's in train already.
That was not your claim, which I quoted: "If we move further into the future, ever more of the brain WILL be mapped with AI's help..."
Now, that is not happening, because the brain is not a computer. In any case, don't you remember the guy saying we have mapped the brain of a simple worm and we still don't understand it?
Sy Borg wrote: February 6th, 2023, 5:52 pm Claim 6: Vulnerable biological body and brain parts will be replaced by durable, superior synthetic parts

Again, it's already happening.
Musculoskeletal and sensory prosthetics are fine, but synthetic brain parts? That's overstating what really exists and making prophecies out of it.
Sy Borg wrote: February 6th, 2023, 5:52 pm My toaster doesn't do ballet either. Why would you expect AI to do what it's not designed to do? AI is designed to process information, not to feel emotions, hunger, thirst, sexual urges, excretion needs, aches, itches and stings - as already discussed.

However, as AI is refined further and becomes more complex, it seems likely that emergences will eventually occur.*
So, basically, if we followed your line of thought, we might expect your toaster to emerge as a ballet dancer, once the engineers figure out how its current parts and configuration can be made more sophisticated.
Sy Borg wrote: February 6th, 2023, 5:52 pm I have already noted that any emergent machine consciousness is unlikely to be the same as biology's. AI has no need for the animal aspects of humanity. All it needs is the protruding tip of the human consciousness iceberg - abstractions.
The point is, however, that whatever you want to call "machine consciousness", it is not currently merging with human consciousness. It is unlikely that this current "machine consciousness" will ever achieve it. Not if it doesn't think, if it doesn't feel. It will most likely be another technological tool, controlled by humans, to serve human purposes.

Re: Debate with ChatGPT

Post by Sy Borg »

Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm Well, you are obviously anthropocentric. I watched you talk about human populations for pages and pages without once referring to ecosystems, extinctions and animal welfare. Not once, until I raised the issues. Don't be embarrassed, it's pretty typical for humans to disregard other life forms.
That is your labeling, which apparently carries a negative connotation, so I'm surely entitled to ask why you label me with a tag you could put on yourself. I'm not moved one inch by whatever judgement you make about me on that matter, and I'm pretty confident I don't need to be embarrassed about anything.
It's not a label, it's a description of your worldview.

When discussing overpopulation, you failed to mention other species, ecosystems, extinctions or animal suffering. How can that be anything but anthropocentric?

Further, can a deeply anthropocentric thinker dispassionately judge AI?


Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm By contrast, I'm not saying anything contentious or profound. Just basic logic. If dogs and giraffes develop a space program, then I will say they are species necessary for Earth life to continue in the far future. However, humans are the only species capable of carrying Earthly DNA, information and materials to other worlds.# It has nothing to do with anthropocentrism. It's simply obvious.
That's not basic logic, but bad logic. No historical contingency can become a necessity of the future, which will still be open and determined by the unpredictable relation between chance and necessity. There's no grand teleological cause behind it, it's just what happens, but yours is a futuristic narrative of predetermined fate, of predestination (replacing god with nature), which implies that if we rewound the tape of natural history, everything would replay more or less the same, even perhaps the asteroid making the Chicxulub crater exactly 65 million years ago. Unlike you, I don't believe humans are necessary for life: it did well without us before we showed up, and it will do well after us. Anyway, you appear to be contradicting yourself on this.
Predestination and teleology have already been addressed. Please read before posting.

I said:

1) that humans are sending Earthly things in space and

2) that this is the best hope that the Earth's stuff will survive the Sun's expansion.

You seem confused about the concept of teleology, so some tuition is needed.

A bear scratching itself on a boulder can alleviate its itches. Does that mean that the boulder exists as a bear scratching tool? No. The presence of the boulder simply opened up the possibility that the bear's itch may be alleviated. There is no destiny involved. Just life doing what life does - trying to feel better.

It's not purpose that drives phenomena. However, human activities in space have opened up the possibility that Earthly things may yet survive the Sun's expansion. It's not destiny, it's a possibility. That is just life doing what life does - trying to survive and to expand its influence.


Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm Generally, all life wants to survive and reproduce. This is what links all of humanity and, in fact, all of life. Clearly a drive shared by almost all of humanity will be reflected in the whole.
Sure, survival and reproduction are intrinsic to the concept of life. But you make wrong inferences from this. We cannot draw a perfect path, not even a winding one, into the future, of the survival and reproductive strategies of living beings as a whole, the biosphere if you want. The future is open, life flourishes and evolves in complex and unpredictable ways, not predetermined by some all-encompassing teleological cause. The keys to the processes of life are well-known, but they include contingency.
Again, you mistakenly assume teleology where none is stated or implied.

Of course we can draw paths into the future. There are only three broad possibilities:

1) Most Earthly things are destroyed, with just small bits of Earthly things scattered around in space. The Voyager probes may yet outlast everything else.

2) New outcrops develop on other worlds.

As for #2, whether what exists on other worlds is sentient or not is unknown.

According to you, AI will never become sentient, so that simplifies #2 for you. I see that assumption as hasty and myopic.


Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm You need to consider how imbalances work in a system. Earth's actions shift in response to changing circumstances, causing disequilibria to occur that drives responses from other systems.
Once again, you treat the organic world as a closed, predetermined system, which it is not. No matter how hard you try to present this as being caused by intrinsic factors, you're implicitly ruling out contingency and extrinsic factors. Earth does not NEED to respond to anything; it does evolve, it does respond to changing circumstances, but the fact that the system has a history, a natural history in fact, implies that it's not predetermined; it's unpredictable. In other words, its state of equilibrium is redefined as the system develops, which means there's no fixed equilibrium state.
I didn't talk about need. That's your straw addition. I never ruled out contingency or external factors.

Disequilibrium is what drives activity on Earth, and those imbalances are largely driven by the Sun. If you seek complete equilibrium, wait for a few quintillion years and you'll get close.



Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm Watch this video to better understand how these dynamics work. Life itself appears to be the result of disequilibrium, disassociated hydrogen ions on the surface caused by volcanism. Biology was seemingly the most efficient (low energy) means of equalising charges on the Earth's surface. It's complicated. The professor describes it far better than I can.

This video is clearly not quasi-religious. If it were, then it would seem I am a theist. It just looks like hard science to me: the sciences of biochemistry and physics.
I skipped through the video to get the general idea, but I'll go back to watch it in detail later. It looks like a very interesting talk and I like that he mentions Gould's "Wonderful Life", which is in the top 5 list of my favorite books ever. But anyway, even if his argument is not disputable (which it is), it would cover life in general and still acknowledge the role of chance, but it would not make human history or the history of technology, the result of fate, predetermined and reducible to biochemistry and physics.
I was pointing out that disequilibrium drove the complex changes on Earth, and no meaning or purpose is needed.



Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm
Let's check these "claims" and see if your denials hold any water:


Claim 1. Major companies already using ever more tech instead of human staff.

On what basis is this controversial? It's happening already.
Predictions about jobs to be lost to AI have been proven false:
https://itif.org/publications/2022/09/3 ... nt-happen/
First, the U.S. economy has added 16 million jobs since 2013, while the unemployment rate was just 3.7 percent. Certainly, “the robots” did not lead to fewer jobs.

But maybe they destroyed some jobs while creating others. At first glance, it's striking that the occupation that had the highest risk of going the way of the buggy whip manufacturer—insurance underwriters—actually saw employment grow 16.4 percent from 2013 to the end of 2021. In contrast, the occupation least likely to be automated—recreational therapists—saw a decline of 8.9 percent of jobs. Overall, there was a negative correlation between the risk of job loss from computerization and actual job loss, but it was quite modest at 0.26. In other words, occupations with higher computerization risk scores were only slightly likely to see job loss.
I suggest you look up and listen to the CEO of OpenAI, maker of ChatGPT, who recently said that all the predictions about jobs and AI were wrong. That is because the initial prophecies from AI enthusiasts claimed that blue-collar workers would fall first, then white-collar ones, but (according to him) it is going exactly the opposite way. In any case, there are more blue-collar workers than white-collar ones (although the definition of what is white/blue collar is not yet clear).
You used a dodgy job market analysis, developed by an organisation that promotes technology. It might suit them not to point out how much work is being lost, yes?

Unemployment figures have been fiddled to present a rosier picture than is the case by treating casual workers as employed. In some jurisdictions, working just one hour per week means you are listed as "employed". The casualisation of the workforce has long hidden the true underemployment rate - and that underemployment has been driven by intelligent technology.

These are early days. Just wait till ChatGPT has been superseded by far superior units and see how the job market is going.



Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm Claim 2. That trend will continue.

On what basis is this controversial?
First, as evidence shows, the "trend" has been pointed at before and turned out to be false. This is like Jehovah's Witnesses prophesying the end of the world, when it doesn't happen they just update their prophecy.
Monseñor Lucanor of the Church of Anthropocentrism bolsters his irrational beliefs with dodgy data that ignores one of the most significant changes in the workforce for many years - casualisation.

Of course the trend will continue - failing catastrophe, of course. I shouldn't have to say it because it's so obvious, but here we are.



Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm Claim 3. AI's progress will speed up and its functions will become more essential to society.

On what basis is this controversial? It's happening already, it's obvious.
Not more obvious than any other technology, and I already said AI will have a great impact on society, but the way you present this within all your futuristic narrative is simply exaggerated, as you foresee machines as a new class of beings, autonomous and conscious. That is certainly not happening already.
Either a misrepresentation or a lack of comprehension. I had made VERY clear that I did not see ChatGPT or any current AI as conscious. Why lie? You won't get away with it.

AI's progress is indeed already speeding up and, indeed, AI is already becoming ever more essential to society. That you argue with something so prosaic suggests that you are now just arguing for the sake of arguing. I thought you were better than this.



Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm Claim 4. People who are integrated with AI will have a competitive advantage over the unaltered.

Again, this is already happening. There is a significant gap in prospects between those with access to the internet and those without, especially in schools. The integration has started but it's not yet physical, except in Scandinavia and some east Asian countries where some people are using embedded chips.
Notice that you changed your initial statement about "AI integrated with their brains" to AI integrated with people. Minor, scarce events in Scandinavia and Asian countries are no basis for a wild prediction. Maybe that will be a successful development, maybe not; we are never sure about
technological trends that depend on many social, economic and cultural factors.
Either a misrepresentation or a lack of comprehension.

There are obviously degrees of integration that will deepen over time. We are already deeply integrated with machine intelligence. If you doubt this, live without the internet for a year and see if your life changes. In Scandinavia, there is direct integration with the nervous system via implants. The trends are clear.

Researchers hope to map the human brain and be able to gradually replace or enhance damaged neurons or even entire areas of the brain, or enhance functionality. This is a long way off because the complexity of individual neurons is still not well understood.



Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm It is only a dogmatic (and misguided) approach to materialism that prevents some people from appreciating that humans are already in the process of becoming cyborgs.
I won't even entertain such a wild statement, taken straight from the most fanciful dreams of futurists.
Spare us the hysterics, Gwendolyn. You are already a cyborg. Give away all of your computer equipment and phone, all internet connections, and see how your life works.

If an ancient human saw how modern people operated, we would surely appear robotic to them. Just sitting there for hours, focused on symbols on a flat screen, sometimes tapping on keys or clicking the mouse.



Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm
Claim 5: More of the human brain will be mapped.

On what basis is this controversial? It's in train already.
That was not your claim, which I quoted: "If we move further into the future, ever more of the brain WILL be mapped with AI's help..."
Now, that is not happening, because the brain is not a computer. In any case, don't you remember the guy saying we have mapped the brain of a simple worm and we still don't understand it?
Brain mapping is already happening: https://study.com/learn/lesson/brain-ma ... iques.html

Again, AI is at an early stage. In fact, I was the one to tell you that even C. elegans's simple brain is yet to be fully mapped. There's reportedly more to learn about the complexity of individual neurons before full mapping is possible.


Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm Claim 6: Vulnerable biological body and brain parts will be replaced by durable, superior synthetic parts

Again, it's already happening.
Musculoskeletal and sensory prosthetics are fine, but synthetic brain parts? That's overstating what really is and making prophecies out of it.
Work is being done: https://mitpress.mit.edu/9780262025775/ ... the-brain/
The continuing development of implantable neural prostheses signals a new era in bioengineering and neuroscience research. This collection of essays outlines current advances in research on the intracranial implantation of devices that can communicate with the brain in order to restore sensory, motor, or cognitive functions. The contributors explore the creation of biologically realistic mathematical models of brain function, the production of microchips that incorporate those models, and the integration of microchip and brain function through neuron-silicon interfaces. Recent developments in understanding the computational and cognitive properties of the brain and rapid advances in biomedical and computer engineering both contribute to this cutting-edge research.
Again, it's early days.

Your approach has been akin to denying a parent's claim that their intelligent child will probably one day be a professional on the basis that their five-year old does not already have a profession.



Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm My toaster doesn't do ballet either. Why would you expect AI to do what it's not designed to do? AI is designed to process information, not to feel emotions, hunger, thirst, sexual urges, excretion needs, aches, itches and stings - as already discussed.

However, as AI is refined further and becomes more complex, it seems likely that emergences will eventually occur.
So, basically, if we followed your line of thought, we might expect your toaster to emerge as a ballet dancer, once the engineers figure out how its current parts and configuration can be made more sophisticated.
Let's check the "logic" of your response: Will the microbes of three billion years ago one day become mammals? No. They all died and, slowly, their descendants evolved. Over deep time, other life forms emerged. If technology was mapped like the Tree of Life, then a toaster would be a very, very distant and primitive ancestor of AI.


Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm I have already noted that any emergent machine consciousness is unlikely to be the same as biology's. AI has no need for the animal aspects of humanity. All it needs is the protruding tip of the human consciousness iceberg - abstractions.
The point is, however, that whatever you want to call "machine consciousness", it is not currently merging with human consciousness. It is unlikely that this current "machine consciousness" will ever achieve it. Not if it doesn't think, if it doesn't feel. It will most likely be another technological tool, controlled by humans, to serve human purposes.
Intelligent machines are already merging with human consciousness. Again, try to live without the internet for a year.

From memory, the OP was actually about the past being erased ...
Count Lucanor
Posts: 2318
Joined: May 6th, 2017, 5:08 pm
Favorite Philosopher: Umberto Eco
Location: Panama

Re: Debate with ChatGPT

Post by Count Lucanor »

Sy Borg wrote: February 8th, 2023, 6:50 pm
Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm Well, you are obviously anthropocentric. I watched you talk about human populations for pages and pages without once referring to ecosystems, extinctions and animal welfare. Not once, until I raised the issues. Don't be embarrassed, it's pretty typical for humans to disregard other life forms.
That is your labeling, which apparently carries a negative connotation, so I'm surely entitled to ask why you label me with a tag you could put on yourself. I'm not moved one inch by whatever judgement you make about me on that matter, and I'm pretty confident I don't need to be embarrassed about anything.
It's not a label, it's a description of your worldview.

When discussing overpopulation, you failed to mention other species, ecosystems, extinctions or animal suffering. How can that be anything but anthropocentric?

Further, can a deeply anthropocentric thinker dispassionately judge AI?
Label, description, whatever you want to call it, it is your tagging game, not mine. You are too eager to signal my lack of virtue, but I really couldn't care less. If I'm going to be forced into a morality contest, I'd like to know where the competent judge is.

I'm pretty sure I mentioned all the relevant variables when discussing overpopulation, but I'm not going to repeat that discussion.

Sy Borg wrote: February 8th, 2023, 6:50 pm
Predestination and teleology have already been addressed. Please read before posting.

I said:

1) that humans are sending Earthly things in space and

2) that this is the best hope that the Earth's stuff will survive the Sun's expansion.

You seem confused about the concept of teleology, so some tuition is needed.

A bear scratching itself on a boulder can alleviate its itches. Does that mean that the boulder exists as a bear scratching tool? No. The presence of the boulder simply opened up the possibility that the bear's itch may be alleviated. There is no destiny involved. Just life doing what life does - trying to feel better.
That is a terrible example. A bear is a particular living being: it does have needs, it certainly has drives, interests and purpose-driven behavior. If we thought of the bear species as a whole, or the bear habitat as a system, it wouldn't make much sense to say it "needs boulders to alleviate itches", even if we wanted to generalize the needs of all bears. Nor can we use for that something as abstract as the concept of "life" in general. It is not life trying to feel better, it's just a bear doing it. Earth can be seen as a system comprised of living and non-living things; as a whole it doesn't have needs, drives and purpose-driven behavior, even though many of its parts (like bears) do. It has no "hope" of anything. In fact, it is humans who have created hopes about the Earth, and projecting those motivations onto natural systems is good old teleology.
Sy Borg wrote: February 8th, 2023, 6:50 pm It's not purpose that drives phenomena. However, human activities in space have opened up the possibility that Earthly things may yet survive the Sun's expansion. It's not destiny, it's a possibility. That is just life doing what life does - trying to survive and to expand its influence.
Again, you're treating abstractions as if they were particulars. That's a fallacy of hypostatization.
Sy Borg wrote: February 8th, 2023, 6:50 pm Of course we can draw paths into the future. There are only three broad possibilities:

1) Most Earthly things are destroyed, with just small bits of Earthly things scattered around in space. The Voyager probes may yet outlast everything else.

2) New outcrops develop on other worlds.

As for #2, whether what exists on other worlds is sentient or not is unknown.

According to you, AI will never become sentient, so that simplifies #2 for you. I see that assumption as hasty and myopic.
Where is the third?

Anyway, it is incredible that you really think the future of the Earth is reduced to these few possibilities. It's ridiculous.

Notice what you're doing. You say that the statement "AI will never become sentient" is entirely mine, so you don't endorse it. Fine, let's remember that a few lines below when we deal with that statement again.
Sy Borg wrote: February 8th, 2023, 6:50 pm I didn't talk about need. That's your straw addition. I never ruled out contingency or external factors.
No, I didn't add anything. You specifically talked about "need", that's the word you used: "the Earth needs, etc."
Sy Borg wrote: February 8th, 2023, 6:50 pm You used a dodgy job market analysis, developed by an organisation that promotes technology. It might suit them not to point out how much work is being lost, yes?
So, organisations that promote technology are not to be trusted? Hmm...that's interesting. The fact is that the Frey and Osborne report from 2013 was used to promote the narrative of AI enthusiasts, but it has been proven false. The issue of job loss to new technology is way more complex: THE IMPACT OF TECHNOLOGY ON WORK AND THE WORKFORCE

But is the loss of jobs to technology the issue in question? I had mentioned the current human dependency on technology, and I added that technology could not evolve without humans directing it, and then you jumped to say that AI will displace humans at work and "ultimately it will be in control". But there's no evidence of this. Even if we lost all jobs to automation, that's no indication that a new class of intelligent beings will be taking over; technology will remain instrumental to humans. Automation does not mean complete autonomy from humans.
Sy Borg wrote: February 8th, 2023, 6:50 pm Monseñor Lucanor of the Church of Anthropocentrism...
That's a heck of a title, I don't care what it means, but I love it :D
Sy Borg wrote: February 8th, 2023, 6:50 pm bolsters his irrational beliefs with dodgy data that ignores one of the most significant changes in the workforce for many years - casualisation.
Dodgy data? How about this:
THE IMPACT OF TECHNOLOGY ON WORK AND THE WORKFORCE
Historically, as technology has changed the way work is done, the number of jobs created has outstripped the number of jobs eliminated.


Behind the headline number: Why not to rely on Frey and Osborne’s predictions of potential job loss from automation - Melbourne Institute
The seminal study making predictions of potential job loss due to automation and computerisation is by Frey and Osborne (2013, 2017) (hereafter, FO). They concluded that 47 per cent of jobs in the United States were at ‘high risk’ of automation in the next 10 to 20 years. The FO study has been and continues to be widely cited (over 4,100 citations on Google Scholar – 29 August 2019) and noted in the popular press and in government.2 FO’s estimates of the probabilities of specific occupations being automated have been applied to construct predictions of job losses in many other countries – including Australia (DurrantWhyte et al., 2015; Edmonds and Bradley, 2015); Finland and Norway (Pajarinen et al., 2015); Singapore (Lee, 2016); United Kingdom (Deloitte, 2013; Lawrence et al., 2017); Germany (Brzeski and Burk, 2015); Japan (David, 2017); South Africa (le Roux, 2018); and 40 developing countries covered by the World Development Report (World Bank, 2016).

We conclude there are major problems with FO’s method and predictions. First, the method FO use is problematic and opaque. The method is built on subjective assessments of the potential for individual occupations to be fully automated. Those assessments appear to have been based on limited information about the job content of the occupations. The outcome is a set of predicted probabilities of occupations being automated which are upward-biased and inconsistent with FO’s own model of the determinants of technology-induced job loss. Second, FO’s predictions of the probability of automation and job loss by occupation do not provide additional information for forecasting actual changes in occupation-level employment that occurred in the United States between 2013 and 2018, once account is taken of the now standard approach that economists use to think about the relation between technology and labour demand: classifying occupations as routine/non-routine and manual/cognitive. This contradicts FO’s claim (2017, p.255) that existing methods for understanding the impact of technological change in the labour market are inadequate. Our analysis of FO’s method and predictions demonstrates the importance of looking behind headline-grabbing predictions before relying on them;
Sy Borg wrote: February 8th, 2023, 6:50 pm Of course the trend will continue - failing catastrophe, of course. I shouldn't have to say it because it's so obvious, but here we are.
Trend? What trend?
Sy Borg wrote: February 8th, 2023, 6:50 pm
Count Lucanor wrote: February 8th, 2023, 10:49 am
Sy Borg wrote: February 6th, 2023, 5:52 pm Claim 3. AI's progress will speed up and its functions will become more essential to society.

On what basis is this controversial? It's happening already, it's obvious.
Not more obvious than any other technology, and I already said AI will have a great impact on society, but the way you present this within all your futuristic narrative is simply exaggerated, as you foresee machines as a new class of beings, autonomous and conscious. That is certainly not happening already.
Either a misrepresentation or a lack of comprehension. I had made VERY clear that I did not see ChatGPT or any current AI as conscious. Why lie? You won't get away with it.
That's a straw man. Look again: I didn't say that you said that AI is currently conscious. I said that the current state of AI (not conscious) cannot be used as evidence that in the future AI will be conscious. Now, aren't you saying that AI will be conscious? Remember that line above where you said that the statement "AI will never become sentient" was entirely mine. You made it look as if I would not be justified in saying it, but then what do you think: will AI become sentient, yes or no?
Sy Borg wrote: February 8th, 2023, 6:50 pm There are obviously degrees of integration that will deepen over time. We are already deeply integrated with machine intelligence. If you doubt this, live without the internet for a year and see if your life changes. In Scandinavia, there is direct integration with the nervous system via implants. The trends are clear.
First of all, machines are not intelligent, not literally, nor in any way that resembles human cognition. Can we talk about humans being integrated with technology, perhaps since the dawn of civilization? Maybe, but there's no trend in the direction of machines becoming humanlike in terms of autonomy, nor vice versa.
Sy Borg wrote: February 8th, 2023, 6:50 pm Researchers hope to map the human brain and be able to gradually replace or enhance damaged neurons or even entire areas of the brain, or enhance functionality. This is a long way off because the complexity of individual neurons is still not well understood.
Hope, just hope.
Sy Borg wrote: February 8th, 2023, 6:50 pm Spare us the hysterics, Gwendolyn. You are already a cyborg. Give away all of your computer equipment and phone, all internet connections, and see how your life works.
No, being a user of computer technology does not make anyone a "cyborg". That's anyway a concept taken directly from science fiction literature.
Sy Borg wrote: February 8th, 2023, 6:50 pm Brain mapping is already happening: https://study.com/learn/lesson/brain-ma ... iques.html

Again, AI is at an early stage. In fact, I was the one to tell you that even C. elegans's simple brain is yet to be fully mapped. There's reportedly more to learn about the complexity of individual neurons before full mapping is possible.
You're showing the standard brain mapping techniques of neuroscience, you're not showing anything related to your claim that "more of the brain WILL be mapped with AI's help".
Sy Borg wrote: February 8th, 2023, 6:50 pm Your approach has been akin to denying a parent's claim that their intelligent child will probably one day be a professional on the basis that their five-year old does not already have a profession.
Your approach is asserting that a child will inevitably be a professional based on the possibilities. Even worse, you would find it reasonable to conclude that the child will probably teleport because it has not been shown that it will always be impossible.
The wise are instructed by reason, average minds by experience, the stupid by necessity and the brute by instinct.
― Marcus Tullius Cicero

Re: Debate with ChatGPT

Post by Sy Borg »

That is too convoluted.

You still quote sources that make light of underemployment. A person working one hour per week is only employed in a technical sense, but those are the figures that are usually used. Black & white. Employed or not. It's not reality.

You seem as much in denial about tech-based job losses as you were about overpopulation. With overpopulation, you claim to have covered "all the important variables" while ignoring extinctions, ecosystem destruction and animal suffering. You truly are Monseñor Lucanor of the Church of Anthropocentrism.

Maybe someone else can talk now, and actually address the thread topic. I suspect that anyone who is still awake has taken your point that ChatGPT does not have animal consciousness.

Re: Debate with ChatGPT

Post by Count Lucanor »

Sy Borg wrote: February 9th, 2023, 5:03 am
You still quote sources that make light of underemployment. A person working one hour per week is only employed in a technical sense, but those are the figures that are usually used. Black & white. Employed or not. It's not reality.
Reality comes with the evidence. At least I have shown sources that deal with the actual data and support my argument. Where is the data that supports your claim? Or are we, as always, left only to believe in the sacred words of The Most Reverend Bishop of The Futurism Temple? Your Grace, though, keeps missing the point: even if ALL jobs were lost to AI, that is not evidence that AI will be taking control as a new species of beings. Trains and cars sent horses to retirement, but no one ever thought trains and cars were becoming horses or their equivalent.

Re: Debate with ChatGPT

Post by Sy Borg »

Count Lucanor wrote: February 9th, 2023, 8:55 am
Sy Borg wrote: February 9th, 2023, 5:03 am
You still quote sources that make light of underemployment. A person working one hour per week is only employed in a technical sense, but those are the figures that are usually used. Black & white. Employed or not. It's not reality.
Reality comes with the evidence. At least I have shown sources that deal with the actual data and support my argument. Where is the data that supports your claim? Or are we, as always, left only to believe in the sacred words of The Most Reverend Bishop of The Futurism Temple? Your Grace, though, keeps missing the point: even if ALL jobs were lost to AI, that is not evidence that AI will be taking control as a new species of beings. Trains and cars sent horses to retirement, but no one ever thought trains and cars were becoming horses or their equivalent.
At least I have real-life experience in HR statistics and I have witnessed the situation first-hand. Incomplete information can be more misleading than a lack of data. It can be generated by error, used in the wrong context, or distorted via inconsistent scaling and exclusions in order to deceive.

The researchers openly admitted that they didn't take any new-generation tech into account in the study - hmm, why didn't you mention the exclusions? It is one thing to know how many employees are on hand, available for work next Monday; another to know how many people are actually out of work.

In philosophy, we are supposed to be honest, not to hide facts that might be inconvenient to our side, as a politician would. Then again, as a left-wing polemicist, I suppose your tricky approach reflects your partisan worldview.

Unemployment statistics mask the true underutilisation rate via various means and criteria. Exclusions are key here. You will invariably find many millions of unemployed people who are not included in statistics, and casualisation is being handled loosely in the stats.