Classifications are needed for communication, but the map is invariably mistaken for the territory. This was Feynman's point when he said that you can know the name of a species of bird in ten languages and still know nothing about the bird itself - that we should focus on attributes rather than categories. Focusing on attributes naturally distinguishes between bugs, fish and rocks, and also makes their similarities clearer. Our understanding is better within categories than between them, e.g. the interactions between two organisms as opposed to the interactions between biology and geology. By "othering" different domains of matter, we tend to filter them out as background.

Count Lucanor wrote: ↑February 1st, 2023, 11:08 pm
You know where I'm coming from and I know where you're going. Humans are part of the processes of nature, undoubtedly, but the products of human civilization, even though they are made from natural resources, are not. The differences between the International Space Station and anything produced by any other organism on Earth demand that we consider them as parts of two different domains. "Earth's journey in time" sounds like a beautiful literary description, but it would be misleading if we used it to describe what actually happens. If science and philosophy didn't make use of practicalities, then just-so stories are all we would have. Classifications are then important: they help us better map the complex relations among all things, and so we know that we have to put a bug closer to a fish than to a rock, and a computer within the domain of non-living things rather than within the domain of living things. Living things evolve one way, non-living things another.

Sy Borg wrote: ↑January 31st, 2023, 10:46 pm
Based on your view, humans and their technology are not part of the Earth's journey in time.
I appreciate the practicalities of referring to "humanity" and "nature" as separate domains, but, in terms of existence, humans and their creations are as much a part of nature as geology, flora and (other) fauna.

Count Lucanor wrote: ↑January 31st, 2023, 10:20 pm
I disagree because I can't see AI as a natural occurrence independent of human control. It does not run on natural processes, it has nothing to do with biological evolution, nor with any vegetable or mineral domain.

Sy Borg wrote: ↑January 30th, 2023, 4:28 pm
No, some people simply say what they think is most likely, whether that works for them or not. If you'd been paying attention, you would know that my main "ideology" is pro-animal, not pro-AI, as you keep wrongly claiming. No, I just recognise that the emergence of AI is part of the patterns of change and evolution on Earth - the planet's evolution from a spheroid of molten basalt to today's much more complex geological, chemical and biological milieu.
Yet AI seems different, being mineral rather than animal or vegetable - a new phase of the broader evolution (small "e"). I'm not sure whether AIs themselves are the entities that are emerging, or whether they are part of a larger emergence. Possibly both.
Ultimately, what we have are numerous interlocking systems that comprise the Earth today. Some of those new systems are the ISS, skyscrapers, AI and other human creations, and these are no less part of the Earth than termite mounds or beehives. Being new systems that have emerged very quickly, the rest of nature has not had much chance to adapt to the new environments. Only select species have managed to adapt so far, such as rats, mice and roaches, while domesticated species like dogs, cats, horses, cows, sheep, pigs and goats are compatible enough with the new environments that their conservation status is secure.
That situation will change. Already we see that huge companies are hiring far fewer human employees. In time, that number will shrink, and shrink again, as AI takes over ever more functions. There will probably come a time when AIs make better CEOs than humans, and are preferred by shareholders. At that point, if humans died out, the intelligent tech would still be able to implement corporate and strategic plans, but doing so would be rendered pointless.

Count Lucanor wrote: ↑February 1st, 2023, 11:08 pm
Tools and instruments owe their existence to their creators. If humans stopped existing, they would cease existing as tools. Like many other products of human labor, they become alienated from their producers and appear to have a life of their own, but in truth they cannot evolve independently of humans.

Interestingly, humans now cannot evolve independently of technology.

Sy Borg wrote: ↑January 31st, 2023, 10:46 pm
As such, humans and their tools undergo their own evolutions like everything else, living or otherwise. The evolution of human tools is a remarkable journey, noting that humans sense tools as being an extension of themselves (Miller et al). Everything on Earth evolves; it's just that not all of it evolves according to natural selection. However, rest assured, there will still be other selection pressures - and that is what matters. The labels mean nothing, it's the phenomena that matter, and there is, and will continue to be, either the evolution or extinction of everything on Earth. That includes AI.
However, as human brains and AI meld (Neuralink et al), AI's progression will be much faster, and it will become ever more essential and pivotal. Ultimately it will be in control, because those with more sophisticated AI integrated with their brains will have a competitive advantage.
If we move further into the future, ever more of the brain will be mapped with AI's help, and vulnerable biological body and brain parts will increasingly be replaced by durable, superior synthetic parts, allowing for greater strength and speed and more versatile senses.
Millions would disagree. I've never been able to interact with a non-human at anywhere near that level of sophistication. Most times, it doesn't go much beyond "unexpected item in bagging area" or nagging at you to "Please take your items".

Count Lucanor wrote: ↑February 1st, 2023, 11:08 pm
Many objects since ancient times have been imbued with some humanlike attributes. There's nothing special about AI in that sense.
Of course technology is a natural phenomenon. Technological items are as much structures created by the planet Earth as termite mounds and beaver dams. These tech items are very simple compared with the extremely complex entities the Earth has been generating for aeons, but they are becoming more complex.

Count Lucanor wrote: ↑February 1st, 2023, 11:08 pm
The parallels that you take for granted between humans and machines, such as the possibility of "training", are precisely the ones in dispute. Such figurative use of the concepts confuses philosophy with literature. In a literal sense, a machine cannot be trained. Technology is not a natural phenomenon; it is not autonomous, and nothing emerges from it independently of human intervention.

Sy Borg wrote: ↑January 31st, 2023, 10:46 pm
I also note that all any of us can do is generate responses based on our training. You draw a hard line between humans and their creations which, again, is practical but not philosophical. It misses the point of an emerging phenomenon, regardless of how one labels its attributes.
Forget the categories - look at the broader sweep of the phenomena - what's been happening long-term.
Semantics. If you were being chased by killer robots, at some point you'd say, "Dammit, it's seen us!" as opposed to, "Dammit, it is simulating seeing us!"

Count Lucanor wrote: ↑February 1st, 2023, 11:08 pm
No, it can be imbued with things that simulate the senses.

Sy Borg wrote: ↑January 31st, 2023, 10:46 pm
Other than potential ethical situations, this is unimportant. Note that it can be imbued with senses.

Count Lucanor wrote: ↑January 31st, 2023, 10:20 pm
It would be a marvelous feat if it had any experience at all.
If something detects light and responds to that light in an ordered way, then it's seeing. Again, this seems like just a matter of words. We can tell what's happening, whatever label is being applied.
The most important part of the ChatGPT quote has been highlighted. The rest is just detail.

Count Lucanor wrote: ↑February 1st, 2023, 11:08 pm
By definition, AI cannot make decisions, because decisions imply intentions, and intentions imply desire and interest - things that a machine cannot have. ChatGPT does not have any of these things. Don't believe me:

Sy Borg wrote: ↑January 31st, 2023, 10:46 pm
Why should that matter? AI makes autonomous decisions without humans present, with humans knowing only in general terms what the machine decides. The more sophisticated AI becomes, the less humans will be able to comprehend the basis of its decisions. If AI CEOs are found to out-perform humans, navigating complexity that humans cannot hold in their brains, I wonder whom shareholders will want running their companies?
As a language model AI, I do not have personal interests or emotions. I am designed to assist and provide information to the best of my abilities based on the data and text I was trained on.
When asked about decisions...I do not have intentions or motivations as I am an artificial intelligence language model created by OpenAI and my purpose is to assist and generate human-like text based on the input I receive and the data I was trained on.
In the case of AI systems like me, I make decisions based on patterns in the data I was trained on, but I do not have personal intentions or motivations.
The patterns are fed by humans, directly or indirectly, and this is the basis of its "decisions" - mere responses to user inputs.