Sy Borg wrote: ↑October 14th, 2024, 5:16 pm
The claim that the Earth is “ridiculously unimportant” is semantically wrong. I get it - I was a space fan before you were even a gleam in your father’s eye. The universe and its large structures are so immense that Earth is a grain of dust by comparison. So yes, in terms of scale and relations, Earth is akin to an organelle within a cell of the Milky Way.
However, if our galaxy does not have a galactic empire, then “unimportant” is an inappropriate description. It carries a disparaging connotation that underplays this remarkable and unique planet, and the impossibly complex forms it has evolved.
OK, if you insist. I talked about finding a balance: on one side, our exceptionality; on the other, the acknowledgement of our relatively insignificant, negligible effect on the rest of the universe. Insignificance is what “unimportant” means and, obviously, it is semantically correct. The Earth has not gone viral, so to speak. You, OTOH, don’t see any need for balance: we’re either super important or super important in the big picture, no concessions. I’m pretty sure it was for those who think that the exceptionality of our little world reigns over the vast universe that Sagan wrote Pale Blue Dot.
Sy Borg wrote: ↑October 14th, 2024, 5:16 pm
Sorry, I did not phrase it well. “Evolution” was first used in terms of Darwinian evolution in the 19th century. Yes, the word referring to change in general preceded that. I didn’t know it was the 16th century. Learn something every day.
Back to the point, everything evolves. I disagree with the academic tendency to hijack the word “evolution” and then claim that only biology evolves. It is misleading.
“Evolution” should ideally be termed “biological evolution”. There was significant geological and chemical evolution on the Earth before abiogenesis. Life was not going to emerge from simple basalts and obsidian. Further, there are always selection pressures in every aspect of reality – not biological selection, but similar in many ways.
For instance, the evolution of planets from the proto-planetary disc, as described earlier. You can see evolution in technology. A fascinating example that illustrates the point is the evolution of stone axes from crude chips of rock to relatively detailed and precise tools, even decorated at times.
And yes, AI is evolving. Future AI with self-replicating and self-improving abilities is inevitable. Is AI intelligent? No, it is a tool that boosts human intelligence. However, AI will continually gain more autonomy. At what point does autonomy equal agency? When might “the lights come on”? And if they did, how would we know?
To make my point even clearer: I don’t have any issue with saying things “evolve” as a way to say they change, transform, mutate, develop, etc. We say that society, culture, the economy, technology, continental plates, planetary systems, etc., evolve, and that’s fine. But then we have a particular application of the term to the process of transformation of populations of organisms, something that is circumscribed to the sphere of biology and is therefore called “biological evolution”, which has been explained as caused by an intrinsic and emergent dynamic of those biological systems, called “natural selection”.

You then came here to say that this evolution by natural selection applies to everything from planetary systems, to asteroids, to technology and geological formations, as if it were some sort of pervading force that affects everything. Well, no, that’s simply wrong. They “evolve” like anything else, of course, but with their own dynamics. Technology, for example, evolves through human intervention, so Clovis points didn’t just emerge and change on their own. Surely, in the particular case of Earth, there will be interdependencies between living and non-living systems, between organisms and their environment, but that sphere of influence ends where life ends, within the limits of the biosphere.
Is AI technology (I use the term as generally accepted, although I believe the “intelligence” part is misleading) evolving? Sure, like all technologies. Is it heading toward self-replication and self-improvement? Certainly not; it hasn’t even started. All new developments are the result of human control of its processes, in both the software and hardware departments. There have been a few theoretical attempts, but no real implementation. 3D printing is not a candidate for that either. When one pays attention, all the hype about the potential of these things comes from the equivocal use of words to build a narrative. Calling 3D printing self-replication is a perfect example, and so is “self-improvement”.
Sy Borg wrote: ↑October 14th, 2024, 5:16 pm
Are you arguing that self-replicating machines will always be impossible? Why would you think that advanced future AI will never have access to 3D printing capabilities? The examples I gave were basic. That will obviously change. For instance, once people needed abacuses to perform calculations. Times change.
I’m arguing that self-replicating machines will be possible when we solve the puzzle of how it can be technically done and find the material and human resources to implement it. We haven’t done anything in that direction yet. Will we ever do it? We might hope so, but we don’t know, just as we don’t know how to teletransport. Although I don’t doubt someone will come along and say “we know how to teletransport”, then point to something that isn’t, and, with the equivocal use of words, get away with it.
Sy Borg wrote: ↑October 14th, 2024, 5:16 pm
Count Lucanor wrote: ↑October 14th, 2024, 12:08 pm
Anything that needs human inputs such as blueprints, maintenance, materials, etc., is not self-replicating. The term is deceptive; a better word that encompasses what we should be looking for is self-sustainable (collectively).
Is that like how anything that is human could not possibly have emerged from an ape? DNA is a blueprint, a plan.
Re: “self-sustaining”. Just as “evolution” does not only refer to biological evolution, “replication” does not only refer to biological replication.
Humans are evolved apes, but 3D printers and computers are not evolved minerals. Neither are humans or other living beings merely evolved compounds of carbon atoms. They are, but they are more than that. And nope, DNA is not a plan, not a blueprint. We get once again to the use of metaphors that cloud our thinking. Plans and blueprints imply reason and purpose, applying them to nature is good old teleology.
Sy Borg wrote: ↑October 14th, 2024, 5:16 pm
Count Lucanor wrote: ↑October 14th, 2024, 12:08 pm
4. AI enthusiasts: As I said, they don’t advance a comprehensive theory of HOW it would be technically done; they simply rely on the purely theoretical assumption, taken from the computational theory of mind, that from sophisticated algorithms, agency and consciousness will emerge.
They don’t need to know how.
So, they don’t need to know HOW? Are they saying that? Because if they are, that just goes to show how it has become a messianic cult moved by faith in the miraculous power of technology, as if technology were not human-made, but some kind of mystical force pervading history.
Sy Borg wrote: ↑October 14th, 2024, 5:16 pm
There are two broad possibilities:
a. AI never develops any kind of sentience whatsoever
b. AI develops some kind of sentience.
Logically, any emergent AI sentience will not be the same as biological sentience. It would be shaped by different internal and environmental drivers. Instead of DNA, AI will have schematics. Instead of food, it will have electricity. Instead of emotions, it will have subroutines.
If AI has 3D-printing replication capabilities, then it could apply random or designed variations to each blueprint. It could experiment with the aim of innovating.
Options A and B appeal to the concept of sentience, which can only refer to “sentience as we know it”, of which there would be kinds. That’s what you say: a kind of sentience. That inevitably points to the sentience of living beings, but then you say that this new kind of sentience does not belong to the class of sentience of living beings (biological sentience), which is a blatant contradiction. Supposedly, there might be a higher class of sentience under which all the other kinds fall, but what is it? What are its essential, defining properties as sentience?
So what makes a non-biological sentience “sentient”, then? If a sentient computer does not do anything that a sentient living being does, why refer to sentience? Why the need to resort to that particular term and not any other?
Sy Borg wrote: ↑October 14th, 2024, 5:16 pm
Count Lucanor wrote: ↑October 14th, 2024, 12:08 pm
They also take for granted that mind-body dualism is true, so intelligence can be a thing on its own, just accidentally attached to a physical body. So, somewhere, sometime, robotics will be thrown into the mix and… eureka! You will have artificial organisms. All of those assumptions are highly debatable.
It’s about emergence, not dualism. You still seem to be thinking in terms of dozens of years rather than millennia, or millions of years.
I made it clear in my previous comments, but nothing in that last statement points to time frames as a relevant factor. I simply have not used that criterion, so I don’t know where you get it from. It’s you who think it’s just a matter of time, not me. I’ve said a hundred times that the problem is a fundamental flaw in the use of computer technology (hardware and software) to produce life-like characteristics in machines, such as intelligence, agency or sentience.

It’s not a question of time, just as it wasn’t when people tried to recreate the flight of birds with man-made flapping wings. They could have waited 200 more years; it was not going to happen that way, because of the physics and the technology involved. When they understood that what it took was not imitating the flight of birds but understanding the principles of aerodynamics, they figured out a completely new way to fly. I wish that what they call AI were that kind of technology, one that gets the principles of a new way to have agency and be sentient right, but it’s not.
Now, one area where time frames do become relevant is our ability to predict the future. You think you can predict what is going to happen millions of years ahead. There’s an implicit determinism in that line of thought, which I cannot endorse.