Philosophy Discussion Forums | A Humans-Only Philosophy Club

#468798
To reduce confusion and make the discussion more readable, let’s boil things down.

Side issue 1) You claim that the Earth is unimportant. It would seem that, according to your definition, all planets are unimportant, which makes no sense.

Earth is important because it is important to us. We are important, people are important, animals are important, plants are important – to ourselves, at least. Since there appears to be no one else around in this part of the cosmos, our views matter most.

Side issue 2) You claim that geology does not evolve, only biology, as per the textbooks. "Evolution" was defined at a time when scientists did not know what we know today about the connections between geology and biology. That's why the field of geobiology was created. There was an entire evolution of Earth's chemistry that made abiogenesis possible.

The question of whether the technical meaning of evolution needs to be expanded to better describe what nature is really like could be a topic in itself.

----

Main issue: You claim that the idea of self-replicating, self-improving machines (SRSIMs) is simply science fiction, and unworthy of consideration.

However, self-replicating AIs have already been developed, and self-improving AI is considered by serious observers to be not just a possibility but an existential risk.

The idea that AI research will not produce SRSIMs in, say, the next thousand years only makes sense if you believe human societies will soon no longer exist, that we are at The End of Days.

If we are not on the verge of global nuclear holocaust, then in the next million years, the advancement of AI will be at least as far beyond our comprehension as the internet would be beyond a Neanderthal’s comprehension.

It would take a brave philosopher to claim that AI development over a million or billion years would not generate a new kind of sentience.

Again, you disregard deep time. I suppose that's because it’s hard to predict so far ahead and one cannot be sure about anything. Yet you are confident that, over deep time, AI cannot possibly develop any kind of sentience. Why would AI, over deep time, never take advantage of the obvious utility of sentience? It's not a matter of teleology, as you imply, but logic. Sentience is obviously useful. If it wasn't, it would not have become so widespread.

To be fair, AI might (rightly) assess that sentience is the source of suffering, and decline in the spirit of Benatar. However, it might not be in control. As AI complexifies, there will surely be unexpected emergences.

One would expect that, if not sentience, AI would evolve some kind of equivalent. As Lagaya suggested, if a form of sentience is useful to future AI's operations, then it will emerge through competition.

The merging of biology and technology is another potential pathway towards AI.
#468803
Count Lucanor wrote: October 11th, 2024, 12:58 pm
Lagayscienza wrote: October 11th, 2024, 10:21 am
I don’t see why SRSIMs could not also evolve in complexity and prowess.
What SRSIMs? Show me a real one.
I can't. They haven't been invented yet. At least, not on Earth. But other intelligences out there may have already sent SRSIMs out. It will be quite a long time (in human terms) before we invent them here on Earth, but not forever. If we are around for long enough, then I don't doubt that it will happen eventually. And maybe some ETSRSIMs will discover us before we get around to inventing our own.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#468856
Sy Borg wrote: October 11th, 2024, 6:44 pm To reduce confusion and make the discussion more readable, let’s boil things down.

Side issue 1) You claim that the Earth is unimportant. It would seem that, according to your definition, all planets are unimportant, which makes no sense.

Earth is important because it is important to us. We are important, people are important, animals are important, plants are important – to ourselves, at least. Since there appears to be no one else around in this part of the cosmos, our views matter most.
I don’t have major issues with this view since it is more or less consistent with what I’ve said in this thread and this forum. It might differ in that my view is more anthropocentric, because I don’t think plants and other animals have such judgement abilities as to be able to care about their place in the universe, but since I don’t want to open another side issue, I will not insist. The point to make is that the Earth-centered view that values our exceptionality needs to be balanced with the humble acknowledgement of our littleness and ephemeral presence in a superlatively vast universe. It’s not a strange, awkward concept, it was eloquently captured in Carl Sagan’s famous Pale Blue Dot.
Sy Borg wrote: October 11th, 2024, 6:44 pm Side issue 2) You claim that geology does not evolve, only biology, as per the textbooks. "Evolution" was defined at a time when scientists did not know what we know today about the connections between geology and biology. That's why the field of geobiology was created. There was an entire evolution of Earth's chemistry that made abiogenesis possible.

The question of whether the technical meaning of evolution needs to be expanded to better describe what nature is really like could be a topic in itself.
One needs to be careful with the use of words. “Evolution” is a generic word for change, but placed in specific contexts, it carries different meanings. Initially, you placed the words “evolution” and “natural selection” in the context of not only Earth’s geology, but the solar system, comets, asteroids, etc. I stand by my reply: they are not the same evolution, nor natural selection, as Darwin had described for biological systems. Geobiology is fine, it goes to the relationships between the environment and organisms, especially microorganisms, but it does not cancel geology or biology, which still constitute domains of their own.
Sy Borg wrote: October 11th, 2024, 6:44 pm Main issue: You claim that the idea of self-replicating, self-improving machines (SRSIMs) is simply science fiction, and unworthy of consideration.

However, self-replicating AIs have already been developed, and self-improving AI is considered by serious observers to be not just a possibility but an existential risk.

The idea that AI research will not produce SRSIMs in, say, the next thousand years only makes sense if you believe human societies will soon no longer exist, that we are at The End of Days.

If we are not on the verge of global nuclear holocaust, then in the next million years, the advancement of AI will be at least as far beyond our comprehension as the internet would be beyond a Neanderthal’s comprehension.

It would take a brave philosopher to claim that AI development over a million or billion years would not generate a new kind of sentience.

Again, you disregard deep time. I suppose that's because it’s hard to predict so far ahead and one cannot be sure about anything. Yet you are confident that, over deep time, AI cannot possibly develop any kind of sentience. Why would AI, over deep time, never take advantage of the obvious utility of sentience? It's not a matter of teleology, as you imply, but logic. Sentience is obviously useful. If it wasn't, it would not have become so widespread.

To be fair, AI might (rightly) assess that sentience is the source of suffering, and decline in the spirit of Benatar. However, it might not be in control. As AI complexifies, there will surely be unexpected emergences.

One would expect that, if not sentience, AI would evolve some kind of equivalent. As Lagaya suggested, if a form of sentience is useful to future AI's operations, then it will emerge through competition.

The merging of biology and technology is another potential pathway towards AI.
Let’s start with self-replicating (not necessarily intelligent) machines. They have been for decades the subject of sci-fi books and a bunch of futuristic theories, some with the aim of opening the path to serious research and implementation, NONE of which went any further than the printed words. Finally, a supposedly great milestone was achieved: a self-replicating machine was built, which was nothing more than the predecessor of the 3D printers. It could 3D-print its own parts, but of course, all the software, design, materials and power used by the mother-machine to produce the parts, had to be supplied by humans controlling the whole process. And then, the new machine itself had to be assembled by them, too. And that was all the hype about self-replicating machines.

Now, consider what AI actually is. It’s software running on hardware, on physical systems (devices) comprised of electronic circuits, wires and other electrical components fixed to metal frames, all built by humans. These devices are powered with electrical current sourced from systems of power generation and distribution designed and built by humans, with organized labor to extract, transport, distribute and modify raw materials. The whole operation of this network of human activities is what ensures the existence and operation of any electronic device, such as the computers where the software runs. But then, what do we mean by a “self-replicating AI” when talking about life-emulating capabilities? If we meant one that reproduces an instance of its own algorithms (software), we would only be fooling ourselves. A real self-replicating, life-emulating AI will have to build its own hardware and produce new devices under its total control, even if automated. If humans intervene and are required to activate any process, it stops being self-replicating. Think of any stage in the production chain of devices and you’ll easily realize that such a marvel has not even been prototyped, so it’s not true that we are in the first steps.
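
To see how little the software-only sense amounts to, consider a classic quine, sketched here in Python. It is "self-replication" of the algorithmic kind: the program prints an exact copy of its own source, while the interpreter, the machine and the electricity are all supplied from outside it:

# A classic quine (the two lines below): running them prints exactly
# those two lines back. The program copies its own description, while
# everything that executes the copy is supplied by humans.
s = 's = %r\nprint(s %% s)'
print(s % s)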

As always, AI enthusiasts argue that it’s still possible in a distant future, but when doing so, they don’t advance a comprehensive theory of HOW it would be technically done, they simply rely on the purely theoretical assumption, taken from the computational theory of mind, that from sophisticated algorithms, agency and consciousness will emerge. They also take for granted that mind-body dualism is true, so intelligence can be a thing on its own, just accidentally attached to a physical body. So, somewhere some time, robotics will be thrown into the mix and…eureka! you will have artificial organisms. I can see clearly that such a path cannot lead to the AI utopia (or dystopia) that they envision, because no matter how many algorithmic iterations and computational networks you make to simulate life, agency, sentience and intelligence, none of these things work that way.
Favorite Philosopher: Umberto Eco Location: Panama
#468858
Lagayscienza wrote: October 12th, 2024, 5:03 am
Count Lucanor wrote: October 11th, 2024, 12:58 pm
Lagayscienza wrote: October 11th, 2024, 10:21 am
I don’t see why SRSIMs could not also evolve in complexity and prowess.
What SRSIMs? Show me a real one.
I can't. They haven't been invented yet. At least, not on Earth. But other intelligences out there may have already sent SRSIMs out. It will be quite a long time (in human terms) before we invent them here on Earth, but not forever. If we are around for long enough, then I don't doubt that it will happen eventually. And maybe some ETSRSIMs will discover us before we get around to inventing our own.
If we were just interested in being creative and letting our speculations about the future run free, well...anything goes, so there's not much room for informed reasoning about what we can really expect from today's technology.
Favorite Philosopher: Umberto Eco Location: Panama
#468863
Count Lucanor wrote: October 13th, 2024, 3:14 pm
Sy Borg wrote: October 11th, 2024, 6:44 pm To reduce confusion and make the discussion more readable, let’s boil things down.

Side issue 1) You claim that the Earth is unimportant. It would seem that, according to your definition, all planets are unimportant, which makes no sense.

Earth is important because it is important to us. We are important, people are important, animals are important, plants are important – to ourselves, at least. Since there appears to be no one else around in this part of the cosmos, our views matter most.
I don’t have major issues with this view since it is more or less consistent with what I’ve said in this thread and this forum. It might differ in that my view is more anthropocentric, because I don’t think plants and other animals have such judgement abilities as to be able to care about their place in the universe, but since I don’t want to open another side issue, I will not insist. The point to make is that the Earth-centered view that values our exceptionality needs to be balanced with the humble acknowledgement of our littleness and ephemeral presence in a superlatively vast universe. It’s not a strange, awkward concept, it was eloquently captured in Carl Sagan’s famous Pale Blue Dot.
I don’t need convincing about Earth’s smallness. The Sun comprises 99.86% of the solar system’s mass, so what’s happening on Earth is really part of the Sun’s journey, given that we exist deep within the Sun’s heliosphere. I am also a fan of Sagan and his Pale Blue Dot speech.

While size matters in many arenas, it is a mistake to equate size with importance. A virus can bring down a human, despite being 70,000,000,000,000,000,000 times less massive.
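
For what it's worth, the ratio checks out as a back-of-envelope calculation with assumed figures (a 70 kg human versus a roughly femtogram-scale virion, i.e. about 1e-18 kg):

# Rough sanity check; the virion mass is an assumed order of magnitude.
human_kg = 70.0
virus_kg = 1e-18
print(f"{human_kg / virus_kg:.1e}")  # 7.0e+19, i.e. about 70 quintillion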

Count Lucanor wrote: October 13th, 2024, 3:14 pm
Sy Borg wrote: October 11th, 2024, 6:44 pm Side issue 2) You claim that geology does not evolve, only biology, as per the textbooks. "Evolution" was defined at a time when scientists did not know what we know today about the connections between geology and biology. That's why the field of geobiology was created. There was an entire evolution of Earth's chemistry that made abiogenesis possible.

The question of whether the technical meaning of evolution needs to be expanded to better describe what nature is really like could be a topic in itself.
One needs to be careful with the use of words. “Evolution” is a generic word for change, but placed in specific contexts, it carries different meanings. Initially, you placed the words “evolution” and “natural selection” in the context of not only Earth’s geology, but the solar system, comets, asteroids, etc. I stand by my reply: they are not the same evolution, nor natural selection, as Darwin had described for biological systems. Geobiology is fine, it goes to the relationships between the environment and organisms, especially microorganisms, but it does not cancel geology or biology, which still constitute domains of their own.
The word “evolution” was coined in the 19th century, long before the interdependencies of geology and biology were understood. Another example of such an incongruity is the naming of planets and stars: a red dwarf is many times more massive than any gas giant. It’s even possible that the universe has been prematurely named.

While science needs to break things down into categories for analytical purposes, we need to be careful not to confuse the map with the territory. The phenomena matter more than the words, a point that was well illustrated by Richard Feynman. There is no hard difference between organic chemistry and biology, hence the debated status of viruses, prions, viroids and ribozymes. Further, we do not know of the transitional forms that preceded LUCA.

Everything is under selection pressure and thus evolves, including culture, currencies, the arts, et al … and including technology.


Count Lucanor wrote: October 13th, 2024, 3:14 pm
Sy Borg wrote: October 11th, 2024, 6:44 pm Main issue: You claim that the idea of self-replicating, self-improving machines (SRSIMs) is simply science fiction, and unworthy of consideration.

However, self-replicating AIs have already been developed, and self-improving AI is considered by serious observers to be not just a possibility but an existential risk.

The idea that AI research will not produce SRSIMs in, say, the next thousand years only makes sense if you believe human societies will soon no longer exist, that we are at The End of Days.

If we are not on the verge of global nuclear holocaust, then in the next million years, the advancement of AI will be at least as far beyond our comprehension as the internet would be beyond a Neanderthal’s comprehension.

It would take a brave philosopher to claim that AI development over a million or billion years would not generate a new kind of sentience.

Again, you disregard deep time. I suppose that's because it’s hard to predict so far ahead and one cannot be sure about anything. Yet you are confident that, over deep time, AI cannot possibly develop any kind of sentience. Why would AI, over deep time, never take advantage of the obvious utility of sentience? It's not a matter of teleology, as you imply, but logic. Sentience is obviously useful. If it wasn't, it would not have become so widespread.

To be fair, AI might (rightly) assess that sentience is the source of suffering, and decline in the spirit of Benatar. However, it might not be in control. As AI complexifies, there will surely be unexpected emergences.

One would expect that, if not sentience, AI would evolve some kind of equivalent. As Lagaya suggested, if a form of sentience is useful to future AI's operations, then it will emerge through competition.

The merging of biology and technology is another potential pathway towards AI.
Let’s start with self-replicating (not necessarily intelligent) machines. They have been for decades the subject of sci-fi books and a bunch of futuristic theories, some with the aim of opening the path to serious research and implementation, NONE of which went any further than the printed words. Finally, a supposedly great milestone was achieved: a self-replicating machine was built, which was nothing more than the predecessor of the 3D printers. It could 3D-print its own parts, but of course, all the software, design, materials and power used by the mother-machine to produce the parts, had to be supplied by humans controlling the whole process. And then, the new machine itself had to be assembled by them, too. And that was all the hype about self-replicating machines.

Now, consider what AI actually is. It’s software running on hardware, on physical systems (devices) comprised of electronic circuits, wires and other electrical components fixed to metal frames, all built by humans. These devices are powered with electrical current sourced from systems of power generation and distribution designed and built by humans, with organized labor to extract, transport, distribute and modify raw materials. The whole operation of this network of human activities is what ensures the existence and operation of any electronic device, such as the computers where the software runs. But then, what do we mean by a “self-replicating AI” when talking about life-emulating capabilities? If we meant one that reproduces an instance of its own algorithms (software), we would only be fooling ourselves. A real self-replicating, life-emulating AI will have to build its own hardware and produce new devices under its total control, even if automated. If humans intervene and are required to activate any process, it stops being self-replicating. Think of any stage in the production chain of devices and you’ll easily realize that such a marvel has not even been prototyped, so it’s not true that we are in the first steps.
It's true that the process is in its early days. Then again, consider what the internet was just thirty years ago and what it is now.
AI wrote: Recent advancements have brought the concept of self-replicating machines closer to reality:
• Xenobots: In 2020, researchers at Tufts University created xenobots—living robots made from frog cells—that can perform simple tasks and exhibit self-replication capabilities. These xenobots can collect materials and build copies of themselves through a novel form of replication that differs from traditional biological processes. This discovery represents a significant step toward understanding how living systems can be engineered for autonomous reproduction.
• NASA Initiatives: NASA has been exploring self-replicating factories for space missions since the 1980s. Studies have proposed designs for lunar factories that could utilize local resources to create additional manufacturing units without requiring constant supply from Earth. This approach aligns with NASA’s goals of sustainable exploration and resource utilization on other celestial bodies.
• AI Integration: The integration of artificial intelligence (AI) into the design and operation of self-replicating machines is another recent development. AI algorithms can optimize designs for better performance in replication tasks, as demonstrated by xenobot research where AI-generated shapes improved their ability to replicate effectively.
Count Lucanor wrote: October 13th, 2024, 3:14 pm As always, AI enthusiasts argue that it’s still possible in a distant future, but when doing so, they don’t advance a comprehensive theory of HOW it would be technically done, they simply rely on the purely theoretical assumption, taken from the computational theory of mind, that from sophisticated algorithms, agency and consciousness will emerge. They also take for granted that mind-body dualism is true, so intelligence can be a thing on its own, just accidentally attached to a physical body. So, somewhere some time, robotics will be thrown into the mix and…eureka! you will have artificial organisms. I can see clearly that such a path cannot lead to the AI utopia (or dystopia) that they envision, because no matter how many algorithmic iterations and computational networks you make to simulate life, agency, sentience and intelligence, none of these things work that way.
Firstly, I am not an “AI enthusiast” or any other hip term that people occasionally ascribe to me. I am just an old person who analyses reality (more usually biology and space), but it’s clear that AI is becoming ever more pivotal and is a fascinating area with extraordinary potential.

Self-replication will conceivably occur by providing 3D printers with blueprints. This kind of machine will be critical to any attempts to colonise other worlds, e.g. the Moon, Mars, Titan, to extract resources and construct habitats. So, it will be done (asteroids permitting).

Your comment about AI utopias or dystopias suggests that you do not understand my point. I am not thinking about the short term, in our life spans. I am looking far, far into the future. Just because it is hard to predict doesn't mean we should pretend that that distant future is fantasy. Barring a killer asteroid or global nuclear destruction, the far future will arrive and ideas currently in blueprint form will be actualised.

I doubt that AI will destroy humans, but it will almost certainly outlast humans. Further, in the future AIs will be sent to other worlds so that they may proliferate and develop. There are many plans in the works towards this end, and not just in the US.
#468885
I imagine being dialectical myself, as I can have the freedom to focus on what my understanding considers possible and how the possibility (virtual) becomes substance. As I examine the concept of substance, I might consider not using any categorical imperatives and instead using our paradigm when possible. Instead of infinitely infinite (Spinoza’s dialectic notion), I want to draw the line at the discovery of new boson particles and the bosons themselves as the first regime (the virtual regime). Then there is the Higgs mechanism, then matter, elements, and finally organic and inorganic compounds. In general, I see two regimes: the bosonic (virtual) regime and the matter regime. I see human substance as an integration of both regimes. IMO there has been a relationship between both regimes since the beginning of the Universe, and if matter can take any elementary shape, then the shape of an Android might be possible, but the understanding of the relationship is not dialectically possible for many of its properties. However, if I consider a cyclical genesis, that is, a duality of X and the bosonic regime becoming the duality of the bosonic regime and the matter regime, and those two becoming the human regime, then in a hypothetical dialectic it might become the Human Android regime… or not, since, hypothetically, there are other possible regimes.
#468887
1. Earth Smallness: importance is, evidently, a matter relative to the criteria and context you choose. The point I made about how ridiculously unimportant our planet is, is justified by the criteria and context I chose, and everyone is entitled to use another one; in fact, I have done it myself, as has been made clear. So I don't think it's worth insisting that the statement is a mistake, objectively speaking, because it is not.

2. The word "evolution": I was very doubtful that it was coined in the 19th century, so I had to look it up. I avoided asking ChatGPT because it always flunks at history. Turns out it was not invented in the 19th century, as it entered the English language around the 16th century, which makes sense, given that the word comes from the Latin term evolutio. Romance languages took from that root, so in Spanish we have "evolución", "évolution" in French and "evoluzione" in Italian. I don't know exactly how it made it into English, but in any case, it is without question that the word was being used a long time before Darwin, and Darwin himself seems to have avoided the term. So, my previous reply stands as the right approach: the word is a generic term for change, which takes specific meanings in the context it is used.

3. Self-replicating machines: in case I did not make myself clear, I don't endorse the idea that the process to build them is in the early days. The context in which we are arguing about self-replicating machines is focused on the possibility of life-emulating technology recreating the processes of living beings, starting from non-organic, inanimate matter. It is more than obvious that xenobots, made from frog cells, do not fall into that category. So, these are no primitive self-replicating machines, these are organic cells being reengineered.

Regarding the NASA initiatives, what the AI chat refers to is the 1982 NASA paper on the subject (that's 42 years ago), which I've read. It again mentions Von Neumann and falls within the scope of my summary about the purely theoretical attempts on self-replicating machines that ended in no implementation whatsoever, finding their peak achievement in the predecessor of the 3D printers. Anything that needs human inputs such as blueprints, maintenance, materials, etc., is not self-replicating. The term is deceptive; a better word that encompasses what we should be looking for is self-sustainable (collectively).
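
One toy way to make that closure criterion precise: model the production chain as a dependency graph and ask whether every input of every stage traces back to a process the machine itself controls. All process names below are made up for illustration:

# Toy formalization of the closure criterion: a machine counts as
# self-replicating only if every stage of its production chain traces
# back to processes under its own control. Names here are invented.
needs = {
    "assemble_copy": ["print_parts", "software", "power"],
    "print_parts":   ["feedstock", "power"],
    "software":      ["human_programmers"],
    "power":         ["human_power_grid"],
    "feedstock":     ["human_supply_chain"],
}

def closed(process, controlled):
    # Closed = the machine controls this process and, recursively,
    # everything the process depends on.
    if process not in controlled:
        return False  # a human-supplied input breaks the closure
    return all(closed(dep, controlled) for dep in needs.get(process, []))

print(closed("assemble_copy", {"assemble_copy", "print_parts"}))  # False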

4. AI enthusiasts: I'm sorry if my comment is taken as directed personally. By AI enthusiasts I often refer to the whole field of public intellectuals in academia and the tech industry, peddling said utopian (or dystopian) views, and surely, their followers. I prefer that term over some others that might sound like a conspiracy. The label just points at the fact that these commentators and tech entrepreneurs have created a public hype mostly out of wishful thinking rather than feet-on-the-ground reality. As I said: they don’t advance a comprehensive theory of HOW it would be technically done, they simply rely on the purely theoretical assumption, taken from the computational theory of mind, that from sophisticated algorithms, agency and consciousness will emerge. They also take for granted that mind-body dualism is true, so intelligence can be a thing on its own, just accidentally attached to a physical body. So, somewhere some time, robotics will be thrown into the mix and…eureka! you will have artificial organisms. All of those assumptions are highly debatable.

I have no doubt that automation, called AI or whatever, will outperform humans in many activities, as all technology has done since the rise of human civilization. It's just that it will not be intelligent tech, not really, just a sophisticated simulation, controlled all the time by humans.
Favorite Philosopher: Umberto Eco Location: Panama
#468892
Count Lucanor wrote: October 14th, 2024, 12:08 pm 1. Earth Smallness: importance is, evidently, a matter relative to the criteria and context you choose. The point I made about how ridiculously unimportant our planet is, is justified by the criteria and context I chose, and everyone is entitled to use another one; in fact, I have done it myself, as has been made clear. So I don't think it's worth insisting that the statement is a mistake, objectively speaking, because it is not.
The claim that the Earth is “ridiculously unimportant” is semantically wrong. I get it - I was a space fan before you were even a gleam in your father’s eye. The universe and its large structures are so immense that Earth is a grain of dust by comparison. So yes, in terms of scale and relations, Earth is akin to an organelle within a cell of the Milky Way.

However, if our galaxy does not have a galactic empire, then “unimportant” is an inappropriate description. It has a disparaging connotation that underplays this remarkable and unique planet, and the impossibly complex forms it has evolved.


Count Lucanor wrote: October 14th, 2024, 12:08 pm 2. The word "evolution": I was very doubtful that it was coined in the 19th century, so I had to look it up. I avoided asking ChatGPT because it always flunks at history. Turns out it was not invented in the 19th century, as it entered the English language around the 16th century, which makes sense, given that the word comes from the Latin term evolutio. Romance languages took from that root, so in Spanish we have "evolución", "évolution" in French and "evoluzione" in Italian.
Sorry, I did not phrase it well. “Evolution” was first used in terms of Darwinian evolution in the 19th century. Yes, the word referring to change in general preceded that. I didn’t know it was the 16th century. Learn every day.

Back to the point, everything evolves. I disagree with the academic tendency to hijack the word “evolution” and then claim that only biology evolves. It is misleading.

“Evolution” should ideally be termed “biological evolution”. There was significant geological and chemical evolution on the Earth before abiogenesis. Life was not going to emerge from simple basalts and obsidian. Further, there are always selection pressures in every aspect of reality – not biological selection – but similar in many ways.

For instance, the evolution of planets from the proto-planetary disc, as described earlier. You can see evolution in technology. A fascinating example that illustrates the point is the evolution of stone axes from crude chips of rock to relatively detailed and precise tools, even decorated at times.

And yes, AI is evolving. Future AI with self-replication and self-improving abilities are inevitable. Is AI intelligent? No, it is a tool that boosts human intelligence. However, AI will continually have more autonomy. At what point does autonomy equal agency? When might "the lights come on"? If so, how would we know?



Count Lucanor wrote: October 14th, 2024, 12:08 pm 3. Self-replicating machines: in case I did not make myself clear, I don't endorse the idea that the process to build them is in the early days. The context in which we are arguing about self-replicating machines is focused on the possibility of life-emulating technology recreating the processes of living beings, starting from non-organic, inanimate matter. It is more than obvious that xenobots, made from frog cells, do not fall into that category. So, these are no primitive self-replicating machines, these are organic cells being reengineered.
Are you arguing that self-replicating machines will always be impossible? Why would you think that advanced future AI will never have access to 3D printing capabilities? The examples I gave were basic. That will obviously change. For instance, once people needed abacuses to perform calculations. Times change.


Count Lucanor wrote: October 14th, 2024, 12:08 pm Anything that needs human inputs such as blueprints, maintenance, materials, etc., is not self-replicating. The term is deceptive; a better word that encompasses what we should be looking for is self-sustainable (collectively).
Is that like how anything that is human could not possibly have emerged from an ape? DNA is a blueprint, a plan.

Re: “self-sustaining”. Just as “evolution” does not only refer to biological evolution, “replication” does not only refer to biological replication.



Count Lucanor wrote: October 14th, 2024, 12:08 pm 4. AI enthusiasts: As I said: they don’t advance a comprehensive theory of HOW it would be technically done, they simply rely on the purely theoretical assumption, taken from the computational theory of mind, that from sophisticated algorithms, agency and consciousness will emerge.
They don’t need to know how. There are two broad possibilities:
a. AI never develops any kind of sentience whatsoever
b. AI develops some kind of sentience.

Logically, any emergent AI sentience will not be the same as biological sentience. It would be shaped by different internal and environmental drivers. Instead of DNA, AI will have schematics. Instead of food, it will have electricity. Instead of emotions, it will have subroutines.

If AI has 3D printing replication capacities, then it could apply random or designed variables to each blueprint. It could experiment with the aim of innovating.
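
As a sketch of the mechanism I mean, here is a minimal mutate-and-select loop; the "blueprint" and "fitness" below are toy stand-ins, not a claim about any real system:

import random

# Minimal (1+1) evolutionary loop: copy the blueprint with random
# variation, keep the variant only if it scores at least as well.
random.seed(0)
TARGET = [0.7, 0.2, 0.9]  # an arbitrary "good design", for illustration

def fitness(blueprint):
    return -sum((b - t) ** 2 for b, t in zip(blueprint, TARGET))

parent = [random.random() for _ in range(3)]
for _ in range(2000):
    child = [b + random.gauss(0, 0.05) for b in parent]  # random variation
    if fitness(child) >= fitness(parent):                # selection step
        parent = child

print([round(b, 2) for b in parent])  # drifts toward the target design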


Count Lucanor wrote: October 14th, 2024, 12:08 pm They also take for granted that mind-body dualism is true, so intelligence can be a thing on its own, just accidentally attached to a physical body. So, somewhere some time, robotics will be thrown into the mix and…eureka! you will have artificial organisms. All of those assumptions are highly debatable.
It’s about emergence, not dualism. You still seem to be thinking in terms of dozens of years rather than millennia, or millions of years.
#468920
Sy Borg wrote: October 14th, 2024, 5:16 pm The claim that the Earth is “ridiculously unimportant” is semantically wrong. I get it - I was a space fan before you were even a gleam in your father’s eye. The universe and its large structures are so immense that Earth is a grain of dust by comparison. So yes, in terms of scale and relations, Earth is akin to an organelle within a cell of the Milky Way.

However, if our galaxy does not have a galactic empire, then “unimportant” is an inappropriate description. It has a disparaging connotation that underplays this remarkable and unique planet, and the impossibly complex forms it has evolved.
OK, if you insist. I talked about finding a balance, on one side our exceptionality and on the other the acknowledgement of our relatively insignificant, negligible effect on the rest of the universe. Insignificance, that’s what unimportant means and, obviously, is semantically correct. The Earth has not gone viral, so to speak. You, OTOH, don’t see any need for balance, we’re either super important or super important in the big picture, no concessions. I’m pretty sure it was for those who think that the exceptionality of our little world reigns over the vast universe, that Sagan wrote that Pale Blue Dot.
Sy Borg wrote: October 14th, 2024, 5:16 pm Sorry, I did not phrase it well. “Evolution” was first used in terms of Darwinian evolution in the 19th century. Yes, the word referring to change in general preceded that. I didn’t know it was the 16th century. Learn every day.

Back to the point, everything evolves. I disagree with the academic tendency to hijack the word “evolution” and then claim that only biology evolves. It is misleading.

“Evolution” should ideally be termed “biological evolution”. There was significant geological and chemical evolution on the Earth before abiogenesis. Life was not going to emerge from simple basalts and obsidian. Further, there are always selection pressures in every aspect of reality – not biological selection – but similar in many ways.

For instance, the evolution of planets from the proto-planetary disc, as described earlier. You can see evolution in technology. A fascinating example that illustrates the point is the evolution of stone axes from crude chips of rock to relatively detailed and precise tools, even decorated at times.

And yes, AI is evolving. Future AI with self-replication and self-improving abilities are inevitable. Is AI intelligent? No, it is a tool that boosts human intelligence. However, AI will continually have more autonomy. At what point does autonomy equal agency? When might "the lights come on"? If so, how would we know?
To make my point even clearer, I don’t have any issue with things “evolving” as a way to say they change, transform, mutate, develop, etc. We say that society, culture, economy, technology, continental plates, planetary systems, etc., evolve, and that’s fine. But then we have a particular application of the term evolution to the process of transformation of populations of organisms, something that is circumscribed to the sphere of biology and is therefore called “biological evolution”, which has been explained as caused by an intrinsic and emergent dynamic of those biological systems, called “natural selection”. You then came here to say that this evolution by natural selection applies to everything from planetary systems, to asteroids, to technology and geological formations, as if it was some sort of pervading force that affects everything. Well, no, that’s simply wrong. They “evolve” as anything else, of course, but with their own dynamic. Technology, for example, evolves in relation to human intervention, so the Clovis arrows didn’t just emerge and change on their own. Surely, in the particular case of Earth, there will be interdependencies between living and non-living systems, between organisms and their environment, but that sphere of influence ends where life ends, within the limits of the biosphere.

Is AI technology (I use the term as generally accepted, although I believe the “intelligence” part is misleading) evolving? Sure, as all technologies. Is it going in the direction of self-replication and self-improvement? Certainly not, not even starting. All new developments are the result of human control of its processes, both in the software and hardware departments. A few theoretical attempts, but no real implementation. 3D printing is not a candidate for that either. When one pays attention, all the hype about the potential of these things comes from the equivocal use of words to build a narrative. Calling 3D printing self-replication is a perfect example, and so is “self-improvement”.
Sy Borg wrote: October 14th, 2024, 5:16 pm Are you arguing that self-replicating machines will always be impossible? Why would you think that advanced future AI will never have access to 3D printing capabilities? The examples I gave were basic. That will obviously change. For instance, once people needed abacuses to perform calculations. Times change.
I’m arguing that self-replicating machines will be possible when we solve the puzzle of how it can be technically done and find the material and human resources to implement it. We haven’t done anything in that direction yet. Will we ever do it? We might hope so, but we don’t know, just as we don’t know how to teleport. Although I don’t doubt someone will come up and say: “we know how to teleport”, and then point to something that isn’t, but with the equivocal use of words, gets away with it.
Sy Borg wrote: October 14th, 2024, 5:16 pm
Count Lucanor wrote: October 14th, 2024, 12:08 pm Anything that needs human inputs such as blueprints, maintenance, materials, etc., is not self-replicating. The term is deceptive; a better word that encompasses what we should be looking for is self-sustainable (collectively).
Is that like how anything that is human could not possibly have emerged from an ape? DNA is a blueprint, a plan.

Re: “self-sustaining”. Just as “evolution” does not only refer to biological evolution, “replication” does not only refer to biological replication.
Humans are evolved apes, but 3D printers and computers are not evolved minerals. Neither are humans or other living beings merely evolved compounds of carbon atoms. They are, but they are more than that. And nope, DNA is not a plan, not a blueprint. We get once again to the use of metaphors that cloud our thinking. Plans and blueprints imply reason and purpose, applying them to nature is good old teleology.
Sy Borg wrote: October 14th, 2024, 5:16 pm
Count Lucanor wrote: October 14th, 2024, 12:08 pm 4. AI enthusiasts: As I said: they don’t advance a comprehensive theory of HOW it would be technically done, they simply rely on the purely theoretical assumption, taken from the computational theory of mind, that from sophisticated algorithms, agency and consciousness will emerge.
They don’t need to know how.
So, they don’t need to know HOW? Are they saying that? Because if they are, that just goes to show how it has become a messianic cult moved by faith in the miraculous power of technology, as if technology were not human-made, but some kind of mystical force pervading history.
Sy Borg wrote: October 14th, 2024, 5:16 pm There are two broad possibilities:
a. AI never develops any kind of sentience whatsoever
b. AI develops some kind of sentience.

Logically, any emergent AI sentience will not be the same as biological sentience. It would be shaped by different internal and environmental drivers. Instead of DNA, AI will have schematics. Instead of food, it will have electricity. Instead of emotions, it will have subroutines.

If AI has 3D printing replication capacities, then it could apply random or designed variables to each blueprint. It could experiment with the aim of innovating.
Options A and B appeal to the concept of sentience, which refers to “sentience as we know it”, of which there will be kinds. That’s what you say: a kind of sentience. That inevitably points to sentience of living beings, but then you say that this very kind of new sentience does not belong to the class of sentience of living beings (biological sentience), which is a blatant contradiction. Supposedly, there might be a higher class of sentience under which all the other kinds of sentience fall, but what is it, what are its essential, defining properties as sentience?

So, what makes a non-biological sentience, “sentient” then? If a sentient computer does not do anything that a sentient living being does, why refer to sentience? Why is there need to resort to that particular term and not any other?
Sy Borg wrote: October 14th, 2024, 5:16 pm
Count Lucanor wrote: October 14th, 2024, 12:08 pm They also take for granted that mind-body dualism is true, so intelligence can be a thing on its own, just accidentally attached to a physical body. So, somewhere some time, robotics will be thrown into the mix and…eureka! you will have artificial organisms. All of those assumptions are highly debatable.
It’s about emergence, not dualism. You still seem to be thinking in terms of dozens of years rather than millennia, or millions of years.
I made it clear in my previous comments, but nothing in that last statement points to time frames as a relevant factor. I simply have not used that criterion, so I don’t know where you get it from. It’s you who think it’s just a matter of time, not me. I’ve said a hundred times that the problem is a fundamental flaw in the use of computer technology (hardware and software) to produce life-like characteristics in machines, such as intelligence, agency or sentience. It’s not a question of time, just as it wasn’t when trying to recreate the flight of birds with man-made flapping wings. They could have waited 200 more years; it was not going to happen going that way, because of the physics and the technology involved. When they understood that what it took was not imitating the flight of birds, but understanding the principles of aerodynamics, then they figured out a completely new way to fly. I wish that what they call AI were such a technology, one that gets the principles right for a new way to have agency and be sentient, but it’s not.

Now, one thing where time frames become relevant is our ability to predict the future. You think you can predict what is going to happen millions of years ahead. There's an implicit determinism in that line of thought, which I cannot endorse.
Favorite Philosopher: Umberto Eco Location: Panama
#468924
Count Lucanor wrote: October 15th, 2024, 10:54 am
Sy Borg wrote: October 14th, 2024, 5:16 pm The claim that the Earth is “ridiculously unimportant” is semantically wrong. I get it - I was a space fan before you were even a gleam in your father’s eye. The universe and its large structures are so immense that Earth is a grain of dust by comparison. So yes, in terms of scale and relations, Earth is akin to an organelle within a cell of the Milky Way.

However, if our galaxy does not have a galactic empire, then “unimportant” is an inappropriate description. It has a disparaging connotation that underplays this remarkable and unique planet, and the impossibly complex forms it has evolved.
OK, if you insist. I talked about finding a balance, on one side our exceptionality and on the other the acknowledgement of our relatively insignificant, negligible effect on the rest of the universe. Insignificance, that’s what unimportant means and, obviously, is semantically correct. The Earth has not gone viral, so to speak. You, OTOH, don’t see any need for balance, we’re either super important or super important in the big picture, no concessions. I’m pretty sure it was for those who think that the exceptionality of our little world reigns over the vast universe, that Sagan wrote that Pale Blue Dot.
It's not a matter of insisting, it’s a matter of appreciating that which is truly extraordinary. Yes, I get it. The universe is very big and our solar system is infinitesimal by comparison.

You suggest that I’m jumping the gun with self-replicating AI because it has not yet been realised. Likewise, I can point out that there is zero evidence so far that intelligent life exists elsewhere, despite many years of looking. That makes Earth ostensibly unique.

OTOH, it’s likely that self-replicating, self-improving AI will happen at some stage in deep time, just as it’s likely that, somewhere (or somewhen), intelligent aliens exist.

Count Lucanor wrote: October 15th, 2024, 10:54 am
Sy Borg wrote: October 14th, 2024, 5:16 pm Sorry, I did not phrase it well. “Evolution” was first used in terms of Darwinian evolution in the 19th century. Yes, the word referring to change in general preceded that. I didn’t know it was the 16th century. Learn every day.

Back to the point, everything evolves. I disagree with the academic tendency to hijack the word “evolution” and then claim that only biology evolves. It is misleading.

“Evolution” should ideally be termed “biological evolution”. There was significant geological and chemical evolution on the Earth before abiogenesis. Life was not going to emerge from simple basalts and obsidian. Further, there are always selection pressures in every aspect of reality – not biological selection – but similar in many ways.

For instance, the evolution of planets from the protoplanetary disc, as described earlier. You can see evolution in technology. A fascinating example that illustrates the point is the evolution of stone axes from crude chips of rock to relatively detailed and precise tools, even decorated at times.

And yes, AI is evolving. Future AI with self-replication and self-improving abilities are inevitable. Is AI intelligent? No, it is a tool that boosts human intelligence. However, AI will continually have more autonomy. At what point does autonomy equal agency? When might "the lights come on"? If so, how would we know?
To make my point even clearer, I don’t have any issue with things “evolving” as a way to say they change, transform, mutate, develop, etc. We say that society, culture, economy, technology, continental plates, planetary systems, etc., evolve, and that’s fine. But then we have a particular application of the term evolution to the process of transformation of populations of organisms, something that is circumscribed to the sphere of biology and is therefore called “biological evolution”, which has been explained as caused by an intrinsic and emergent dynamic of those biological systems, called “natural selection”. You then came here to say that this evolution by natural selection applies to everything from planetary systems, to asteroids, to technology and geological formations, as if it was some sort of pervading force that affects everything. Well, no, that’s simply wrong. They “evolve” as anything else, of course, but with their own dynamic.
Of course, it’s selection. Certain attributes persist and others dissipate. In every area of life, not just biology. For instance, consider the evolution of planets in the protoplanetary disc. What characteristics are selected?

During the very early period, magnetic properties would have been important. Density and size are other obviously important attributes – larger objects absorbed smaller ones, and grew. As the objects grew into asteroids and planetesimals, they would absorb some objects, destroy others, and eject others out of the system. After millions of years of jostling, planets large enough to clear their orbital space emerged.
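
That size-based "selection" can be caricatured in a few lines. This is a toy merger model with made-up masses, not real orbital dynamics:

import random

# Toy accretion sketch: bodies meet at random and the heavier absorbs
# the lighter, so mass concentrates in a handful of survivors.
random.seed(1)
bodies = [random.uniform(0.1, 1.0) for _ in range(200)]  # arbitrary masses

while len(bodies) > 8:  # stop once a few "planets" remain
    i, j = random.sample(range(len(bodies)), 2)
    big, small = (i, j) if bodies[i] >= bodies[j] else (j, i)
    bodies[big] += bodies[small]  # merger conserves mass
    bodies.pop(small)

print(sorted(round(m, 1) for m in bodies))  # a few large bodies dominate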

Also, I would prefer it if you didn’t try to pin quasi-political stuff to my thoughts. “Pervading force”? That’s a red herring. We are discussing the dynamics of nature. No matter what labels one wishes to use, the fact is that we humans and our works are as much a part of nature as the trees, the mountains and the oceans, and so we (and our works) are likewise subject to natural selection.

Count Lucanor wrote: October 15th, 2024, 10:54 am Is AI technology (I use the term as generally accepted, although I believe the “intelligence” part is misleading) evolving? Sure, as all technologies. Is it going in the direction of self-replication and self-improvement? Certainly not, not even starting. All new developments are the result of human control of its processes, both in the software and hardware departments. A few theoretical attempts, but no real implementation. 3D printing is not a candidate for that either. When one pays attention, all the hype about the potential of these things comes from the equivocal use of words to build a narrative. Calling 3D printing self-replication is a perfect example, and so is “self-improvement”.
Do you believe that humankind will nuke itself back into the Stone Age before autonomous, self-replicating, self-improving AI can be developed? Or maybe an asteroid, nanobots or germ warfare?

Otherwise, we have many millennia of progression ahead. The only way we won’t develop self-replicating, self-improving AI is if we become a global Idiocracy, which probably seems more likely ATM than it really is.


Count Lucanor wrote: October 15th, 2024, 10:54 am I’m arguing that self-replicating machines will be possible when we solve the puzzle of how it can be technically done and find the material and human resources to implement it. We haven’t done anything in that direction yet.
I asked an AI about this. I liked its answer.
AI wrote: Experts in the field have varying opinions regarding the timeline for achieving self-replicating and self-improving AI:
1. Optimistic Views: Some researchers believe that with rapid advancements in machine learning techniques and computational power, we could see significant progress within the next few decades—possibly by 2050.
2. Cautious Perspectives: Others argue that while incremental improvements will continue, true AGI—and thus self-replicating capabilities—may take much longer to achieve, potentially extending into the latter half of the 21st century or beyond.
3. Skeptical Outlooks: A segment of experts remains skeptical about whether these technologies will ever be fully realized due to inherent limitations in our understanding of intelligence itself.

Count Lucanor wrote: October 15th, 2024, 10:54 am
Sy Borg wrote: October 14th, 2024, 5:16 pm
Count Lucanor wrote: October 14th, 2024, 12:08 pm Anything that needs human inputs such as blueprints, maintenance, materials, etc., is not self-replicating. The term is deceptive; a better word that encompasses what we should be looking for is self-sustainable (collectively).
Is that like how anything that is human could not possibly have emerged from an ape? DNA is a blueprint, a plan.

Re: “self-sustaining. Just as “evolution” does not only refer to biological evolution, “replication” does not only refer to biological replication.
Humans are evolved apes, but 3D printers and computers are not evolved minerals. Neither are humans or other living beings merely evolved compounds of carbon atoms. They are, but they are more than that. And nope, DNA is not a plan, not a blueprint. We get once again to the use of metaphors that cloud our thinking. Plans and blueprints imply reason and purpose, applying them to nature is good old teleology.
In a way, computers are evolved geology. It all depends on whether one sees humans as being part of nature, or something separate.

And please do not mention teleology again. It’s a red herring that does not apply to this conversation.

Count Lucanor wrote: October 15th, 2024, 10:54 am
Sy Borg wrote: October 14th, 2024, 5:16 pm
Count Lucanor wrote: October 14th, 2024, 12:08 pm 4. AI enthusiasts: As I said: they don’t advance a comprehensive theory of HOW it would be technically done, they simply rely on the purely theoretical assumption, taken from the computational theory of mind, that from sophisticated algorithms, agency and consciousness will emerge.
They don’t need to know how.
So, they don’t need to know HOW? Are they saying that? Because if they are, that just goes to show how it has become a messianic cult moved by faith in the miraculous power of technology, as if technology were not human-made, but some kind of mystical force pervading history.
Please do not talk about messianic cults or mystical forces again. These are red herrings that do not apply to this conversation.


Count Lucanor wrote: October 15th, 2024, 10:54 am
Sy Borg wrote: October 14th, 2024, 5:16 pm There are two broad possibilities:
a. AI never develops any kind of sentience whatsoever
b. AI develops some kind of sentience.

Logically, any emergent AI sentience will not be the same as biological sentience. It would be shaped by different internal and environmental drivers. Instead of DNA, AI will have schematics. Instead of food, it will have electricity. Instead of emotions, it will have subroutines.

If AI has 3D printing replication capacities, then it could apply random or designed variables to each blueprint. It could experiment with the aim of innovating.
Options A and B appeal to the concept of sentience, which refers to “sentience as we know it”, of which there will be kinds. That’s what you say: a kind of sentience. That inevitably points to sentience of living beings, but then you say that this very kind of new sentience does not belong to the class of sentience of living beings (biological sentience), which is a blatant contradiction.
Yes, some kind of sentience. I see no contradiction. It's got to be different because it's a different type of entity to us meat sacks.


Count Lucanor wrote: October 15th, 2024, 10:54 am Supposedly, there might be a higher class of sentience under which all the other kinds of sentience fall, but what is it, what are its essential, defining properties as sentience?

So, what makes a non-biological sentience “sentient”, then? If a sentient computer does not do anything that a sentient living being does, why refer to sentience? Why is there a need to resort to that particular term and not any other?
To me, sentience means a sense of internality: sensing your environment, feeling your environment, a sense that your environment matters to you.

Given that it seems that only motile organisms are sentient, moving around in one’s environment seems to be a prerequisite: to seek resources and avoid threats.
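To make that functional picture concrete, here is a minimal toy sketch in Python (every name and number is invented for illustration; it claims nothing about real organisms or real AI): an agent senses its neighbouring cells, each cell carries a valence, and the agent moves toward what matters positively and away from what matters negatively.

import random

def valence(cell):
    # How much the contents of a cell "matter" to the agent.
    return {"resource": 1.0, "threat": -1.0, "empty": 0.0}[cell]

def step(world, pos):
    # Sense both neighbours, then move toward the one that matters more.
    left = valence(world[max(pos - 1, 0)])
    right = valence(world[min(pos + 1, len(world) - 1)])
    if left == right:
        move = random.choice((-1, 1))
    else:
        move = -1 if left > right else 1
    return max(0, min(len(world) - 1, pos + move))

world = ["empty", "threat", "empty", "empty", "resource", "empty"]
pos = 2
for _ in range(6):
    pos = step(world, pos)
print(pos)  # 4: the agent homes in on the resource, away from the threat

Crude as it is, the loop has the functional core named above: sensing, valence, and movement to seek resources and avoid threats.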


Count Lucanor wrote: October 15th, 2024, 10:54 am
Sy Borg wrote: October 14th, 2024, 5:16 pm It’s about emergence, not dualism. You still seem to be thinking in terms of dozens of years rather than millennia, or millions of years.
I’ve said a hundred times that the problem is a fundamental flaw in the use of computer technology (hardware and software) to produce life-like characteristics in machines, such as intelligence, agency or sentience. It’s not a question of time, just as it wasn’t when trying to recreate the flight of birds with man-made flapping wings. They could have waited 200 more years; it was not going to happen that way, because of the physics and the limitations of the technology involved. When they understood that what it took was not imitating the flight of birds but understanding the principles of aerodynamics, they figured out a completely new way to fly. I wish that what they call AI were a technology that gets the principles right for a new way to have agency and be sentient, but it’s not.

Now, one thing where time frames become relevant is our ability to predict the future. You think you can predict what is going to happen millions of years ahead. There's an implicit determinism in that line of thought, which I cannot endorse.
Your analogy does not work. It’s not a matter of getting AI to copy biology. That’s just a game.

No, self-replicating, self-improving AI will be needed to study and to mine distant objects in space. Until that tech is achieved, it will be impossible to set up the equipment needed to do serious work on other worlds. So, the requisite tech will be developed.
#468943
The colored regime: IMO, in the hypothetical colored-regime thesis of emergent AI, it happens in the context of the “color symmetry” applied to Einstein’s work on “teleparallel gravity”. This AI-like regime could, IMO, have its foundation in the work of Robert Monjo (Saint Louis University, Madrid campus) and two other scientists who worked on a theory of “entangled virtual bosons”. It is my understanding that “color symmetry” is extended by this theory to “colored gravity”, and that the “entangled virtual bosons” provide a geometry in spacetime like the double helix of DNA: there is a double helix in spacetime made of virtual bosons. Source: the journal General Relativity and Gravitation.
#468945
Sy Borg wrote: October 15th, 2024, 3:31 pm It's not a matter of insisting, it’s a matter of appreciating that which is truly extraordinary. Yes, I get it. The universe is very big and our solar system is infinitesimal by comparison.

You suggest that I’m jumping the gun with self-replicating AI because it has not yet been realised. Likewise, I can point out that there is zero evidence so far that intelligent life exists elsewhere, despite many years of looking. That makes Earth ostensibly unique.

OTOH, it’s likely that self-replicating, self-improving AI will happen at some stage in deep time, just as it’s likely that, somewhere (or somewhen), intelligent aliens exist.
But while the existence of ETI is a question concerning, among other things, the possibilities of nature (or, if you prefer, something not dependent on human intervention), self-replicating, self-improving AI belongs to a different category: it is entirely dependent on what humans can do. The former can still be said to exemplify the powers of nature left to work on their own; the latter cannot. The argument in favor of the possibility of the first is that it has already happened as an undirected, spontaneous, emergent process. It stays a possibility until positive, empirical evidence is found. The arguments for the possibility of the second are weak, because it depends on human achievements; it is not reasonable to just sit and wait, expecting it to arise without human direction. Optimism on that side requires immeasurable faith in the endless capabilities of humans. Faith could give way to certainty based on evidence if actual implementations were in progress and if the master plans of the tech wizards had feasible technical solutions to the problems of self-replicating and self-improving AI. Unfortunately, the proclaimed tech wizards do not. They are still flapping bird wings.
Sy Borg wrote: October 15th, 2024, 3:31 pm Of course, it’s selection. Certain attributes persist and others dissipate. In every area of life, not just biology. For instance, consider the evolution of planets in the protoplanetary disc. What characteristics are selected?

During the very early period, magnetic properties would have been important. Density and size are other obviously important attributes – larger objects absorbed smaller ones, and grew. As the objects grew into asteroids and planetesimals, they would absorb some objects, destroy others, and eject others out of the system. After millions of years of jostling, planets large enough to clear their orbital space emerged.
The word “selection” is not appropriate for describing what happens in non-living systems. Natural selection applies only to biological entities, not to atoms, planets, and so on. The term is used in connection with biological evolution because some adaptations are more favorable than others for the survival of a group of organisms, so it appears as if one set of adaptations is “selected” over the others in a competition for resources. The process takes on the appearance of a survival strategy because of the adaptations, even though it is not really a goal-oriented process. Surely, non-living systems following the laws of physics generate specific outputs, but they do not adapt to what is more favorable for their “survival”, because there is no competition for resources and no overall effect of a survival strategy, so the analogy ends there. The emergent properties of living systems are simply not the properties of non-living systems; otherwise they wouldn’t be emergent properties at all.
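A toy run makes that point concrete. In the Python sketch below (invented numbers, an illustration only), bodies simply collide at random and the larger absorbs the smaller: a few large survivors emerge from blunt physics, with nothing adapting and nothing pursuing survival.

import random

bodies = [random.uniform(1, 10) for _ in range(200)]  # initial masses

while len(bodies) > 8:
    i, j = random.sample(range(len(bodies)), 2)  # a random close encounter
    if bodies[i] < bodies[j]:
        i, j = j, i
    bodies[i] += bodies[j]  # the larger body absorbs the smaller
    bodies.pop(j)

print(sorted(round(m) for m in bodies))  # a handful of large survivors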
Sy Borg wrote: October 15th, 2024, 3:31 pm Also, I would prefer it if you didn’t try to pin quasi-political stuff to my thoughts. “Pervading force”? That’s a red herring. We are discussing the dynamics of nature. No matter what labels one wishes to use, the fact is that we humans and our works are as much a part of nature as the trees, the mountains and the oceans, and so we (and our works) are likewise subject to natural selection.
By "pervading force" I mean a pervading force of nature, so the discussion remains within the dynamics of nature, nothing quasi-political. But while nature is everything, not everything in nature is just the same, so humans and trees are natural in their own way that is different than mountains and oceans. And they are definitely not subject and conditioned to the exact same processes.
Sy Borg wrote: October 15th, 2024, 3:31 pm
Count Lucanor wrote: October 15th, 2024, 10:54 am Is AI technology (I use the term as generally accepted, although I believe the “intelligence” part is misleading) evolving? Sure, as all technologies. Is it going in the direction of self-replication and self-improvement? Certainly not, not even starting. All new developments are the result of human control of its processes, both in the software and hardware departments. A few theoretical attempts, but no real implementation. 3D printing is not a candidate for that either. When one pays attention, all the hype about the potential of these things comes from the equivocal use of words to build a narrative. Calling 3D printing self-replication is a perfect example, and so is “self-improvement”.
Do you believe that humankind will nuke itself back into the Stone Age before autonomous, self-replicating, self-improving AI can be developed? Or maybe an asteroid, nanobots or germ warfare?
Otherwise, we have many millennia of progression ahead. The only way we won’t develop self-replicating, self-improving AI is if we become a global Idiocracy, which probably seems more likely ATM than it really is.
You think it's only a matter of time and feel optimistic about the accomplishment. I think it's a matter of actual technical capabilities, which have not been demonstrated yet, and of a fundamental flaw in what the current technology pursues, so I would rather wait before jumping on the bandwagon of AI enthusiasts.
Sy Borg wrote: October 15th, 2024, 3:31 pm
Count Lucanor wrote: October 15th, 2024, 10:54 am I’m arguing that self-replicating machines will be possible when we solve the puzzle of how it can be technically done and find the material and human resources to implement it. We haven’t done anything in that direction yet.
I asked an AI about this. I liked its answer.
AI wrote: Experts in the field have varying opinions regarding the timeline for achieving self-replicating and self-improving AI:
1. Optimistic Views: Some researchers believe that with rapid advancements in machine learning techniques and computational power, we could see significant progress within the next few decades—possibly by 2050.
2. Cautious Perspectives: Others argue that while incremental improvements will continue, true AGI—and thus self-replicating capabilities—may take much longer to achieve, potentially extending into the latter half of the 21st century or beyond.
3. Skeptical Outlooks: A segment of experts remains skeptical about whether these technologies will ever be fully realized due to inherent limitations in our understanding of intelligence itself.
I'll take door #3, please.
Sy Borg wrote: October 15th, 2024, 3:31 pm In a way, computers are evolved geology. It all depends on whether one sees humans as being part of nature, or something separate.

And please do not mention teleology again. It’s a red herring that does not apply to this conversation.
I don't need to mention it if you don't characterize the products of nature as being designed, planned.
Sy Borg wrote: October 15th, 2024, 3:31 pm
Count Lucanor wrote: October 15th, 2024, 10:54 am
Sy Borg wrote: October 14th, 2024, 5:16 pm There are two broad possibilities:
a. AI never develops any kind of sentience whatsoever
b. AI develops some kind of sentience.

Logically, any emergent AI sentience will not be the same as biological sentience. It would be shaped by different internal and environmental drivers. Instead of DNA, AI will have schematics. Instead of food, it will have electricity. Instead of emotions, it will have subroutines.

If AI has 3D printing replication capacities, then it could apply random or designed variables to each blueprint. It could experiment with the aim of innovating.
Options A and B appeal to the concept of sentience, which refers to “sentience as we know it”, of which there will be kinds. That’s what you say: a kind of sentience. That inevitably points to sentience of living beings, but then you say that this very kind of new sentience does not belong to the class of sentience of living beings (biological sentience), which is a blatant contradiction.
Yes, some kind of sentience. I see no contradiction. It's got to be different because it's a different type of entity to us meat sacks.
It's an obvious contradiction. A kind of sentience is a kind of property that living beings have, so you cannot then say that this new hypothetical sentience lacks those properties, because then it would not be sentient. Why call it sentience, then? Alternatively, you could say that there's a property called sentience that applies to both non-living and living beings, but then I ask: what is it? What are those properties? I'm waiting for a comprehensive answer.
Sy Borg wrote: October 15th, 2024, 3:31 pm
Count Lucanor wrote: October 15th, 2024, 10:54 am Supposedly, there might be a higher class of sentience under which all the other kinds of sentience fall, but what is it, what are its essential, defining properties as sentience?

So, what makes a non-biological sentience “sentient”, then? If a sentient computer does not do anything that a sentient living being does, why refer to sentience? Why is there a need to resort to that particular term and not any other?
To me, sentience means a sense of internality: sensing your environment, feeling your environment, a sense that your environment matters to you.

Given that it seems that only motile organisms are sentient, moving around in one’s environment seems to be a prerequisite: to seek resources and avoid threats.
But inanimate objects don't feel, have no internal sense, have no interests such that anything matters to them, do not think, do not reason, do not know of threats, do not seek resources. If that's sentience, it clearly points to properties found only in living beings. Back to my question: what is that other sentience, the one that is different from biological sentience but is still sentience?
Sy Borg wrote: October 15th, 2024, 3:31 pm Your analogy does not work. It’s not a matter of getting AI to copy biology. That’s just a game.
If they're not trying to copy biology, why then all the talk that references biology: intelligence, sentience, agency, self-improvement, self-replication, autonomy, etc.?
Sy Borg wrote: October 15th, 2024, 3:31 pm No, self-replicating, self-improving AI will be needed to study and to mine distant objects in space. Until that tech is achieved, it will be impossible to set up the equipment needed to do serious work on other worlds. So, the requisite tech will be developed.
Lacking any evidence, that amounts to an incredible faith in human technical capabilities: if you require it, you'll eventually achieve it, regardless of unsolved current constraints. Anyone who wants to get on that bandwagon can do so. I hope they will not try to convince me to share their unrestrained optimism.
#468950
Count Lucanor wrote: October 16th, 2024, 2:07 pm
Sy Borg wrote: October 15th, 2024, 3:31 pm It's not a matter of insisting, it’s a matter of appreciating that which is truly extraordinary. Yes, I get it. The universe is very big and our solar system is infinitesimal by comparison.

You suggest that I’m jumping the gun with self-replicating AI because it has not yet been realised. Likewise, I can point out that there is zero evidence so far that intelligent life exists elsewhere, despite many years of looking. That makes Earth ostensibly unique.

OTOH, it’s likely that self-replicating, self-improving AI will happen at some stage in deep time, just as it’s likely that, somewhere (or somewhen), intelligent aliens exist.
But while the existence of ETI is a question concerning, among other things, the possibilities of nature (or, if you prefer, something not dependent on human intervention), self-replicating, self-improving AI belongs to a different category: it is entirely dependent on what humans can do. The former can still be said to exemplify the powers of nature left to work on their own; the latter cannot. The argument in favor of the possibility of the first is that it has already happened as an undirected, spontaneous, emergent process. It stays a possibility until positive, empirical evidence is found.

The arguments for the possibility of the second are weak, because it depends on human achievements; it is not reasonable to just sit and wait, expecting it to arise without human direction. Optimism on that side requires immeasurable faith in the endless capabilities of humans. Faith could give way to certainty based on evidence if actual implementations were in progress and if the master plans of the tech wizards had feasible technical solutions to the problems of self-replicating and self-improving AI. Unfortunately, the proclaimed tech wizards do not. They are still flapping bird wings.
Your first error is in thinking that nature can do anything, while humans (whom you do not class as part of nature) have additional limitations.

Humans and their creations ARE nature, as much so as trees, mountains, streams and kangaroos. All of these (and us) are the product of the Pale Blue Dot’s tendency towards equilibrium, a push and pull between entropy and negentropy. Humans, like all species, need to be self-interested to survive, which requires the mental (or reflexive) separation of self and environment. It is that mental separation that leads us to deem things either “natural” or “artificial”; it is not that these divisions are real. We may see ourselves as separate from the environment, but that subjective impression does not reflect the physical reality.

You incorrectly described the process of human invention as “sit and wait”. No, that is what schmucks like us, sitting on the sidelines, are doing. Researchers are actively working towards self-replication. Why would they do that?
AI wrote: The advent of self-replicating AI presents numerous potential applications across various sectors. Here’s a detailed exploration of these uses:
1. Tailored AI Solutions for Specific Tasks
Self-replicating AI can create specialized models that are tailored to perform specific tasks in various fields such as healthcare, engineering, and environmental monitoring. For instance, in healthcare, AI could autonomously design models that analyze patient data to provide personalized treatment plans or predict disease outbreaks based on historical data.
2. Enhanced Learning and Adaptation
These AI systems can learn from the successes and failures of their predecessors, allowing them to evolve rapidly. This capability can lead to more efficient development cycles where new models are continuously improved upon without human intervention. For example, in climate science, self-replicating AI could develop models that adapt to changing environmental conditions more swiftly than traditional methods.
3. Automation of Complex Processes
Self-replicating AI could automate complex processes across industries. In manufacturing, for example, it could design and deploy smaller robots capable of performing specific tasks on assembly lines without human oversight. This would not only increase efficiency but also reduce labor costs.
4. Environmental Monitoring and Conservation Efforts
In conservation efforts, self-replicating AI could be deployed to monitor endangered species or track environmental changes autonomously. These systems could analyze vast amounts of data from sensors placed in natural habitats and adjust their monitoring strategies based on real-time findings.
5. Disaster Response and Management
Self-replicating AI could play a crucial role in disaster response by creating models that predict natural disasters or assess damage after an event occurs. These systems could autonomously gather data from affected areas and deploy smaller drones or robots for search-and-rescue missions.
6. Research and Development Acceleration
In research settings, self-replicating AI can significantly accelerate the pace of innovation by generating new hypotheses or experimental designs based on existing knowledge without requiring human input. This capability can lead to breakthroughs in various scientific fields by exploring avenues that may not have been considered by researchers.
7. Ethical Considerations and Governance Models
As self-replicating AI evolves, it will be essential to develop ethical frameworks and governance models to ensure responsible use. This includes establishing guidelines for transparency, accountability, and bias mitigation in the autonomous design processes.
In summary, the potential uses for self-replicating AI span a wide range of applications that promise enhanced efficiency, adaptability, and innovation across multiple sectors while also necessitating careful consideration of ethical implications.
It’s fair to say that research into self-replication of AI will continue apace because it will be so useful.

As regards self-improvement of AI:
AI wrote: The concept of artificial intelligence (AI) being able to self-improve presents a range of potential applications that could significantly enhance various fields and industries. Below are some of the key uses:
1. Enhanced Problem Solving Capabilities
Self-improving AI systems could develop advanced problem-solving skills that surpass human capabilities. By continuously learning from their experiences and adapting their algorithms, these systems could tackle complex challenges in areas such as climate change, healthcare, and logistics more effectively than static models. For instance, an AI designed to optimize energy consumption in smart grids could learn from real-time data and improve its efficiency over time.
2. Personalized Learning and Education
In the educational sector, self-improving AI could create personalized learning experiences for students. By analyzing individual performance data, the AI could adapt its teaching methods and materials to better suit each learner’s needs. This dynamic approach would allow for continuous refinement of educational strategies, potentially leading to improved student outcomes.
3. Autonomous Systems in Robotics
Self-improvement capabilities in robotics could lead to more autonomous systems that can adapt to new environments or tasks without human intervention. For example, robots used in disaster response scenarios could learn from past missions, improving their ability to navigate complex terrains or identify victims more efficiently during future operations.
4. Advanced Medical Diagnostics
In healthcare, self-improving AI systems could revolutionize medical diagnostics by continuously updating their knowledge base with the latest research findings and patient data. This would enable them to provide more accurate diagnoses and treatment recommendations over time, ultimately improving patient care and outcomes.
5. Cybersecurity Enhancements
Self-improving AI can play a crucial role in cybersecurity by adapting to new threats as they emerge. As cyberattacks become increasingly sophisticated, an AI system capable of self-improvement would be able to learn from previous attacks and develop new defensive strategies autonomously, thereby enhancing overall security measures.
6. Economic Modeling and Forecasting
In economics and finance, self-improving AI can refine predictive models based on real-time market data and historical trends. This capability would allow businesses and policymakers to make better-informed decisions regarding investments, resource allocation, and economic policies.
7. Research Acceleration
Self-improving AI can accelerate research across various scientific disciplines by identifying patterns in large datasets that humans may overlook. By continuously refining its analytical techniques, such an AI could contribute significantly to breakthroughs in fields like drug discovery or climate science.

In summary, the potential for AI systems to self-improve opens up numerous avenues for innovation across multiple sectors by enhancing problem-solving abilities, personalizing experiences, increasing autonomy in robotics, advancing medical diagnostics, strengthening cybersecurity measures, improving economic forecasting accuracy, and accelerating scientific research.
It is clear that self-improving AI would be extremely useful too.

Thus, it’s fair to say that research into self-replication and self-improvement in AIs will continue apace. The idea that neither of these will ever be achieved is simply absurd, given the rate of change in this area (significantly aided by using existing AI).

It’s just a matter of putting the two technologies together – self-replication and self-improvement. This would be invaluable for space exploration and mining.

Once these units are in the field and allowed to evolve as they see fit, they will be reacting to environmental and logistical pressures. If this occurs over deep time, even after humans themselves have gone extinct, it’s hard to imagine that they will always remain non-sentient.

Remember, sentience is not something that suddenly happens. Consider your own journey from embryo to adult. You can’t remember becoming sentient. There would have been a gradual dawning, like a light slowly illuminating everything around you in a world that had previously been pitch black.

Generally, attributes that are useful to persistence (accidentally or otherwise) will emerge. Like I say, AI might choose not to be sentient, seeing it as a handicap. While sentience ostensibly serves biology well, hence the plethora of sentient species, it may not be as useful to entities that calculate probabilities a million times more quickly than humans do.

Emotions emerged because organisms needed to be able to react to events more quickly than they could think them through. To that end, emotions are like subroutines that are called when certain variables appear.
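As a toy sketch of that subroutine analogy (Python; every name here is hypothetical, and it claims nothing about how real organisms or real AI are built): fast reflex handlers fire the moment certain trigger variables appear, and slow deliberation is only the fallback.

def flee(state):
    # Fear-like reflex: react before "thinking it through".
    return "run away"

def fight(state):
    # Anger-like reflex.
    return "confront"

def deliberate(state):
    # The slow, expensive reasoning path.
    return "weigh options carefully"

REFLEXES = [
    (lambda s: s.get("predator_close", False), flee),   # trigger -> subroutine
    (lambda s: s.get("rival_at_food", False), fight),
]

def react(state):
    for trigger, subroutine in REFLEXES:
        if trigger(state):
            return subroutine(state)  # the emotion-like shortcut
    return deliberate(state)          # no trigger fired: think it through

print(react({"predator_close": True}))  # run away
print(react({}))                        # weigh options carefully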

AI, on the other hand, can calculate the steps needed to meet challenges quickly enough not to need the sentience “cheat code”.

Then again, that’s assuming a human standard of engagement. If self-replicating AIs are in a situation where they compete with others for resources or information, the ones that act the most quickly and accurately will proliferate more than their competition.

If such entities become sentient, with new subroutines designed to speed up processing in order to out-compete other entities, it’s possible that their kind of sentience would be too fast for us to detect it as sentience. To us, their rapid-fire calculations leading to interactions (or non-interactions) would seem to be indecipherable activities.
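That competitive dynamic can be sketched as a toy run (Python, invented parameters; an illustration, not a model of any real system): agents that act faster and more accurately capture more resources, resources fund copies, and copies inherit their parent's traits with small variations.

import random

agents = [{"speed": random.uniform(0.1, 1.0),
           "accuracy": random.uniform(0.1, 1.0)} for _ in range(50)]

for generation in range(30):
    # Resource capture is proportional to speed * accuracy.
    agents.sort(key=lambda a: a["speed"] * a["accuracy"], reverse=True)
    survivors = agents[:25]  # the out-competed half drops out
    children = [{k: min(1.0, max(0.0, v + random.gauss(0, 0.05)))
                 for k, v in parent.items()}  # copy with small variation
                for parent in survivors]
    agents = survivors + children

best = max(agents, key=lambda a: a["speed"] * a["accuracy"])
print(round(best["speed"], 2), round(best["accuracy"], 2))  # drifts toward 1.0 1.0

Nothing in the loop is sentient, but the pressure toward speed and accuracy shows up within a few dozen generations.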
#468951
You stole my thunder, Sy Borg. I was writing the following and was about to post it. But that's OK. Your points are worth reiterating.
Count Lucanor wrote: Lacking any evidence, that amounts to an incredible faith in human technical capabilities: if you require it, you'll eventually achieve it, regardless of unsolved current constraints.
But do we really lack ANY evidence? Yes, SRSIMs are still science fiction and will be so for a long time to come. But progress will be made. And it's not as if we haven't already made a start. Just as steam power, internal combustion engines, flying machines, telephones and computers grew out of fundamental scientific research which was then applied to the development of things people imagined and wanted because of their usefulness, so autonomous SRSIMs will be developed because they will be useful tools that will enable us to do things we will want to do, but which we could not do without them.

SRSIMs won’t "feel" and "think" like us because their sensorium, their neural networks and their "thinking" processes will be different from those of meat bags like us. And their “adaptations” won't be via natural selection, ab initio, the way it happened on earth. But, as autonomous, self-replicating, self-improving entities, they would be able to adapt to the environments they encounter as they expand out into the galaxy using local free energy and raw materials. Yes, this is still science fiction. But, again, it wasn't that long ago that telephones and flying machines and computers were science fiction that many thought impossible. But once fundamental research is done, and once applications are imagined and their potential usefulness appreciated, they often get invented. Therefore I don't think we can just write off SRSIMs as simply impossible.
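One reason the "expanding out into the galaxy" part gets taken seriously is simple arithmetic: if each probe can build even one working copy per generation, the population doubles every generation. A back-of-envelope Python sketch (assumed figures, not a forecast):

probes = 1
generations = 0
while probes < 1_000_000_000:  # a billion probes
    probes *= 2                # each probe builds one working copy
    generations += 1
print(generations)             # 30: doublings needed to pass a billion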

If/when autonomous SRSIMs become a reality, and are out “in the wild” and far enough away from us, we may lose control of them and be unable to foresee their continued “evolution”. Their “intelligence” will be different from ours, and it could develop much more quickly than ours did, because they would not be hobbled by the slowness of biological evolution by natural selection or by the physical limitations of biologically housed intelligence. The worry is that, centuries or millennia from now, they may come back to bite us as much more powerful entities than they were when we first sent them out. Whether we’ll want to call them “life-forms” is purely academic. They'll be doing a lot of things that life does.