Sy Borg wrote: ↑October 9th, 2024, 7:03 pm
Count Lucanor wrote: ↑October 9th, 2024, 4:17 pm
That 0.0000001% of anything is insignificant to the 99.9999999% left, except to that 0.0000001% itself. Don't get me wrong: when I was accused of anthropocentrism because of the exceptionalism of human life, I was well aware of what such exceptionalism implies from a human point of view, even though the idea was dismissed as a triviality in the greater scheme of things.
Percentages are only one factor. The ventrolateral frontal cortex weighs perhaps 40-60 grams. That's less than a thousandth of total human mass, yet at that scale it is critical to our distinctly human consciousness.
Percentages just show differences in magnitude; it's about scale, the radius of influence. That is key for something to have an effect on something else. Can we say that the ventrolateral frontal cortex of my neighbor has any influence on the ventrolateral frontal cortex of a retired man in the Swiss Alps? Definitely not. Now imagine trillions of ventrolateral frontal cortices separated by the same distances. No matter how special my neighbor's cortex might be, it is completely irrelevant in the larger context.
Sy Borg wrote: ↑October 9th, 2024, 7:03 pm
But it's not just human life. Imagine the thrill if we found simple tube worms around hydrothermal vents on Europa. Or even bacteria. The Earth's exceptional qualities extend far beyond just humans. Earth is so much more alive than other worlds around us that there is no competition.
But human life is just part of life in general, which is confined to Earth anyway, dependent on its capacity to host those organic processes. That capacity obviously does not extend beyond Earth, where those life-harboring qualities are lost.
Sy Borg wrote: ↑October 9th, 2024, 7:03 pm
Count Lucanor wrote: ↑October 9th, 2024, 4:17 pm
Something not happening does not increase the chances of ever happening. When talking about technology, we are not merely constrained by chances, but by actual technical feasibility.
Again, you are thinking in terms of today whereas I am thinking in terms of deep time. Today means nothing in context, akin to judging a baby's potential based on its current achievements.
The analogy does not apply. Potentiality is not so easily reducible to time frames. A baby is just a human at a given stage of development; their potential as a human individual is entirely determined by their innate capacities plus their behavior and the contingencies of the environment. In other words, a man can be thought of as a system set in motion with initial conditions; as multiple interactions take place, conditions change, and so many things can then happen in the future that we can call it undetermined. But we can surely define limits based only on the initial conditions and the experiences that follow: we know that his legs will not allow him to run as fast as a cheetah, that he will never grow to a height of 10 meters, nor will he see with his eyes like the JWST, nor will he be in two places at the same time, and so on. There are variables, but they are not infinite or limitless. So the argument "anything is possible, given enough time" is false. You can see the potential of things, considering their limits, and then make some reasonable predictions.

We don't see that in those who predict that AI technology will develop into something that resembles life (as per Lagayscienza's definition: "some combination of energy use, growth, reproduction, response to stimuli, complex goal directed behaviours and adaptation to the environment originating from within...") in autonomous, independent, conscious beings, imbued with volition and social mechanisms of interaction, constituting their own social domain, so that they would control the domain of human culture and even replace humans. Nothing in the initial conditions and inherent capabilities of what is called AI technology, including robotics, points to that future possibility. A technology might appear tomorrow that would make it reasonable to expect such an outcome, but until then, all we have is the wishful thinking and enthusiasm of the sci-fi industry and futurologists.
Sy Borg wrote: ↑October 9th, 2024, 7:03 pm
Count Lucanor wrote: ↑October 9th, 2024, 4:17 pm
The trick of the mind is to think that internet was predestined to exist. It didn't appear as the inevitable expression of a greater scheme that unfolds in human or natural history.
Predestination is not the point, and also not my claim. I'm not claiming to know what will happen in the future, but the idea that humans as they stand are the ultimate expression of sentience - that no greater sentience is possible than humans - strikes me as absurd, given that we are still just chaotic apes with a gift for inventiveness. To think that evolution stops with us makes no sense to me. Chances are that evolution will continue and, given the usefulness of sentience, it's hard to imagine self-replicating, self-improving machines never achieving sentience - not in a thousand years, not in a million years, not in a billion years.
Since we have no reference for life, sentience, intelligence, agency and social power other than those derived from the behavior of organic matter, we cannot just make up new ones out of the blue and try to predict anything. The belief that mere computational power could achieve any of these things strikes me as absurd, too. It's like expecting that, given enough time and resources, the machine in the Chinese Room thought experiment could eventually understand Chinese, decide to break out, and lead a revolution to topple all the world's governments. That's how absurd it looks right now.
Sy Borg wrote: ↑October 9th, 2024, 7:03 pm
Count Lucanor wrote: ↑October 9th, 2024, 4:17 pm
Sy Borg wrote: ↑October 8th, 2024, 10:25 pm
Your argument reminds me of those of creationists who claim that evolution is impossible. Like you, they think in terms of human life spans and profoundly underestimate deep time.
Evolution is a process for which we have seen more than enough evidence. We don't need guessing and theorizing about its possibility. But thinking of it teleologically, as something predetermined to exist following some inherent necessity of the universe, a primal cause, is certainly a mistake.
No need for teleology. That's a red herring. The necessity is not that of the universe but of the subjects. Either sentience is a useful adaptation for highly intelligent entities or it is a manifestation of God. Take your pick.
The issue is whether we can assess possibilities based on real-world scenarios or on mere gambling speculation. My point is that, unlike Creationists, I base mine on the actual evidence at hand, which shows purposeless, contingent processes. If that stance is countered with the idea that nature is purpose-driven, so that what has not happened yet eventually will, moved by that higher purpose, I call that teleology, notwithstanding the theological implications.
Sy Borg wrote: ↑October 9th, 2024, 7:03 pm
Natural selection is not accidental. If an adaptation is potent, then it will continue to be selected. Vision is a good example. Early on, all life was blind. Over time, eyes have evolved independently about forty times, making clear how useful it is to be able to detect light. Sentience too has proved useful, with a multitude of sentient animals. Sapience, which is basically sentience with time awareness, has proved to be extremely potent. It appears that the reasons why it's not more common are:
1) Homo sapiens out-competed other hominids
2) Big brains take a lot of energy, so great success in resource gathering is needed.
I can argue that natural selection is the product of contingencies, so if we started all over again, life would not have turned out exactly the same, perhaps far from it. But that's beside the point: natural selection exists, though not as a fundamental force with a permanent presence transcending organic matter.
Sy Borg wrote: ↑October 9th, 2024, 7:03 pm
Sapience, which is basically sentience with time awareness, has proved to be extremely potent. It appears that the reasons why it's not more common are:
1) Homo sapiens out-competed other hominids
2) Big brains take a lot of energy, so great success in resource gathering is needed.
Presumably, neither of these limits will apply to self-replicating, self-improving machines (SRSIMs to save my arthritic fingers). What we don't know is how machine sapience can produce sentience.
I get it. I don't think machines are going to start becoming emotional, or replicating biological feelings. The sentience I'm referring to is not what we feel, or what our dogs feel. More likely, some equivalent meta-awareness will emerge, because that's how nature works. Certain thresholds are reached, something breaks, and that results in the emergence of new features. If the features aid survival, they are called adaptations.
But natural selection is a process of organic, living matter. Extending it to the world in general, as if it were some fundamental force that keeps acting on inorganic matter to produce sentience again, even a new non-organic sentience, is not justified by any evidence we have available now. What we do have evidence for is machines doing nothing more than what humans decide they should do for human benefit. If we want to call that "nature working", fine, but it is necessarily channeled through the abilities of humans, not bypassed by an independent "force" of natural selection.