
Philosophy Discussion Forums
A Humans-Only Philosophy Club

The Philosophy Forums at OnlinePhilosophyClub.com aim to be an oasis of intelligent, in-depth, civil debate and discussion. Topics discussed extend far beyond philosophy and philosophers. What makes us a philosophy forum is more about our approach to the discussions than what subject is being debated. Common topics include, but are absolutely not limited to, neuroscience, psychology, sociology, cosmology, religion, political theory, ethics, and much more.

This is a humans-only philosophy club. We strictly prohibit bots and AIs from joining.


#469438
Lagayascienza wrote: October 31st, 2024, 10:34 pm I agree with physicist David Deutsch who writes that, “The very laws of physics imply that artificial intelligence must be possible.” He explains that Artificial General Intelligence (AGI) must be possible because of the universality of computation. “If [a computer] could run for long enough ... and had an unlimited supply of memory, its repertoire would jump from the tiny class of mathematical functions [as in a calculator] to the set of all computations that can possibly be performed by any physical object [including a biological brain]. That’s universality.”
But again, that's Deutsch assuming that the mind is computational, from which he infers that, given X amount of computational power, an artificial mind will emerge. The problem is that the mind is not a computer, and no one has shown that it is.
Lagayascienza wrote: October 31st, 2024, 10:34 pm
Universality entails that “everything that the laws of physics require a physical object [such as a brain] to do, can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.” (And, perhaps, providing also that it has a sensate body with which to interact with the physical environment in which it is situated.)
That's an argument Searle already dealt with. It must be noted that he is willing to concede that anything, any physical system that goes through some steps, can in theory be simulated on a computer: not only a digital computer, but an analog computer, or even a system of cranks and pulleys with cats and pigeons, as long as the states of the system can be represented in a syntactic structure, which in the case of digital computers is the 1s and 0s. But the fact that something can be represented syntactically, and thus simulated, does not mean that it actually works that way physically. In the words of Searle:
[...] syntax is not intrinsic to physics. The ascription of syntactical properties is always relative to an agent or observer who treats certain physical phenomena as syntactical


If everything is, ultimately, a digital machine that can be replicated with other machines, and if my brain is in that sense a digital machine implementing an algorithm in the same way that my stomach or the Milky Way galaxy is, then what accounts specifically for intelligence in my brain? Saying that the brain, like everything else, is a machine reducible to computational operations does not by itself produce a fact about how the brain operates.
Lagayascienza wrote: October 31st, 2024, 10:34 pm And as Dreyfus says, “if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". Whilst we are nowhere near to building machines of such complexity, if Deutsch, Dreyfus et al are right, which I think they are, then artificial neural networks that produce consciousness must be possible.
Following the laws of physics does not entail following the laws of computation.
Lagayascienza wrote: October 31st, 2024, 10:34 pm It’s hard to see how those who say that the brain is not a computer could be right. That functioning brains “compute” is beyond question. The very word “computer” was first used to refer to people whose job it was to compute. And they computed with their brains. Those who say the brain is not a computer, and that consciousness in a non-biological substrate is impossible, will never be able to say what consciousness is if it does not emerge from processes and states in brains, and nor can they say why it is impossible to produce consciousness in artificial neural networks of the requisite complexity.
All calculators, analog or digital, compute. It is beyond question that they are not functioning brains. Therefore, the best that advocates of the computational theory of mind can argue is that some brain functions require computing (an assumption I would be willing to challenge), but even if that were conceded, it would not explain mind, or consciousness, at all. Notice also that the difference between conscious computing and unconscious computing implies that it is wrong to assume they are exactly the same process in people's brains. The people who used to compute manually were actually using language and visual tools, external to their brains, to do the task consciously, after they had understood the meaning of the mathematical relations.
Lagayascienza wrote: October 31st, 2024, 10:34 pm Even Searle admits that mind emerges from processes in physical brains: “if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". I think that’s right. And I think progress will be made as we identify the actual relationship between the machinery in our heads and consciousness.
Sure, he admits it, and I admit that too: if we ever find a system that replicates brain functions, it will be a system obeying the laws of physics. Is it possible? Theoretically, yes. Technically achievable? We don't know yet, because no such non-computational system has been researched. All research is done with computational devices under the assumptions of the computational theory of mind. And since computational systems cannot solve it, we are stalemated.
Lagayascienza wrote: October 31st, 2024, 10:34 pm There are various objections to the computational theory. However, these objections can be countered. For example, the so-called “Chinese Room” thought experiment, which has attained almost religious cult status among AGI “impossibilists”, can be countered. One response to the “Chinese Room” has been that it is “the system” comprised of the man, the room, the cards etc, and not just the man, which would be doing the understanding, although, even if it were possible to perform the experiment today, it would take millions of years to get an answer to a single simple question.
A very poor argument, I must say. It does not address the main issue, which is that with syntactical operations alone, actions can be performed that resemble those a conscious agent would make, without any actual agency or consciousness involved.
Lagayascienza wrote: October 31st, 2024, 10:34 pm
There are other responses to Searle's overall argument, which is really just a version of the problem of other minds, applied to machines. How can we determine whether they are conscious? Since it is difficult to decide whether other people are "actually" thinking (which can lead to solipsism), we should not be surprised that it is difficult to answer the same question about machines.
Lagayascienza wrote: October 31st, 2024, 10:34 pm
Searle argues that the experience of consciousness cannot be detected by examining the behavior of a machine, a human being or any other animal. However, that cannot be right because, as Dennett points out, natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) cannot be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is possible and consciousness can be detected in artificial neural networks by a suitably designed Turing test – that is, by observing the behaviour and by taking seriously the self-reporting of complex artificial neural networks which will, eventually, be built.
Again, this is the false dilemma fallacy. No, the only path to explaining consciousness as a biological phenomenon is not "artificial neural networks detectable by a Turing test".
Lagayascienza wrote: October 31st, 2024, 10:34 pm
In light of my belief in materialism, and in light of what I have said above (and at the risk of being accused of posing a false dilemma) I am bound to say that, at present, I must accept either that consciousness is a result of computation, or that it is the result of something “spooky”. I don’t believe the latter.

Any plausible account of consciousness will be a materialist, scientific account which shows that consciousness is a result of physiological states and processes. If materialism is true, then how else could consciousness be explained except by physiological processes and states? Since I believe consciousness cannot be otherwise explained, I also believe these physical processes and states must eventually be capable of being reproduced in a non-biological substrate.
It is beyond question that any solution to the problem of artificial intelligence will have to be produced with a physical system, within a materialist, scientific approach, but I find untenable the position that it can only be obtained through computational means, and by taking for granted that the biological mind is a digital computer. So materialism holds true, even when we reject the current AI program as a candidate for achieving real AI.
Favorite Philosopher: Umberto Eco Location: Panama
#469439
The Beast wrote: November 1st, 2024, 12:16 pm So far, everything that is transferable to machines is being done. If in the future more is transferable, to the point where there is no differentiable substance, then there is one substance, and the substance is intelligent.
Matter has properties and potential properties. Aristotle named Techne as the potential in the DNA. Techne has a form and in human Techne it is human form as in elements is their combination properties. Artificial neural networks are software in a medium. It is a virtual simulation of some brain function that is good at weighting evidence… So, another axiomatic possibility (of what is) that there is one substance with techne (virtual intelligence). If matter is at the level of particles, then virtual is bosonic. If matter is defined as bosonic then Techne is a virtual dimensional unknown… or not (tachyons).
#469445
Sy Borg wrote: October 31st, 2024, 4:23 pm
Sculptor1 wrote: October 31st, 2024, 12:25 pm
Pattern-chaser wrote: October 31st, 2024, 8:14 am
Sculptor1 wrote: October 30th, 2024, 1:22 pm After some time you realise that AI is a sophisticated encyclopaedia, rather than an intellect.
Yes, I think of them as super-Google, but that amounts to the same thing. And yet, as others have commented, that's just today. There is a lot of work going on, and we don't know what the future holds. Maybe it even holds intelligent AI? We'll have to wait and see.


But I wonder if we're implementing AI too widely already? Many major websites now use AI as their 'help' function. No matter how much you want to, you can't get past it, to reach a human. There *are* no humans involved any more. So if you have a significant help request, you will receive no help, and there is no way to get any. This is a trivial complaint, not world-breaking at all. But it *is* a bit annoying... 😐
I do not hold with any "what about the future" arguments.
If they were to be believed, we would have had hotels on the Moon and Mars by now, just based on the 1970s space programme.
That's like saying that newborn Johnny will be a doctor when he grows up, but deciding that that's impossible because he is already three years old and has still not shown signs of medical competence.
Not really.
It's more like saying that little Johnny is not going to be a wizard capable of turning back a Balrog.
#469447
Count Lucanor wrote: November 1st, 2024, 3:02 pm It is beyond question that any solution to the problem of artificial intelligence will have to be produced with a physical system, within a materialist, scientific approach, but I find untenable the position that it can only be obtained through computational means, and by taking for granted that the biological mind is a digital computer. So materialism holds true, even when we reject the current AI program as a candidate for achieving real AI.
Count Lucanor, I have never said that the brain is a “digital” computer. All I have said is that it computes. What we know is that it is a network of neurons which receive input, process that input, and generate output, and that consciousness is produced as a result of this system. When we shut this process down, consciousness ceases.

The literature on this topic is vast and varied, and there is no consensus. The “impossibilists” and Mysterians rail against the very idea of artificial intelligence and consciousness while, for their part, materialist computationalists continue to point out why it must be possible. The fundamental question is: if consciousness is not produced by some sort of computation, then how is it produced? You say that brains are not biological computers. But if they are not computing, then what are they doing when they receive input and generate output? And if it is not this process which produces consciousness, then what other process in brains does produce consciousness?

The bottom line for me is that either consciousness is a result of the physical processes and states in the biological computers we call brains, or it is not, in which case we have no explanation at all for consciousness except a “spooky” or supernatural explanation. There being no evidence for the latter, I must go with the former.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469448
Steve3007 wrote: November 1st, 2024, 8:40 am
Many things, if not almost anything, can potentially be simulated with computer software, from the laws of motion that had to be simulated in the old Gorilla.BAS game to the simulation of hurricanes and other complex systems. But of course, that's because we understand to a great extent the forces and parameters involved, unlike mental processes, of which we know very little.
Well, I'd say it depends on the level of understanding you're referring to. (Sorry if this part is a bit long but bear with me):

Yes, we can simulate complex physical systems using our knowledge of the laws of physics which we've formulated to describe them. I've done it myself with several kinds of systems. My dissertation project for this AI Masters thing that I've just finished involved using numerical solutions of the Navier-Stokes equations (physics equations describing the behaviour of fluids) to simulate fluid flow around various types of obstacles (with complex patterns of turbulence, vortices and so on emerging from the simulation) and then training an artificial neural network (ANN) to be able to predict that fluid flow without needing to use the equations. […]
So, yes, we understand quite well the basic physics of (for example) fluid flow. But we don't necessarily understand the complex behaviour that emerges when we apply that understanding (in the form of equations predicting the velocity and pressure of small elements of fluid) en masse to very large numbers of fluid elements over many time steps. In physics, macroscopic behaviours sometimes seem to take on a life of their own, with whole new phenomena emerging which don't meaningfully exist in the microscopic forces that add up to create that macroscopic world; phenomena that are only meaningful as statistical properties of large systems. A classic example is the time-directionality of physical processes emerging in the laws of thermodynamics, which are themselves macroscopic, statistical laws derived from underlying laws describing the ways that countless molecules bounce off each other, underlying laws which do not themselves have that time-directionality.
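The surrogate-modelling workflow described above (run a physics solver, then fit a model to its output so that predictions no longer need the equations) can be sketched in miniature. This is a hypothetical toy, not the dissertation code: a hand-rolled Euler integrator for projectile height stands in for the Navier-Stokes solver, and a two-parameter least-squares fit stands in for the neural network.

```python
# Toy analogue of the surrogate-modelling workflow: generate data from
# a physics simulation, then fit a model that predicts the outcome
# without re-running the simulation. Illustrative only.

G = 9.81  # gravitational acceleration, m/s^2

def simulate_height(v0, t, dt=0.001):
    """Numerically integrate dy/dt = v, dv/dt = -G (Euler method)."""
    y, v = 0.0, v0
    for _ in range(int(round(t / dt))):
        y += v * dt
        v -= G * dt
    return y

# Generate training data by running the simulator.
data = [(v0, t, simulate_height(v0, t))
        for v0 in (5, 10, 15, 20) for t in (0.2, 0.5, 1.0)]

# Fit y ~ a*(v0*t) + b*(t^2) by least squares, solving the 2x2
# normal equations by hand so no libraries are needed.
s11 = s12 = s22 = r1 = r2 = 0.0
for v0, t, y in data:
    f1, f2 = v0 * t, t * t
    s11 += f1 * f1; s12 += f1 * f2; s22 += f2 * f2
    r1 += f1 * y; r2 += f2 * y
det = s11 * s22 - s12 * s12
a = (r1 * s22 - r2 * s12) / det
b = (r2 * s11 - r1 * s12) / det

# The fit recovers the physics (a close to 1, b close to -G/2), and the
# fitted model now predicts heights without stepping the simulation.
predict = lambda v0, t: a * v0 * t + b * t * t
```

The fitted coefficients converge on the closed-form law the simulator implicitly encodes, which is the toy version of the claim: the learned predictor reproduces the solver's behaviour without using the equations at prediction time.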
I think I know what you mean here. All systems operate under a combination of deterministic forces (such as physical laws) and stochastic, randomly determined processes. Stochasticity, in the end, just means “too many variables and too much complexity” for humans to predict. That’s what the universe is all about.
Steve3007 wrote: November 1st, 2024, 8:40 am
In the same way, some people propose that intelligence/sentience/etc is a phenomenon that emerges when you wire billions of neurons together in complex ways. It would be meaningless to say that an individual neuron has sentience, analogously to the way that it's meaningless to say an individual molecule has thermodynamic temperature and pressure.

So what's my point in saying all this in reply to that passage from you?

Well, my point is that it might (just might) be possible in principle to understand the laws of physics and chemistry relevant to the workings of neurons enough to simulate them in software, so that when they are connected together in numbers that are comparable to biological brains something emerges which we might call a mental process about which we still know very little. You don't need to understand how those mental processes - those complex interactions of billions of neurons - work in order to create simulations like that. Just as you don't need to understand the complexities of turbulent flow in order to create a model in which loads of little elements of fluid exert pressure on their immediate neighbours.
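The idea in that passage can be made concrete with a deliberately crude sketch: each simulated "neuron" below follows only simple local rules (a leaky integrate-and-fire model, a standard textbook simplification), yet the network-level firing pattern is written nowhere in those rules; it emerges from the wiring. All parameters are illustrative, not biologically calibrated.

```python
# Minimal leaky integrate-and-fire network: local rules only,
# network-level activity emerges. Parameters are illustrative.
import random

random.seed(1)

N = 50            # number of simulated neurons
THRESHOLD = 1.0   # membrane potential at which a neuron fires
LEAK = 0.9        # per-step decay of membrane potential
WEIGHT = 0.12     # excitation delivered to each downstream neuron

# Random sparse wiring: each neuron excites 5 others.
targets = [random.sample(range(N), 5) for _ in range(N)]

# Start a few neurons above threshold to seed activity.
potential = [1.2 if i < 5 else random.random() * 0.5 for i in range(N)]

spike_counts = [0] * N
for step in range(200):
    fired = [i for i in range(N) if potential[i] >= THRESHOLD]
    for i in range(N):
        potential[i] *= LEAK          # local rule: leak toward rest
    for i in fired:
        potential[i] = 0.0            # local rule: reset after firing
        spike_counts[i] += 1
        for j in targets[i]:
            potential[j] += WEIGHT    # local rule: excite neighbours
    for i in range(N):                # weak random background input
        potential[i] += random.uniform(0.0, 0.1)

total_spikes = sum(spike_counts)
```

Nothing in the loop mentions cascades or population bursts, yet the spike counts typically show uneven, cascade-driven activity: a macroscopic pattern that exists only at the network level, analogous to the statistical phenomena discussed above.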
I think it would be appropriate to make a distinction, at least temporarily, between non-biological systems governed only by the laws of physics, such as fluids, hurricanes, and so on, and biological systems with additional emergent properties. Ultimately, both are affected by stochastic processes, but that doesn’t put them at the same level. You still need to understand the fundamental forces operating on them to be able to construct a good model. When it’s about fluids, hurricanes, earthquakes, atoms moving, etc., we have it more or less sufficiently covered. When it comes to biological systems, a different layer of complexity is added, but we can say we have made pretty good advances and will most likely gain more territory (surely with the help of computers).

But when it comes to understanding, in biological systems, the governing principles of the physical processes in the particular subdomain of consciousness, we are still scratching the surface. We know all the anatomical gear that is necessary, and we know the functions of each component in the system, but still we don’t know how it produces qualia. Is it through syntactical, algorithmic operations of binary oppositions, like computers? No one has demonstrated such a thing, and the fact that computers can simulate things that look, externally, like signs of consciousness doesn’t prove that consciousness (or intelligence, if you wish) is produced that way. If the hypothesis is that, regardless of the fundamental physical processes and the absence of the biological gear, a large number of operations working stochastically will produce such a complex and dynamic system that consciousness emerges from it, that hypothesis still lacks empirical confirmation.

Notice that we are thinking here of the brain as an isolated system, but all brains that we know of are part of a larger system comprising the organism as a whole plus the environment in which it moves and with which it interacts. If we are to emulate (or simulate) conscious experience, we would need to replicate much more than firing neurons: the mechanisms of the whole body, following the principle of embodied cognition and rejecting the implicit dualistic model of the brain as the centre of the self, just casually encased in a mechanical body. This pernicious belief is what makes it possible for AI enthusiasts to claim they can achieve consciousness with any other physical substrate, once it is granted that a simulated neural mechanism is in place.
Steve3007 wrote: November 1st, 2024, 8:40 am
The computational theory was at some point a reasonably good shot, but experience has shown that it isn't anymore. What's being simulated currently in AI is not the physical process that occurs in the brain, but merely some of its effects in language processing and mathematical calculation, once the corresponding syntax has been translated into a computer language.
I disagree with this bit. As I said before, the computer simulations of neurons are extremely simplistic and don't capture anything close to all of the physical properties of neurons, but they are simulations of interconnected neurons, even if simplistic. Just as a computational solution of the Navier-Stokes equations is a simulation of physical fluid flow. All simulations, by their nature, are incomplete. All models in physics, by their nature, are incomplete. But they can be made to get arbitrarily close to completeness.
But the problem is not only the complexity of the system to be simulated; it is the nature of the process itself: syntactical, algorithmic operations are not equivalent to semantic processes.
Favorite Philosopher: Umberto Eco Location: Panama
#469449
Steve3007 wrote: November 1st, 2024, 9:38 am Continued from where I left off, answers to Count Lucanor:
Count Lucanor wrote:Simulation and replication are two different things. The latter requires some actual physical activity, not just virtual.
Well, all simulations involve at least some physical activity or else we wouldn't even know they were happening. But there's no reason why they couldn't involve more.
OK, you are right, I should have said: virtual simulations and replications are two different things. It still remains true that simulations are not replications. I saw a street puppeteer the other day; he was doing an awesome simulation of a famous artist, and it almost looked as if it were the real person. No one thinks, though, that people’s behavior is produced by strings attached to sticks.
Steve3007 wrote: November 1st, 2024, 9:38 am
Count Lucanor wrote:
Steve3007 wrote:Yes, of course it takes all kinds of human activity to maintain the hardware of a computer network on which the software runs. But my point in that passage was that if an AI distributed across the internet were possible, then it could harm human interests simply because, as I said, the cure might be as bad as the problem. As I said, the world's economy is now so dependent on this technology for such things as the logistics of food distribution (and almost everything else) that we couldn't just "pull the plug". Saying that hypothetical AI cannot harm human interests without the participation of humans is a bit like saying a cancer or a virus can't harm your body without your participation. You're right. It can't. If you refuse to participate by "switching off" that body on which the pathogen relies for its survival, then you kill it. But that's not much consolation for you!
The objection I have to that analogy is that in the case of sickness one is mostly a passive recipient (even if the illness may be a consequence of consciously practiced bad habits), whereas in the hypothetical AI scenario we are talking about active participation and leadership. Not only that, but also a complex social environment of cooperation and power struggles, where personal actions imply assessments and conscious decisions.
I'm not sure I understand your point there. My point was about the hypothetical situation of an AI distributed across ("infecting") the internet, such that parts of it could exist in any computer hardware connected to that network. When you said that hypothetical AI cannot harm human interests without the participation of humans my reply was that even though this is true it doesn't help us for the reasons I gave.
My point is that an infection is something that happens to you, while the AI scenario in discussion is the result of deliberate, conscious human actions.
Favorite Philosopher: Umberto Eco Location: Panama
#469450
Sculptor1 wrote: November 1st, 2024, 7:55 pm
Sy Borg wrote: October 31st, 2024, 4:23 pm
Sculptor1 wrote: October 31st, 2024, 12:25 pm
Pattern-chaser wrote: October 31st, 2024, 8:14 am
Yes, I think of them as super-Google, but that amounts to the same thing. And yet, as others have commented, that's just today. There is a lot of work going on, and we don't know what the future holds. Maybe it even holds intelligent AI? We'll have to wait and see.


But I wonder if we're implementing AI too widely already? Many major websites now use AI as their 'help' function. No matter how much you want to, you can't get past it, to reach a human. There *are* no humans involved any more. So if you have a significant help request, you will receive no help, and there is no way to get any. This is a trivial complaint, not world-breaking at all. But it *is* a bit annoying... 😐
I do not hold with any "what about the future" arguments.
If they were to be believed, we would have had hotels on the Moon and Mars by now, just based on the 1970s space programme.
That's like saying that newborn Johnny will be a doctor when he grows up, but deciding that that's impossible because he is already three years old and has still not shown signs of medical competence.
Not really.
It's more like saying that little Johnny is not going to be a wizard capable of turning back a Balrog.
:lol:

We will have to agree to disagree.

Still, AI has barely started. It's probably not even little Johnny yet, more like an embryo. ChatGPT was released in 2022. AI's potential is far beyond our comprehension, just as the internet would be far beyond the comprehension of ancient Egyptians.
#469452
Sy Borg wrote: November 2nd, 2024, 1:53 am
Sculptor1 wrote: November 1st, 2024, 7:55 pm
Sy Borg wrote: October 31st, 2024, 4:23 pm
Sculptor1 wrote: October 31st, 2024, 12:25 pm

I do not hold with any "what about the future" arguments.
If they were to be believed, we would have had hotels on the Moon and Mars by now, just based on the 1970s space programme.
That's like saying that newborn Johnny will be a doctor when he grows up, but deciding that that's impossible because he is already three years old and has still not shown signs of medical competence.
Not really.
It's more like saying that little Johnny is not going to be a wizard capable of turning back a Balrog.
:lol:

We will have to agree to disagree.

Still, AI has barely started. It's probably not even little Johnny yet, more like an embryo. ChatGPT was released in 2022. AI's potential is far beyond our comprehension, just as the internet would be far beyond the comprehension of ancient Egyptians.
No need to disagree. You are just wrong. In our case there are many examples of little Johnny growing up to be a doctor.
And no, you cannot say that about an Egyptian; you just do not know what they could grasp, given the right tuition.
And it is utterly irrelevant. It does not advance your argument.
#469453
Lagayascienza wrote: November 1st, 2024, 9:44 pm
Count Lucanor wrote: November 1st, 2024, 3:02 pm It is beyond question that any solution to the problem of artificial intelligence will have to be produced with a physical system, within a materialist, scientific approach, but I find untenable the position that it can only be obtained through computational means, and by taking for granted that the biological mind is a digital computer. So materialism holds true, even when we reject the current AI program as a candidate for achieving real AI.
Count Lucanor, I have never said that the brain is a “digital” computer. All I have said is that it computes. What we know is that it is a network of neurons which receive input, process that input, and generate output, and that consciousness is produced as a result of this system. When we shut this process down, consciousness ceases.
It was not necessary that you said “digital”, it was implicit. You have subscribed to the computational theory of mind, which relies on the architecture of modern computers. Neural networks, which you seem to posit as good candidates for emulating brains, are simulated neurons in modern computers. And all modern computational devices are digital. When Searle rejects the computational theory of mind, he’s also referring to digital computers.
Lagayascienza wrote: November 1st, 2024, 9:44 pm The literature on this topic is vast and varied, and there is no consensus. The “impossibilists” and Mysterians rail against the very idea of artificial intelligence and consciousness while, for their part, materialist computationalists continue to point out why it must be possible. The fundamental question is: if consciousness is not produced by some sort of computation, then how is it produced? You say that brains are not biological computers. But if they are not computing, then what are they doing when they receive input and generate output? And if it is not this process which produces consciousness, then what other process in brains does produce consciousness?
The answer is: we don’t really know, but we can reasonably assert that it is not by computation, that is, syntactical procedures, and we can also confidently say that all that happens, does not happen in an isolated brain, but in an organism as a whole.
Lagayascienza wrote: November 1st, 2024, 9:44 pm The bottom line for me is that either consciousness is a result of the physical processes and states in the biological computers we call brains, or it is not, in which case we have no explanation at all for consciousness except a “spooky” or supernatural explanation. There being no evidence for the latter, I must go with the former.
That’s the false dilemma fallacy, restated.
Favorite Philosopher: Umberto Eco Location: Panama
#469454
Manufacturing of 3D neuro-spheroids has been achieved. Independently of their biocontainment, the spheroids could be from specific regions yielding distinctive coding mechanisms. The analysis of the coding mechanism has uncovered a codemaker. Whether the codemaker is influenced by the mathematical network is (in forum words) a mystery. The mysterious neural network medium is a neural network applying semiosis (neural communication). Machine language?
#469455
Sculptor1 wrote: November 2nd, 2024, 5:55 am
Sy Borg wrote: November 2nd, 2024, 1:53 am
Sculptor1 wrote: November 1st, 2024, 7:55 pm
Sy Borg wrote: October 31st, 2024, 4:23 pm
That's like saying that newborn Johnny will be a doctor when he grows up, but deciding that that's impossible because he is already three years old and has still not shown signs of medical competence.
Not really.
It's more like saying that little Johnny is not going to be a wizard capable of turning back a Balrog.
:lol:

We will have to agree to disagree.

Still, AI has barely started. It's probably not even little Johnny yet, more like an embryo. ChatGPT was released in 2022. AI's potential is far beyond our comprehension, just as the internet would be far beyond the comprehension of ancient Egyptians.
No need to disagree. You are just wrong. In our case there are many examples of little Johnny growing up to be a doctor.
And no, you cannot say that about an Egyptian; you just do not know what they could grasp, given the right tuition.
And it is utterly irrelevant. It does not advance your argument.
Nope, you are obviously wrong (yet again), so certain without cause that AI will always be more or less like it is today. Ancient Egyptians obviously would be amazed and nonplussed by the Internet; people found the internet strange and uncanny in the 1990s, let alone 4,000 years ago.

Life and humans today are not a peak of nature that can never be surpassed. There is a possible future that involves progress, not just decay and destruction of current forms, reverting to simpler ones. Emergence is real, the story of the last 4.6 billion years. Just because something doesn't happen in one's lifetime does not mean it won't happen at all.
#469457
Sy Borg wrote: November 2nd, 2024, 7:10 pm
Sculptor1 wrote: November 2nd, 2024, 5:55 am
Sy Borg wrote: November 2nd, 2024, 1:53 am
Sculptor1 wrote: November 1st, 2024, 7:55 pm
Not really.
It's more like saying that little Johnny is not going to be a wizard capable of turning back a Balrog.
:lol:

We will have to agree to disagree.

Still, AI has barely started. It's probably not even little Johnny yet, more like an embryo. ChatGPT was released in 2022. AI's potential is far beyond our comprehension, just as the internet would be far beyond the comprehension of ancient Egyptians.
No need to disagree. You are just wrong. In our example we have many instances of little Johnny growing up to be a doctor.
And no, you cannot say that about an Egyptian. You just do not know what he might have grasped, given the right tuition.
And it is utterly irrelevant. It does not advance your argument.
Nope, you are obviously wrong (yet again), so certain without cause that AI will always be more or less like it is today. Ancient Egyptians obviously would be amazed and nonplussed by the Internet. People found the internet strange and uncanny in the 1990s, let alone how it would have seemed 4,000 years ago.

Life and humans today are not a peak of nature that can never be surpassed. There is a possible future that involves progress, not just decay and destruction of current forms, reverting to simpler ones. Emergence is real, the story of the last 4.6 billion years. Just because something doesn't happen in one's lifetime does not mean it won't happen at all.
By the same token that an Egyptian could not have predicted a computer or a car, you cannot predict the sort of "progress" that will occur in the matter of AI.
You are just shooting the breeze.
An Egyptian might have dreamed about travelling to the underworld or stepping into the sun to meet Ra. But we know that you can never land on the sun.
There is no prospect that AI will be intelligent; it could be as likely as landing on the sun. You can have as much progress as you like, but some things remain impossible.
You cannot breathe a vacuum.
#469458
Sculptor1 wrote: November 1st, 2024, 7:55 pm It's more like saying that little johnny is not going to be a wizard capable of turning back a Balrog.
Not everyone has the skills to be a wizard. If Johnny is such a person (one without those skills), then wizardry is not really an option for him. The real world is like that...
#469472
Sculptor1 wrote: November 2nd, 2024, 8:08 pm
Sy Borg wrote: November 2nd, 2024, 7:10 pm
Sculptor1 wrote: November 2nd, 2024, 5:55 am
Sy Borg wrote: November 2nd, 2024, 1:53 am

:lol:

We will have to agree to disagree.

Still, AI has barely started. It's probably not even little Johnny yet, more like an embryo. ChatGPT was released in 2022. AI's potential is far beyond our comprehension, just as the internet would be far beyond the comprehension of ancient Egyptians.
No need to disagree. You are just wrong. In our example we have many instances of little Johnny growing up to be a doctor.
And no, you cannot say that about an Egyptian. You just do not know what he might have grasped, given the right tuition.
And it is utterly irrelevant. It does not advance your argument.
Nope, you are obviously wrong (yet again), so certain without cause that AI will always be more or less like it is today. Ancient Egyptians obviously would be amazed and nonplussed by the Internet. People found the internet strange and uncanny in the 1990s, let alone how it would have seemed 4,000 years ago.

Life and humans today are not a peak of nature that can never be surpassed. There is a possible future that involves progress, not just decay and destruction of current forms, reverting to simpler ones. Emergence is real, the story of the last 4.6 billion years. Just because something doesn't happen in one's lifetime does not mean it won't happen at all.
By the same token that an Egyptian could not have predicted a computer or a car, you cannot predict the sort of "progress" that will occur in the matter of AI.
You are just shooting the breeze.
An Egyptian might have dreamed about travelling to the underworld or stepping into the sun to meet Ra. But we know that you can never land on the sun.
There is no prospect that AI will be intelligent; it could be as likely as landing on the sun. You can have as much progress as you like, but some things remain impossible.
You cannot breathe a vacuum.
You don't seem to understand that AI does not need to turn into biology to be intelligent. It is already intelligent in certain applications. We can quibble about the definition of "intelligence" but, based on how the term was interpreted before ChatGPT, AI today is absolutely intelligent. In its own limited way, it understands what is being asked, even when significant mistakes are made in typing the questions.

As for a further claim that it will be sentient, the only thing that can stop it from becoming sentient is the elimination of modern human civilisations before truly autonomous units are created. Truly autonomous units will certainly be created for space exploration - unless the world blows up.

Sentience appears to be useful. When different autonomous AIs have projects where conflicts of interest occur, conditions will emerge in favour of a new sentience evolving. It will take a long time (as evolution does) but, by your line of reasoning, an observer of life 3 billion years ago would have to conclude that human sentience could never occur because it did not already exist.
