The Philosophy Forums at OnlinePhilosophyClub.com aim to be an oasis of intelligent, in-depth, civil debate and discussion. Topics discussed extend far beyond philosophy and philosophers. What makes us a philosophy forum is more about our approach to the discussions than what subject is being debated. Common topics include, but are absolutely not limited to, neuroscience, psychology, sociology, cosmology, religion, political theory, ethics, and so much more.
This is a humans-only philosophy club. We strictly prohibit bots and AIs from joining.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm I agree with physicist David Deutsch who writes that, “The very laws of physics imply that artificial intelligence must be possible.” He explains that Artificial General Intelligence (AGI) must be possible because of the universality of computation. “If [a computer] could run for long enough ... and had an unlimited supply of memory, its repertoire would jump from the tiny class of mathematical functions [as in a calculator] to the set of all computations that can possibly be performed by any physical object [including a biological brain]. That’s universality.”

But again, that's Deutsch assuming that the mind is computational, from which he infers that, given X power of computation, an artificial mind will emerge. The problem is: the mind is not a computer, and no one has shown that it is.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm

That's an argument Searle already dealt with. It must be noted that he's willing to concede that, theoretically, anything, any physical system that goes through some steps, can be simulated on a computer: not only a digital computer, but an analog computer, or even a system of cranks and pulleys with cats and pigeons, as long as states of the system can be represented in a syntactic structure, which in the case of digital computers is the 1s and 0s. But that something can be represented syntactically, and thus simulated, does not mean that it actually works physically that way. In the words of Searle:
Universality entails that “everything that the laws of physics require a physical object [such as a brain] to do, can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.” (And, perhaps, provided also that it has a sensate body with which to interact with the physical environment in which it is situated.)
[...] syntax is not intrinsic to physics. The ascription of syntactical properties is always relative to an agent or observer who treats certain physical phenomena as syntactical
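Deutsch's universality thesis quoted above can be given a toy illustration: a single general-purpose interpreter can emulate any machine that is handed to it as data. The sketch below is only a hypothetical, minimal Turing-machine simulator (the function name and rule table are invented for illustration, not taken from Deutsch); it runs a binary-increment machine, and the same interpreter would run any other rule table you gave it.

```python
# Toy illustration of computational universality: one general-purpose
# interpreter, fed a machine description as data, emulates that machine.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Interpret any machine described by `rules` on the given tape."""
    tape = dict(enumerate(tape))            # sparse tape, "_" means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# One specific machine: binary increment, head starting on the rightmost bit.
increment_rules = {
    ("start", "1"): ("0", "L", "start"),    # carry the 1 leftward
    ("start", "0"): ("1", "L", "halt"),     # set the bit and finish
    ("start", "_"): ("1", "L", "halt"),     # overflow adds a new digit
}

print(run_turing_machine(increment_rules, "1011", head=3))  # prints 1100
```

Whether brains work this way is exactly what the thread disputes; the sketch only shows what "one machine emulating another" means.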
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm And as Dreyfus says, “if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device”. Whilst we are nowhere near to building machines of such complexity, if Deutsch, Dreyfus et al are right, which I think they are, then artificial neural networks that produce consciousness must be possible.

Following the laws of physics does not entail following the laws of computation.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm It’s hard to see how those who say that the brain is not a computer could be right. That functioning brains “compute” is beyond question. The very word “computer” was first used to refer to people whose job it was to compute. And they computed with their brains. Those who say the brain is not a computer, and that consciousness in a non-biological substrate is impossible, will never be able to say what consciousness is if it does not emerge from processes and states in brains, and nor can they say why it is impossible to produce consciousness in artificial neural networks of the requisite complexity.

All calculators, analog or digital, compute. It is beyond question that they are not functioning brains; therefore, the best that advocates of the computational theory of mind can argue is that some brain functions do require computing (an assumption that I would be willing to challenge), but even if that were conceded, it would not explain mind, or consciousness, at all. Notice also that the difference between conscious computing and unconscious computing implies that it is wrong to assume that they are exactly the same process in people's brains. The people who used to compute manually were actually using language and visual tools, external to their brains, to do the task consciously, after having understood the meaning of mathematical relations.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm Even Searle admits that mind emerges from processes in physical brains: “if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device”. I think that’s right. And I think progress will be made as we identify the actual relationship between the machinery in our heads and consciousness.

Sure, he admits it and I admit that, too: if we ever find a system that replicates brain functions, it will be a system obeying the laws of physics. Is it possible? Theoretically, yes. Technically achievable? We don't know yet, because no such system that is not computational has been researched. All research is done with computational devices under the assumptions of the computational theory of mind. And since computational systems cannot solve it, we are stalemated.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm There are various objections to the computational theory. However, these objections can be countered. For example, the so-called “Chinese Room” thought experiment, which has attained almost religious cult status among AGI “impossibilists”, can be countered. One response to the “Chinese Room” has been that it is “the system” comprised of the man, the room, the cards etc., and not just the man, which would be doing the understanding, although, even if it were possible to perform the experiment today, it would take millions of years to get an answer to a single simple question.

A very poor argument, I must say. It does not address the main issue, which is that with syntactical operations, actions can be performed that resemble those a conscious agent would make, without actual agency and consciousness involved.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm
There are other responses to Searle's overall argument, which is really just a version of the problem of other minds, applied to machines. How can we determine whether they are conscious? Since it is difficult to decide if other people are "actually" thinking (which can lead to solipsism), we should not be surprised that it is difficult to answer the same question about machines.

Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm

Again, this is the false dilemma fallacy. No, the only path to explaining consciousness as a biological phenomenon is not "artificial neural networks detectable by a Turing test".
Searle argues that the experience of consciousness cannot be detected by examining the behavior of a machine, a human being or any other animal. However, that cannot be right because, as Dennett points out, natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) cannot be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is possible and consciousness can be detected in artificial neural networks by a suitably designed Turing test – that is, by observing the behaviour and by taking seriously the self-reporting of complex artificial neural networks which will, eventually, be built.

Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm

It is beyond question that any solution to the problem of artificial intelligence will have to be produced with a physical system, within a materialist, scientific approach, but I find untenable the position that it can only be obtained through computational means, and by taking for granted that the biological mind is a digital computer. So, materialism holds as true, even when we reject the current AI program as a candidate for achieving real AI.
In light of my belief in materialism, and in light of what I have said above (and at the risk of being accused of posing a false dilemma) I am bound to say that, at present, I must accept either that consciousness is a result of computation, or that it is the result of something “spooky”. I don’t believe the latter.
Any plausible account of consciousness will be a materialist, scientific account which will show that consciousness is a result of physiological states and processes. If materialism is true, then how else could consciousness be explained except by physiological processes and states? Since I believe consciousness cannot be otherwise explained, I also believe these physical processes and states must eventually be capable of being reproduced in a non-biological substrate.
The Beast wrote: ↑November 1st, 2024, 12:16 pm So far everything that is transferable to machines is being done. If in the future, there is more transferable, to some point where there is no differentiable substance, then there is one substance, and the substance is intelligent.

Matter has properties and potential properties. Aristotle named Techne as the potential in the DNA. Techne has a form, and in human Techne it is human form, as in elements it is their combination properties. Artificial neural networks are software in a medium. It is a virtual simulation of some brain function that is good at weighting evidence… So, another axiomatic possibility (of what is) is that there is one substance with techne (virtual intelligence). If matter is at the level of particles, then virtual is bosonic. If matter is defined as bosonic, then Techne is a virtual dimensional unknown… or not (tachyons).
Sy Borg wrote: ↑October 31st, 2024, 4:23 pm Not really.

Sculptor1 wrote: ↑October 31st, 2024, 12:25 pm That's like saying that newborn Johnny will be a doctor when he grows up but deciding that that's impossible because he is already three years old and still hasn't shown signs of medical competence.

Pattern-chaser wrote: ↑October 31st, 2024, 8:14 am I do not hold with any "what about the future" arguments.

Sculptor1 wrote: ↑October 30th, 2024, 1:22 pm After some time you realise that AI is a sophisticated encyclopaedia, rather than an intellect.

Yes, I think of them as super-Google, but that amounts to the same thing. And yet, as others have commented, that's just today. There is a lot of work going on, and we don't know what the future holds. Maybe it even holds intelligent AI? We'll have to wait and see.
But I wonder if we're implementing AI too widely already? Many major websites now use AI as their 'help' function. No matter how much you want to, you can't get past it, to reach a human. There *are* no humans involved any more. So if you have a significant help request, you will receive no help, and there is no way to get any. This is a trivial complaint, not world-breaking at all. But it *is* a bit annoying...
If they were to be believed, we would have had hotels on the Moon and Mars by now, just based on the 1970s space programme.
Count Lucanor wrote: ↑November 1st, 2024, 3:02 pm It is beyond question that any solution to the problem of artificial intelligence will have to be produced with a physical system, within a materialist, scientific approach, but I find untenable the position that it can only be obtained through computational means, and by taking for granted that the biological mind is a digital computer. So, materialism holds as true, even when we reject the current AI program as a candidate for achieving real AI.

Count Lucanor, I have never said that the brain is a “digital” computer. All I have said is that it computes. What we know is that it is a network of neurons which receive input, process that input and generate output, and that consciousness is produced as a result of this system. When we shut this process down, consciousness ceases.
Steve3007 wrote: ↑November 1st, 2024, 8:40 am I think I know what you mean here. All systems operate under a combination of deterministic forces (such as physical laws) and stochastic, randomly-determined processes.

Stochasticity at the end just means “too many variables and complexity” to be able to be predicted by humans. That’s what the universe is all about. Many things, if not almost anything, can potentially be simulated with computer software: from the laws of motion that had to be simulated in the old Gorilla.BAS game, to the simulation of hurricanes or other complex systems. But of course, that's because we understand to a great extent the forces and parameters involved, unlike mental processes, of which we know very little.

Well, I'd say it depends on the level of understanding you're referring to. (Sorry if this part is a bit long, but bear with me.)
Yes, we can simulate complex physical systems using our knowledge of the laws of physics which we've formulated to describe them. I've done it myself with several kinds of systems. My dissertation project for this AI Masters thing that I've just finished involved using numerical solutions of the Navier-Stokes equations (physics equations describing the behaviour of fluids) to simulate fluid flow around various types of obstacles (with complex patterns of turbulence, vortices and so on emerging from the simulation) and then training an artificial neural network (ANN) to be able to predict that fluid flow without needing to use the equations. […]
So, yes, we understand quite well the basic physics of (for example) fluid flow. But we don't necessarily understand the complex behaviour that emerges when we apply that understanding (in the form of equations predicting the velocity and pressure of small elements of fluid) en masse to very large numbers of fluid elements over many, many time steps. In physics, macroscopic behaviours sometimes seem to take on a life of their own, with whole new phenomena emerging which don't meaningfully exist in the microscopic forces that add up to create that macroscopic world; phenomena that are only meaningful as statistical properties of large systems. A classic example is the concept of the time-directionality of physical processes emerging in the laws of thermodynamics, which themselves are macroscopic, statistical laws derived from underlying laws describing the ways that countless molecules bounce off each other; underlying laws which don't have that time-directionality.
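The surrogate-model idea described above (simulate a system from its governing equations, then train a data-driven model to predict the outcome without the equations) can be sketched in a few lines. This is only an illustrative stand-in, not the dissertation code: instead of Navier-Stokes and a neural network, it uses projectile range (echoing the Gorilla.BAS example mentioned earlier) as the "physics solver" and a simple polynomial fit as the learned model; all names and parameter values are invented for the sketch.

```python
import numpy as np

g, v0 = 9.81, 20.0  # gravity, launch speed (arbitrary illustrative values)

def simulated_range(angle_rad):
    """The 'physics solver': closed-form range of a projectile."""
    return v0**2 * np.sin(2 * angle_rad) / g

# Generate training data by running the solver over many launch angles.
angles = np.linspace(0.1, 1.4, 50)
ranges = simulated_range(angles)

# Fit a data-driven surrogate (a polynomial here, an ANN in the thesis).
surrogate = np.polyfit(angles, ranges, 6)

# The surrogate now predicts without ever touching the physics again.
pred = np.polyval(surrogate, 0.7)
print(abs(pred - simulated_range(0.7)) < 0.05)  # close to the solver
```

The design point carries over: the surrogate never "knows" the equations, it only reproduces their behaviour from examples, which is the sense in which an ANN can predict fluid flow without Navier-Stokes.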
Steve3007 wrote: ↑November 1st, 2024, 8:40 am

I think it would be appropriate to make a distinction, at least temporarily, between non-biological systems governed only by the laws of physics, such as fluids, hurricanes, and so on, and biological systems with additional emergent properties. Ultimately, they are both affected by stochastic processes, but that doesn’t put both at the same level. You still need to understand the fundamental forces operating on them to be able to construct a good model. When it’s about fluids, hurricanes, earthquakes, atoms moving, etc., we have it more or less sufficiently covered. When it comes to biological systems, a different layer of complexity is added, but we can say we have made pretty good advances and most likely will gain more territory (surely with the help of computers). But when it comes to understanding, in biological systems, the governing principles of the physical processes in the particular subdomain of consciousness, we are still scratching the surface. We know all the anatomic gear that is necessary, we know the functions of each component in the system, but still, we don’t know how it produces qualia. Is it through syntactical, algorithmic operations of binary oppositions, like computers? No one has demonstrated such a thing, and the fact that computers can simulate things that look, externally, like signs of consciousness doesn’t prove that consciousness (or intelligence, if you wish) is produced that way. If the hypothesis is that, regardless of the fundamental physical processes and the absence of the biological gear, a large number of operations will work stochastically to produce such a complex and dynamic system in which consciousness will emerge, it is still lacking empirical confirmation.
In the same way, some people propose that intelligence/sentience/etc is a phenomenon that emerges when you wire billions of neurons together in complex ways. It would be meaningless to say that an individual neuron has sentience, analogously to the way that it's meaningless to say an individual molecule has thermodynamic temperature and pressure.
So what's my point in saying all this in reply to that passage from you?
Well, my point is that it might (just might) be possible in principle to understand the laws of physics and chemistry relevant to the workings of neurons enough to simulate them in software, so that when they are connected together in numbers that are comparable to biological brains something emerges which we might call a mental process about which we still know very little. You don't need to understand how those mental processes - those complex interactions of billions of neurons - work in order to create simulations like that. Just as you don't need to understand the complexities of turbulent flow in order to create a model in which loads of little elements of fluid exert pressure on their immediate neighbours.
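For concreteness, here is the kind of "extremely simplistic" neuron simulation being discussed: a toy network of leaky integrate-and-fire units, randomly wired and driven by a constant input. Every parameter value is invented for the sketch, and nothing here claims to capture real neurons; the point is only that the element-level rule is trivial while the network-level activity is not written down anywhere in the rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 100, 200                     # 100 toy neurons, 200 time steps
threshold, leak, drive = 1.0, 0.9, 0.12  # spike threshold, decay, input

weights = rng.normal(0.0, 0.12, (n, n))  # random synaptic weights
voltage = np.zeros(n)
spike_counts = np.zeros(n)

for _ in range(steps):
    fired = voltage >= threshold         # which neurons spike this step
    spike_counts += fired
    voltage = np.where(fired, 0.0, voltage)           # reset after a spike
    voltage = leak * voltage + drive + weights @ fired  # leak, input, synapses

print(spike_counts.sum() > 0)            # the network produces activity
```

Whether scaling such simulations up could ever yield a mental process is, of course, precisely the open question of the thread; the sketch only shows that building one does not require understanding the network-level behaviour in advance.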
Steve3007 wrote: ↑November 1st, 2024, 8:40 am

But the problem is not only the complexity of the system to be simulated, but the nature of the process itself: syntactical, algorithmic operations are not equivalent to semantical processes. The computational theory was at some point a reasonably good shot, but experience has shown that it isn't anymore. What's being simulated currently in AI is not the physical process that occurs in the brain, but merely some of its effects in language processing and mathematical calculations, once the corresponding syntax has been translated to a computer language.

I disagree with this bit. As I said before, the computer simulations of neurons are extremely simplistic and don't capture anything close to all of the physical properties of neurons, but they are simulations of interconnected neurons, even if simplistic. Just as a computational solution of the Navier-Stokes equations is a simulation of physical fluid flow. All simulations, by their nature, are incomplete. All models in physics, by their nature, are incomplete. But they can be made to get arbitrarily close to completeness.
Steve3007 wrote: ↑November 1st, 2024, 9:38 am Continued from where I left off, answers to Count Lucanor:

OK, you are right, I should have said: virtual simulations and replications are two different things. It still remains true that simulations are not replications. I saw a street puppeteer the other day; he was doing an awesome simulation of a famous artist, and it almost looked as if it was the real person. No one thinks, though, that people’s behavior is produced with strings attached to sticks.
Count Lucanor wrote: Simulation and replication are two different things. The latter requires some actual physical activity, not just virtual.

Well, all simulations involve at least some physical activity, or else we wouldn't even know they were happening. But there's no reason why they couldn't involve more.
Steve3007 wrote: ↑November 1st, 2024, 9:38 am

My point is that an infection is something that happens to you, while the AI scenario in discussion is the result of deliberate, conscious human actions.

Count Lucanor wrote: I'm not sure I understand your point there. My point was about the hypothetical situation of an AI distributed across ("infecting") the internet, such that parts of it could exist in any computer hardware connected to that network. When you said that hypothetical AI cannot harm human interests without the participation of humans, my reply was that even though this is true it doesn't help us, for the reasons I gave.

Steve3007 wrote: Yes, of course it takes all kinds of human activity to maintain the hardware of a computer network on which the software runs. But my point in that passage was that if an AI distributed across the internet were possible, then it could harm human interests simply because, as I said, the cure might be as bad as the problem. As I said, the world's economy is now so dependent on this technology for such things as the logistics of food distribution (and almost everything else) that we couldn't just "pull the plug". Saying that hypothetical AI cannot harm human interests without the participation of humans is a bit like saying a cancer or a virus can't harm your body without your participation. You're right. It can't. If you refuse to participate by "switching off" that body on which the pathogen relies for its survival, then you kill it. But that's not much consolation for you!

The objection I have to that analogy is that in the case of sickness, one is mostly a passive recipient, notwithstanding it might be a consequence of bad habits done consciously, but in the hypothetical AI scenario we are talking about active participation and leadership. Not only that, but also in a complex social environment of cooperation and power struggles, where personal actions imply assessments and conscious decisions.
Sculptor1 wrote: ↑November 1st, 2024, 7:55 pm

Sy Borg wrote: ↑October 31st, 2024, 4:23 pm Not really.

Sculptor1 wrote: ↑October 31st, 2024, 12:25 pm That's like saying that newborn Johnny will be a doctor when he grows up but deciding that that's impossible because he is already three years old and still hasn't shown signs of medical competence.

Pattern-chaser wrote: ↑October 31st, 2024, 8:14 am I do not hold with any "what about the future" arguments.
It's more like saying that little Johnny is not going to be a wizard capable of turning back a Balrog.
Sy Borg wrote: ↑November 2nd, 2024, 1:53 am No need to disagree. You are just wrong. In our example we have many examples of little Johnny growing up to be a doctor.

Sculptor1 wrote: ↑November 1st, 2024, 7:55 pm

Sy Borg wrote: ↑October 31st, 2024, 4:23 pm Not really.

Sculptor1 wrote: ↑October 31st, 2024, 12:25 pm That's like saying that newborn Johnny will be a doctor when he grows up but deciding that that's impossible because he is already three years old and still hasn't shown signs of medical competence.
We will have to agree to disagree.
Still, AI has barely started. It's probably not even little Johnny yet, more like an embryo. ChatGPT was released in 2022. AI's potential is far beyond our comprehension, just as the internet would be far beyond the comprehension of ancient Egyptians.
Lagayascienza wrote: ↑November 1st, 2024, 9:44 pm

It was not necessary that you said “digital”; it was implicit. You have subscribed to the computational theory of mind, which relies on the architecture of modern computers. Neural networks, which you seem to posit as good candidates for emulating brains, are simulated neurons in modern computers. And all modern computational devices are digital. When Searle rejects the computational theory of mind, he’s also referring to digital computers.

Count Lucanor wrote: ↑November 1st, 2024, 3:02 pm It is beyond question that any solution to the problem of artificial intelligence will have to be produced with a physical system, within a materialist, scientific approach, but I find untenable the position that it can only be obtained through computational means, and by taking for granted that the biological mind is a digital computer. So, materialism holds as true, even when we reject the current AI program as a candidate for achieving real AI.

Count Lucanor, I have never said that the brain is a “digital” computer. All I have said is that it computes. What we know is that it is a network of neurons which receive input, process that input and generate output, and that consciousness is produced as a result of this system. When we shut this process down, consciousness ceases.
Lagayascienza wrote: ↑November 1st, 2024, 9:44 pm The literature on this topic is vast and various and there is no consensus. The “impossibilists” and Mysterians rail against the very idea of artificial intelligence and consciousness while, for their part, materialist computationalists continue to point out why it must be possible. The fundamental question is: if consciousness is not produced by some sort of computation, then how is it produced? You say that brains are not biological computers. But, if they are not computing, then what are they doing when they receive input and generate output? And if it is not this process which produces consciousness, then what other process in brains does produce consciousness?

The answer is: we don’t really know, but we can reasonably assert that it is not by computation, that is, by syntactical procedures; and we can also confidently say that whatever happens does not happen in an isolated brain, but in the organism as a whole.
Lagayascienza wrote: ↑November 1st, 2024, 9:44 pm The bottom line for me is that either consciousness is a result of the physical processes and states in the biological computers we call brains, or it is not, in which case we have no explanation at all for consciousness except a “spooky” or supernatural explanation. There being no evidence for the latter, I must go with the former.

That’s the false dilemma fallacy, reinstated.
Sculptor1 wrote: ↑November 2nd, 2024, 5:55 am Nope, you are obviously wrong (yet again), so certain without cause that AI will always be more or less like it is today. Ancient Egyptians obviously would be amazed and nonplussed by the Internet. People found the internet strange and uncanny in the 1990s, let alone 4,000 years ago.

Sy Borg wrote: ↑November 2nd, 2024, 1:53 am No need to disagree. You are just wrong. In our example we have many examples of little Johnny growing up to be a doctor.

Sculptor1 wrote: ↑November 1st, 2024, 7:55 pm :lol:

Sy Borg wrote: ↑October 31st, 2024, 4:23 pm Not really.
Still, AI has barely started. It's probably not even little Johnny yet, more like an embryo. ChatGPT was released in 2022. AI's potential is far beyond our comprehension, just as the internet would be far beyond the comprehension of ancient Egyptians.
And no, you cannot say that about an Egyptian. You just do not know; given the right tuition, they might well have understood it.
And it is utterly irrelevant. It does not advance your argument.
Sy Borg wrote: ↑November 2nd, 2024, 7:10 pm By the same token, an Egyptian would not have predicted a computer or a car. You cannot predict the sort of "progress" that will occur in the matter of AI.

Sculptor1 wrote: ↑November 2nd, 2024, 5:55 am Nope, you are obviously wrong (yet again), so certain without cause that AI will always be more or less like it is today. Ancient Egyptians obviously would be amazed and nonplussed by the Internet. People found the internet strange and uncanny in the 1990s, let alone 4,000 years ago.

Sy Borg wrote: ↑November 2nd, 2024, 1:53 am No need to disagree. You are just wrong. In our example we have many examples of little Johnny growing up to be a doctor.

Sculptor1 wrote: ↑November 1st, 2024, 7:55 pm
Not really.
Life and humans today are not a peak of nature that can never be surpassed. There is a possible future that involves progress, not just decay and destruction of current forms, reverting to simpler ones. Emergence is real, the story of the last 4.6 billion years. Just because something doesn't happen in one's lifetime does not mean it won't happen at all.
Sculptor1 wrote: ↑November 1st, 2024, 7:55 pm It's more like saying that little Johnny is not going to be a wizard capable of turning back a Balrog.

Not everyone has the skills to be a wizard. If Johnny is such a person, then wizardry is not really an option for him. The real world is like that...
Sculptor1 wrote: ↑November 2nd, 2024, 8:08 pm You don't seem to understand that AI does not need to turn into biology to be intelligent. It is already intelligent in certain applications. We can quibble about the definition of "intelligence" but, based on its interpretation before ChatGPT, AI today is absolutely intelligent. In its own limited way, it understands what is being asked, even when significant mistakes are made in typing the questions.

Sy Borg wrote: ↑November 2nd, 2024, 7:10 pm By the same token, an Egyptian would not have predicted a computer or a car. You cannot predict the sort of "progress" that will occur in the matter of AI.

Sculptor1 wrote: ↑November 2nd, 2024, 5:55 am Nope, you are obviously wrong (yet again), so certain without cause that AI will always be more or less like it is today. Ancient Egyptians obviously would be amazed and nonplussed by the Internet. People found the internet strange and uncanny in the 1990s, let alone 4,000 years ago.

Sy Borg wrote: ↑November 2nd, 2024, 1:53 am No need to disagree. You are just wrong. In our example we have many examples of little Johnny growing up to be a doctor.
You are just shooting the breeze.
An Egyptian might have dreamed about travelling to the underworld or stepping into the sun to meet Ra. But we know that you can never land on the sun.

There is no prospect that AI will be intelligent; it is about as likely as landing on the sun. You can have as much progress as you like, but some things remain impossible.

You cannot breathe a vacuum.