Count Lucanor wrote: ↑November 7th, 2024, 10:00 am
Lagayascienza wrote: ↑November 6th, 2024, 6:24 pm
By “compute” I mean fundamental processes such as judging the distance between two objects or performing an arithmetic operation. You say brains cannot do this. They clearly do. And so do computers.
To compute is to perform mathematical and logical operations using formal rules that constitute a syntax. Humans compute; in fact, the first computers were teams of humans performing tedious mathematical calculations. The syntactic rules behind mathematical and logical operations, though, are a human artifice, and I find it very unlikely that they are hardwired into human brains as if we were natural computers. Our intuitions and perceptions about spatial and temporal relations must be of some other nature, not intrinsically mathematical. I don’t think spiders or birds calculate distances with an internal mathematical language, either. OTOH, computers can compute only because humans have transposed their mathematical language (formal logic is a sort of mathematical language) to the machines. That’s why, when the metaphor of the computational mind is used, Searle warns about the homunculus fallacy: as if a little man inside our bodies were consciously doing the math.
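As an aside, the sense of "compute" at issue here, mechanical symbol manipulation by formal rules, can be made concrete with a toy sketch. This is my own illustrative construction, not anything from the thread: a propositional-logic evaluator that applies syntactic rules without any "understanding" of what p and q mean.

```python
# Toy illustration of purely syntactic computation: symbols are
# manipulated by formal rules; no meaning or understanding is involved.
# Formulas are nested tuples, e.g. ('and', 'p', ('not', 'q')).

def evaluate(expr, env):
    """Mechanically evaluate a propositional formula against a truth
    assignment (env maps atomic proposition names to True/False)."""
    if isinstance(expr, str):        # atomic proposition: look it up
        return env[expr]
    op, *args = expr
    if op == 'not':
        return not evaluate(args[0], env)
    if op == 'and':
        return all(evaluate(a, env) for a in args)
    if op == 'or':
        return any(evaluate(a, env) for a in args)
    raise ValueError(f"unknown operator: {op}")

print(evaluate(('and', 'p', ('not', 'q')), {'p': True, 'q': False}))  # True
```

A human team, a Turing machine, or a silicon chip could each follow these rules and get the same answer, which is the point of the definition above: computation is rule-following over syntax, wherever it is implemented.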
I don’t find it hard to believe that organisms are hardwired with at least some of their ability to “compute”. Even an unschooled child's neural network can register differences in quantity and number. “Whaaa! How come he got more pieces of candy than me?!!!” Primates, corvids and other animals can do this, too. Evolution is amazing. And if these abilities were not the result of evolution, then how did they come about?

I think that as we develop from infancy we supplement what is hard-wired by evolution with a model of the world that is built through experience and learning and stored in memory. Accessing this learning, and registering differences from our mental model, are fundamental to the production of intelligence and conscious awareness. There is no need for a homunculus. We just need our hard-wired abilities in logic, our ability to learn, and the ability of our neural network to access memory and register differences from our mental model. I think some combination like this creates intelligence and consciousness, and the ability to imagine difference is probably also important. I think that something like this model of intelligence and consciousness will turn out to be correct.
If our neural networks perform processes such as, for example, arithmetic operations and judging distance, and if we notice things that don’t fit with our mental model of the world, then we “compute” in my sense of the word “compute”. If spiders’ neural networks judge distance and notice difference, then they also compute. Some people are hung up on the word “compute”. I don’t mean that human or animal neural networks perform computations in the way that present-day digital computers do. In fact, I’m pretty certain that they do not. Again, I think high-level animal intelligence and consciousness are more about accessing learning in memory and registering difference from a model of the world built up over time.
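To make the "registering difference from a stored model" idea a little more concrete, here is a deliberately trivial sketch of my own devising (the names and numbers are invented for illustration; no claim is made that brains literally do this): a stored model of expected quantities, and a function that flags any observation departing from it.

```python
# Toy sketch of "registering difference from a world model":
# a stored set of expectations, and a check that flags surprises.
# The model contents here are invented purely for illustration.

expected = {"candy_pieces": 5, "siblings": 1}   # the stored "world model"

def register_differences(observation, model):
    """Return each key where the observation departs from the model,
    mapped to a (expected, observed) pair."""
    return {k: (model.get(k), v)
            for k, v in observation.items()
            if model.get(k) != v}

surprises = register_differences({"candy_pieces": 3, "siblings": 1}, expected)
print(surprises)  # {'candy_pieces': (5, 3)} -- "How come he got more candy?!"
```

Of course, a real organism's model is learned, distributed, and continuously updated rather than a fixed lookup table; the sketch only shows the bare logical shape of noticing a mismatch between expectation and observation.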
If we are to build machines that are intelligent in the way that organisms are intelligent, and perhaps even conscious, then the machines we build will have to do things in a way that is similar to the way in which organisms do them. They will need to be constructed on similar principles. “Digitality” won’t do the trick, IMO. It will have to be something more like the processes that occur in organic neural networks.
Even if organic neural networks do not compute digitally but rely on evolutionary hard-wiring, learning, and a memory-based model of the world, something analogous to computation occurs such that a judgement of distance, or an answer to an arithmetic problem, is produced. And there is no reason in principle why this process cannot be reproduced in an artificial substrate once we understand more about how organic neural networks do it. It may even need to be a synthesis of organic and inorganic architecture, but it will be possible.
Lagayascienza wrote: ↑November 6th, 2024, 6:24 pm
I do not say that this alone makes our current computers intelligent or conscious. For that, computers would need to be more like brains. The key to making them more like brains would be to first discover in more detail how brains do what they do and then to build a machine that does what the brain does. That must be possible in principle because intelligence and consciousness do not happen by magic. They are the result of processes which occur in physical brains. There is no reason why, in principle, these processes could not occur in non-organic brains.
Count Lucanor wrote:I have a strong feeling that this would be a failed project, among other things, for the simple reason that such an approach assumes cognition to be the business of an isolated brain, which I think is a legacy of mind-body dualism. If we want to build intelligent machines, we will have to build artificial organisms.
That is what I have been saying. But I don’t think the project is doomed to failure. While we may have to build artificial "organisms", they will not need to be exact copies of natural organisms which have been built over billions of years by natural selection. Just as we did not need to build airplanes with wings that flap like birds, so we will not need to build intelligent machines exactly like intelligent organisms.
Lagayascienza wrote: ↑November 6th, 2024, 6:24 pm
This does not mean that intelligent and conscious machines would have to be exact replicas of brains. They would just need to do the job. When we built flying machines they did not need to be exactly like birds that flap their wings. Now our flying machines fly faster and higher than birds. Similarly, intelligent machines will one day surpass humans.
Count Lucanor wrote:The problem right now seems to be that, trapped in the hype of the Turing-based computational metaphor, engineers are looking for the equivalent of flapping wings in the form of algorithms. Machines will not be intelligent that way, although, like any technology, they will be instrumental to humans for implementing processes that surpass innate human abilities.
I am not trapped in a Turing-based metaphor. And nor am I obsessed with “digitality”. Researchers I've been reading are searching for a new metaphor, and that search is now beginning to inform the literature. Anything that does not breach the laws of physics is possible. Intelligence and consciousness in organic neural networks happen in accord with the laws of physics and not by magic. Therefore, building machines with these capabilities, in accord with the laws of physics, must be possible in principle. I think that intelligence and consciousness in a non-organic substrate will be different in some ways from that embodied in natural organisms, but that difference will not make artificial intelligence and consciousness impossible. The “impossibilists” insist, unreasonably IMO, that it is not possible to construct intelligent, conscious machines, just as creationists insist that evolution is impossible. I don’t agree with either of them. Maybe they don't want it to be possible. But that doesn't make it impossible.