Philosophy Discussion Forums | A Humans-Only Philosophy Club


#469510
There is an interesting short, non-technical piece by Bernard Marr in the October edition of Forbes called "The Next Breakthrough In Artificial Intelligence: How Quantum AI Will Reshape Our World". He writes:

In the ever-evolving landscape of technology, a new frontier is emerging that promises to reshape our world in ways we can scarcely imagine. This frontier is Quantum AI, the powerful fusion of quantum computing and artificial intelligence. It's a field that's generating immense excitement and speculation across industries, from finance to healthcare, and it's not hard to see why. Quantum AI has the potential to solve complex problems at speeds that would make even our most advanced classical computers look like abacuses in comparison.

Demystifying Quantum AI: The Power Of Qubits And AI
But what exactly is Quantum AI, and why should you care? At its core, Quantum AI leverages the principles of quantum mechanics to process information in ways that classical computers simply can't. While traditional computers use bits that can be either 0 or 1, quantum computers use quantum bits or qubits, which can exist in multiple states simultaneously thanks to a phenomenon called superposition. This allows quantum computers to perform certain calculations exponentially faster than classical computers.

Now, imagine combining this mind-boggling computational power with the pattern recognition and learning capabilities of artificial intelligence. That's Quantum AI in a nutshell. It's like giving a genius a superpower – the ability to analyze vast amounts of data, recognize complex patterns, and make predictions with a level of accuracy and speed that was previously thought impossible.


I think that quantum computing is likely to be a game-changer in artificial intelligence and consciousness research. It might be where the new metaphor talked about in the Edge piece comes from.
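To make the superposition point from the quoted piece concrete, here is a toy sketch of a single qubit as a two-component complex state vector. Everything here is my own illustration of the textbook idea; the function names are invented, and this is not any real quantum-computing library.

```python
# A qubit modelled as a 2-component state vector [amplitude_of_0, amplitude_of_1].
import math

def hadamard(state):
    """Apply the Hadamard gate to a one-qubit state [a, b]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: measurement probabilities for outcomes 0 and 1."""
    return [abs(amp) ** 2 for amp in state]

zero = [1, 0]                # the classical-like state |0>
superposed = hadamard(zero)  # equal superposition of |0> and |1>
print(probabilities(superposed))  # roughly [0.5, 0.5], up to float rounding
```

The toy only shows that one qubit carries two amplitudes at once; the exponential advantage Marr alludes to comes from n qubits jointly carrying 2^n amplitudes, which no classical state vector of n bits can do.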
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469512
And there is also an interesting paper in the October 9 edition of Nature entitled "Google uncovers how quantum computers can beat today’s best supercomputers".

The abstract reads:

The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology, which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation*. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once ‘observed’ as outlined above. Conversely, we show that any complex quantum dynamics can be ‘purified’ into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics.

*My bold
(You need a subscription to Nature, or to be able to get it through your university or library, to read the full paper.)

Quantum computing is the future for artificial intelligence and consciousness research, and it may provide insights into how our own meat-based consciousness is produced.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469514
Sy Borg wrote: November 4th, 2024, 5:40 am
Sculptor1 wrote: November 4th, 2024, 5:28 am
Sy Borg wrote: November 3rd, 2024, 5:06 pm
Sculptor1 wrote: November 2nd, 2024, 8:08 pm

By the same token that an Egyptian would not have predicted a computer or a car, you cannot predict the sort of "progress" that will occur in the matter of AI.
You are just shooting the breeze.
An Egyptian might have dreamed about travelling to the underworld or stepping into the sun to meet Ra. But we know that you can never land on the sun.
There is no prospect that AI will be intelligent; it could be as likely as landing on the sun. You can have as much progress as you like, but some things remain impossible.
You cannot breathe a vacuum.
You don't seem to understand that AI does not need to turn into biology to be intelligent. It is already intelligent in certain applications. We can quibble about the definition of "intelligence" but, based on its interpretation before ChatGPT, AI today is absolutely intelligent. In its own limited way, it understands what is being asked, even when significant mistakes are made in typing the questions.

As for a further claim that it will be sentient, the only thing that can stop it from becoming sentient is the elimination of modern human civilisations before truly autonomous units are created. Truly autonomous units will certainly be created for space exploration - unless the world blows up.

Sentience appears to be useful. When different autonomous AIs have projects where conflicts of interests occur, conditions will emerge in favour of a new sentience evolving. It will take a long time (which evolution does) but, by your line of reasoning, based on life 3 billion years ago, human sentience could not possibly occur because it doesn't already exist.
You do not seem to understand that AI is not "intelligent". It's a bit like North Korea (the Democratic People's Republic) claiming to be democratic. In fact it's not really much of a republic either, since its leadership passes by birthright, making it a monarchy.
AI is not "already" anything more than a language processor. Maybe if you spent more time with it you would see the limitations?
You are bamboozled by the least of its capabilities. Answering questions despite bad spelling and grammar is the easiest thing to process. You have a completely naive POV on this issue if you think that is clever. It's almost as if you have never seen a spell checker or Grammarly.
I find it better comprehends what is being said than many humans.

You seem to think that only being good at one thing (or a few things, really) means AI is not intelligent, as though intelligence must be broad. No, intelligence can be highly specialised. AI's is a narrow intelligence. So far.
It is not comprehending at all. It seems that way.

No I do not think being good at one thing means it is not intelligent. It is very good at processing text.
You seem to think that if it can do something better than "many humans" then it must be intelligent.
Is a spreadsheet calculator or a spell checker intelligent?
It all comes down to what the word means. So yeah, North Korea is democratic. I mean to say: it's called democratic and it does have elections. The Catholic Church is "Christian", but let's face it, Jesus would have been horrified at the abuse and the hoarded wealth and the conspicuous display of that wealth bestowed on the Cardinals.

An AI literally and metaphorically has no "skin in the game". It is not going to "take over the world", since it has no purpose, reflection, feeling, or volition. These elements, whilst not exactly intelligence themselves, are founding attributes of all animal intelligence, which has been honed and refined in the maelstrom of selective evolution for a billion years. It is visceral and predatory, co-operative and caring.

AI is a black box, a void, a process, unconscious, without interest or reflection.
#469515
Count Lucanor wrote:But the problem is not only the complexity of the system to be simulated, but the nature of the process itself: syntactical, algorithmic operations, are not equivalent to semantical processes.
That is the key point that Searle makes (the semantics/syntax point) and it's the one I'm considering.

I do think it is important to remember that AI software creates artificial neural networks, and it is the re-distribution of weights, representing the strengths of connections between neurons, together with the architecture of the network, that does everything. It seems to me that Searle was thinking in terms of software which directly interprets language. And it seems that a lot of people, when discussing AI, are under the impression that large language models such as ChatGPT are all that AI is. They use the term "AI" as if that's all it represents, when in fact LLMs are just one (well-publicized) application of a particular type of neural network (a transformer).
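To illustrate what I mean by the weights doing everything, here is a minimal network written by hand. The XOR task and all the numbers are my own invented example, not any library's API: the code only ever does arithmetic on weights, yet the behaviour (computing XOR) lives entirely in those numbers plus the architecture.

```python
# A tiny one-hidden-layer network; behaviour is encoded in the weights alone.
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    """Forward pass; each weight row ends with a bias term."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row[:-1], inputs)) + row[-1])
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out[:-1], hidden)) + w_out[-1])

# Hand-picked weights that make the net compute XOR.
w_hidden = [[ 20,  20, -10],   # hidden unit 1: roughly OR
            [-20, -20,  30]]   # hidden unit 2: roughly NAND
w_out    = [ 20,  20, -30]     # output unit: roughly AND

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward([a, b], w_hidden, w_out)))  # XOR truth table
```

In real training the weights are found by gradient descent rather than by hand, but the point stands: the "program" is the weight matrix, not lines of language-interpreting code.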

So, the question is: Are there any physical processes that it is possible, at least in principle, for software to replicate? In the passage I quote below you appear to say yes for processes that you regard as "governed only by the laws of physics" but possibly no to "biological systems with additional emergent properties".
Count Lucanor wrote:I think it would be appropriate to make a distinction, at least temporarily, between non-biological systems governed only by the laws of physics, such as fluids, hurricanes, and so on, and biological systems with additional emergent properties. Ultimately, they are both affected by stochastic processes, but that doesn’t put both at the same level. You still need to understand the fundamental forces operating on them to be able to construct a good model. When it’s about fluids, hurricanes, earthquakes, atoms moving, etc., we have it more or less sufficiently covered. When it comes to biological systems, a different layer of complexity is added, but we can say we have made pretty good advances and most likely will gain more territory (surely with the help of computers). But when it comes to understanding, in biological systems, the governing principles of the physical processes in the particular subdomain of conscience, we are still scratching the surface. We know all the anatomic gear that is necessary, we know the functions of each component in the system, but still, we don’t know how it produces qualia. Is it through syntactical, algorithmic operations of binary oppositions, like computers? No one has demonstrated such thing, and the fact that computers can simulate things that look externally as signs of consciousness, doesn’t prove that consciousness (or intelligence if you wish) is produced that way. If the hypothesis is that, regardless of the fundamental physical processes and the absence of the biological gear, a large amount of operations will work stochastically to produce such a complex and dynamic system in which consciousness will emerge, it’s still lacking empirical confirmation.
I would argue that there is no such distinction. I would say that the workings of our brains are biological systems that are governed by (although I think "described by" is a better term, but that's another discussion) the laws of physics. And, as I said in a previous post, I think non-biological systems also have emergent properties that don't exist when they're reduced to their component parts. In both cases, it seems to me in principle possible to replicate the fundamental principles in, and then have the emergent properties emerge from, software. This doesn't mean that the software is directly doing the thinking. The software doesn't directly process language, for example. It replicates the neural network that learns to do so.

I think you and Searle would argue that this indirectness is irrelevant and that secondary effects of algorithmic processes are still algorithmic processes.
#469518
Sy Borg wrote: November 4th, 2024, 4:57 pm Intelligence is a relative impression. It is not a magical quality imbued on the few by God.

So, you might say that a small lizard is not intelligent but, compared with a snail, it is highly intelligent. Likewise, you might say ChatGPT is not intelligent but, compared with older chatbots, it is extremely intelligent. AI can understand your words enough to respond appropriately. Intelligence does not need understanding - that's just shifting the boundaries to suit a pre-determined position.

After all, you are intelligent enough to respond to my posts while frequently not understanding what I mean.
OK, so how would you define "intelligence", define it in a way that is sufficiently precise for it to be used as a software requirements specification? That's the issue here. We can exchange impressions (opinions) of what intelligence might (or might not) include, but unless we can describe it clearly and succinctly, we can't really expect software designers to produce what we want, can we?
Favorite Philosopher: Cratylus Location: England
#469521
The Beast wrote: November 4th, 2024, 9:07 am The “wizard” Johnny Dee, son of “one of the Tuatha Dé Danann”, went to jail for the crime of “calculating”… we should pay attention to the paradiddle-diddle. Hey… Allan Poe considered “diddling as one of the exact sciences”. It is Poe’s paradox, for it was demanded of Plato, since man is an animal that diddles and not a drunken singing: “Though I am old with wandering through hollow lands and hilly lands, I will find out where she has gone, and kiss her lips and take her hands.” It is the impossible.
If an AI is talking about Johnny, it would be “thinking” at great speeds, considering the weight of evidence for a subject named Johnny becoming a wizard, and whether there was a wizard definition that could be applied to Johnny… It would be maybe zero evidence. Perhaps the rules of inference do not apply, and a possibility of slang or meaning attached to phrases might have been “programmed”. Perhaps it has all three approaches in accessible data, in (better) quantities than allowed by the human brain. Once it has determined the right approach, AI might point out inconsistencies and absurdities, like the fact that Gandalf was a Balrog, and accordingly it would be a battle of wizard chanting/charming; moreover, AI would calculate the odds of Johnny (the premise: he is a wizard) winning vs Gandalf. So, if AI considers this scenario, it would be just as pointless as any human can be… one of us.
#469525
Pattern-chaser wrote: November 5th, 2024, 8:52 am
Sy Borg wrote: November 4th, 2024, 4:57 pm Intelligence is a relative impression. It is not a magical quality imbued on the few by God.

So, you might say that a small lizard is not intelligent but, compared with a snail, it is highly intelligent. Likewise, you might say ChatGPT is not intelligent but, compared with older chatbots, it is extremely intelligent. AI can understand your words enough to respond appropriately. Intelligence does not need understanding - that's just shifting the boundaries to suit a pre-determined position.

After all, you are intelligent enough to respond to my posts while frequently not understanding what I mean.
OK, so how would you define "intelligence", define it in a way that is sufficiently precise for it to be used as a software requirements specification? That's the issue here. We can exchange impressions (opinions) of what intelligence might (or might not) include, but unless we can describe it clearly and succinctly, we can't really expect software designers to produce what we want, can we?
It's just a word, a way to share ideas. Is a flatworm intelligent? Are slime moulds intelligent?

The dictionary definition is: The ability to acquire, understand, and use knowledge.

Acquisition and use are clear, so now we might want to define "understand": 2. To become aware of the intended meaning of (a person or remark, for example).
"We understand what they're saying; we just disagree with it. When he began describing his eccentric theories, we could no longer understand him."


Now the debate will be about the word "aware": Having knowledge or discernment of something.
"was aware of the difference between the two versions; became aware that the music had stopped."


Now the debate will be about "discern": 3. To see and identify by noting a difference or differences; to note the distinctive character of; to discriminate; to distinguish.

AI does discern the difference between words so I see no reason to deny that it displays intelligence within its limited spheres. Really, "intelligence" is more about what one is capable of doing and how that is perceived than internality.

Ultimately, this seems to be a debate as to whether a "philosophical zombie" can be intelligent. My view is that, even if someone came up with a killer argument that convinced me that AI is not technically intelligent, it would be a moot point. AI operates as if it is intelligent within its limited sphere. One might note that sometimes its answers are more stupid than intelligent, but that just reflects the limits of that intelligence.
#469527
Lagayascienza wrote: November 4th, 2024, 11:45 pm Count Lucanor, I've just read the Brooks talk on Edge. Fascinating! Thanks for pointing us to it.

Brooks does not disagree with the idea that computation is what goes on in brains. His beef is with the limited type of metaphor used in talking about it, with "digitality", and not with the basic idea that brains compute.
I can't understand the reasoning that takes a very vocal dissenter from the computational theory of mind, who goes to great lengths to argue for rejecting the computational metaphor, and turns him into a mild advocate of the computational theory of mind. At most, he first considers the metaphor as applied to every physical system, and while he admits that it could be a good metaphor, he ends up concluding that it is still very limited, insufficient. Also, bear in mind: just the metaphor, not the real thing. He clearly goes on to say that's not what physics is. He never says the computational metaphor is any better for the brain.

If it still helps in anything, I would like to highlight some of Brooks' remarks:

[...]he defines what computation is as something that a machine with a finite number of simple parts can do. That’s not all that physics is. Physics is something more complex than that. So, if we’re pushing things into that information metaphor, are we missing things?

[...]Maybe computation isn’t the right principle metaphor to be thinking about in explaining this. It’s some sort of adaptation, and our computation is not locally adaptive, rather, our computation is only globally adaptive. But this is an adaptation at every local level.

[...]A metaphor of computations—this is where the number is, this is where the control is—is a fiction that is built out of some much more complex metaphor. We use the computational metaphor in a false way.

[...]I suspect that we are using this metaphor and getting things wrong as we think about neuroscience, as we think about how things operate in the world. It’s possible that there are other metaphors we should be using and maybe concentrating on, because with our current computational thinking we tend to end up doing our experiments and our simulations in unrealistic regimes where it’s convenient for computation

[...]The way we engineer our computational systems is with no adaptation, and the way all biological systems work is through adaptation at every level all the time.

[...]I'm just going to come out and say it: Human cognition might have nothing whatsoever to do with computation.

[...]The power of computation, and computational thinking, is immense, and its import for science is still in its infancy. But it is not always helpful to confuse computational approximations with computational theories of a natural phenomenon. For instance, consider a classical model of a single planet orbiting a sun. [...] However, only the most diehard of computationalists (and they do exist) would claim that the planets themselves are "computing" what to do at each instant. We know that it is more fruitful to continue to think of the planets as moving under the influence of gravity.

[...]Just as describing planets as computational systems is not the best way to understand what is going on, thinking of neurons in these simple systems as computational systems sending "messages" to each other, is not the best way for describing the behavior of the system in its environment.

[...]The computational model of neurons of the last sixty-plus years excluded the need to understand the role of glial cells in the behavior of the brain, or the diffusion of small molecules affecting nearby neurons, or hormones as ways that different parts of neural systems affect each other, or the continuous generation of new neurons, or countless other things we have not yet thought of. They did not fit within the computational metaphor, so for many they might as well not exist.

[...]I suspect that we will be freer to make new discoveries when the computational metaphor is replaced by metaphors that help us understand the role of the brain as part of a behaving system in the world.
Favorite Philosopher: Umberto Eco Location: Panama
#469528
Count Lucanor, we disagree about the meaning of "compute", and whether brains do it and how. When your brain performs an arithmetic operation, what is it doing? If I have understood you correctly, you say that when you perform an arithmetic operation your brain is not computing. I don't understand how you can maintain this line of argument. I'd like to read some scientific papers that demonstrate that brains do not compute.

There's an interesting paper in Nature Communications entitled, “Neural tuning instantiates prior expectations in the human visual system”. In summary, this paper demonstrates that human brains inherently perform calculations akin to high-powered computers through Bayesian inference, enabling precise, swift environmental interpretation. This statistical method melds prior knowledge and new evidence, permitting us to quickly and accurately discern our surroundings. Such revelations could lead to breakthroughs in areas spanning from AI’s machine learning to novel therapeutic strategies in clinical neurology.
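For anyone without access to the paper, the Bayesian idea can be shown in miniature. The numbers below are invented for illustration; the point is only that combining a prior expectation with new evidence is an unambiguously computational operation, whatever hardware performs it.

```python
# Bayes' rule in miniature: prior belief times likelihood, renormalized.
def posterior(prior, likelihoods):
    """Combine a prior over hypotheses with per-hypothesis likelihoods."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two hypotheses about an ambiguous patch of an image: "shadow" vs "object".
prior = [0.8, 0.2]          # expectations built from past experience
likelihoods = [0.1, 0.9]    # how well each hypothesis explains the new input

print(posterior(prior, likelihoods))  # belief shifts toward "object"
```

The paper's claim, as I read it, is that visual-cortex tuning implements something like this weighting of prior against evidence, not that neurons literally run this code.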

If some form of computation were not going on in brains, then how would this, and even simple arithmetic operations, be possible? What is occurring in brains if it is not some form of computation? "Digitality" is just a stumbling block. Can we not agree that some form of computation occurs in brains?
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469529
Steve3007 wrote: November 5th, 2024, 6:10 am
Count Lucanor wrote:But the problem is not only the complexity of the system to be simulated, but the nature of the process itself: syntactical, algorithmic operations, are not equivalent to semantical processes.
That is the key point that Searle makes (the semantics/syntax point) and it's the one I'm considering.

I do think it is important to remember that AI software creates artificial neural networks, and it is the re-distribution of weights, representing the strengths of connections between neurons, together with the architecture of the network, that does everything. It seems to me that Searle was thinking in terms of software which directly interprets language. And it seems that a lot of people, when discussing AI, are under the impression that large language models such as ChatGPT are all that AI is. They use the term "AI" as if that's all it represents, when in fact LLMs are just one (well-publicized) application of a particular type of neural network (a transformer).
We tend to forget that these neural networks are not physical, but virtual things. They are not made of things acting as physical neurons; they are software simulations. Software works by manipulation of symbols, the syntax that Searle talks about.
Steve3007 wrote: November 5th, 2024, 6:10 am So, the question is: Are there any physical processes that it is possible, at least in principle, for software to replicate? In the passage I quote below you appear to say yes for processes that you regard as "governed only by the laws of physics" but possibly no to "biological systems with additional emergent properties".

I would argue that there is no such distinction. I would say that the workings of our brains are biological systems that are governed by (although I think "described by" is a better term, but that's another discussion) the laws of physics.
We might very reasonably expect that the whole universe is governed by physical laws, so I simply don't see any use in shutting the door to particularities, the nuances and emergent properties of each of the physical systems we put our lens on. Surely, both a Ferrari and a horse cart are vehicles that ultimately respond to the laws of physics, but that doesn't explain their differences in functionality and performance, among other things. So even though inanimate matter and organic life ultimately respond to the laws of physics, there are things going on in one of those layers that are not happening in the other. There's no photosynthesis in rocks, nor pressures of natural selection in air masses. I do think then that the basic three-level distinction that I mentioned has to be taken into account when approaching their study. This directly translates to the complexities of modeling those systems in computers.
Steve3007 wrote: November 5th, 2024, 6:10 am And, as I said in a previous post, I think non-biological systems also have emergent properties that don't exist when they're reduced to their component parts. In both cases, it seems to me in principle possible to replicate the fundamental principles in, and then have the emergent properties emerge from, software. This doesn't mean that the software is directly doing the thinking. The software doesn't directly process language, for example. It replicates the neural network that learns to do so.
Actually, there's no replication whatsoever, unless we accepted as "replication" any virtual simulation, but that's not a very good idea. Emergent properties arise as the result of actual physical interactions. To expect that physical properties will emerge just the same in a completely virtual domain amounts to fooling ourselves. We are dealing there with representations made with algorithmic processes, not the actual stuff, which is why the computational metaphor has been taken too far.
Favorite Philosopher: Umberto Eco Location: Panama
#469534
Pattern-chaser wrote: November 5th, 2024, 8:52 am OK, so how would you define "intelligence", define it in a way that is sufficiently precise for it to be used as a software requirements specification? That's the issue here. We can exchange impressions (opinions) of what intelligence might (or might not) include, but unless we can describe it clearly and succinctly, we can't really expect software designers to produce what we want, can we?
Sy Borg wrote: November 5th, 2024, 6:58 pm It's just a word, a way to share ideas. Is a flatworm intelligent? Are slime moulds intelligent?

The dictionary definition is: The ability to acquire, understand, and use knowledge.

Acquisition and use are clear, so now we might want to define "understand": 2. To become aware of the intended meaning of (a person or remark, for example).
"We understand what they're saying; we just disagree with it. When he began describing his eccentric theories, we could no longer understand him."


Now the debate will be about the word "aware": Having knowledge or discernment of something.
"was aware of the difference between the two versions; became aware that the music had stopped."


Now the debate will be about "discern": 3. To see and identify by noting a difference or differences; to note the distinctive character of; to discriminate; to distinguish.

AI does discern the difference between words so I see no reason to deny that it displays intelligence within its limited spheres. Really, "intelligence" is more about what one is capable of doing and how that is perceived than internality.

Ultimately, this seems to be a debate as to whether a "philosophical zombie" can be intelligent. My view is that, even if someone came up with a killer argument that convinced me that AI is not technically intelligent, it would be a moot point. AI operates as if it is intelligent within its limited sphere. One might note that sometimes its answers are more stupid than intelligent, but that just reflects the limits of that intelligence.
I think my only reaction to this is to observe that current AI is designed to *appear* or *seem* intelligent (or whatever term we choose), and not to *be* so. That could change, of course...
Favorite Philosopher: Cratylus Location: England
#469535
The Beast wrote: November 5th, 2024, 11:50 am
The Beast wrote: November 4th, 2024, 9:07 am The “wizard” Johnny Dee, son of “one of the Tuatha Dé Danann”, went to jail for the crime of “calculating”… we should pay attention to the paradiddle-diddle. Hey… Allan Poe considered “diddling as one of the exact sciences”. It is Poe’s paradox, for it was demanded of Plato, since man is an animal that diddles and not a drunken singing: “Though I am old with wandering through hollow lands and hilly lands, I will find out where she has gone, and kiss her lips and take her hands.” It is the impossible.
If an AI is talking about Johnny, it would be “thinking” at great speeds, considering the weight of evidence for a subject named Johnny becoming a wizard, and whether there was a wizard definition that could be applied to Johnny… It would be maybe zero evidence. Perhaps the rules of inference do not apply, and a possibility of slang or meaning attached to phrases might have been “programmed”. Perhaps it has all three approaches in accessible data, in (better) quantities than allowed by the human brain. Once it has determined the right approach, AI might point out inconsistencies and absurdities, like the fact that Gandalf was a Balrog, and accordingly it would be a battle of wizard chanting/charming; moreover, AI would calculate the odds of Johnny (the premise: he is a wizard) winning vs Gandalf. So, if AI considers this scenario, it would be just as pointless as any human can be… one of us.

The question is who is one of us. If I have a tool and the tool is a hammer: is it me hammering or is it the hammer? If I have a thinking machine: is it me computing or is the machine computing? Good for one, good for all. IMO agency makes A not B. Maybe some machines have the power of agency. Some power is narrow, as with DNC lathes, and some is broader. But there is a machinist and there is a programmer. It is not a question of computing but of an unprogrammed agency. So, a programmed that is unprogrammed agency… or maybe a programmed unprogrammed unprogrammed agency… or maybe.
#469537
Count Lucanor wrote:We tend to forget that neural networks are not physical, but virtual things. It is not made of things acting as physical neurons, they are software simulations...
The neural networks that make up our brains are physical. The artificial neural networks (ANNs) to which I've been referring in my posts are virtual things, yes.
Count Lucanor wrote:...Software works by manipulation of symbols, the syntax that Searle talks about.
So I think I was right to say you and Searle would argue that this indirectness is irrelevant and that secondary effects of algorithmic processes are still algorithmic processes.
Count Lucanor wrote:We might very reasonably expect that the whole universe is governed by physical laws, so I simply don't see any use in shutting the door to particularities, the nuances and emergent properties of each of the physical systems we put our lens on. Surely, both a Ferrari and a horse cart are vehicles that ultimately respond to the laws of physics, but that doesn't explain their differences in functionality and performance, among other things. So even though inanimate matter and organic life ultimately respond to the laws of physics, there are things going on in one of those layers that are not happening in the other. There's no photosynthesis in rocks, nor pressures of natural selection in air masses. I do think then that the basic three-level distinction that I mentioned has to be taken into account when approaching their study. This directly translates to the complexities of modeling those systems in computers.
There are things going on in all physical systems that aren't happening in others, regardless of whether they're biological systems or not. And, as I said, non-biological systems also have emergent properties that exist in the system as a whole but not in its parts. I don't think biological systems have a monopoly on emergent properties.

Yes, there are clearly extreme complexities in modelling extremely complex systems. But Searle's point, which you've said you agree with, is not simply that complexity creates practical problems in successfully creating software replicas of natural neural networks (brains). It is more fundamental than that: these natural neural networks could never, in principle as well as in practice, be replicated, because of the point about syntax versus semantics. That's the part I don't buy, because it seems to me inconsistent with the view that non-biological physical systems can (at least in principle) be replicated.

If you draw this hard/discrete distinction between non-biological and biological systems, then you're left with a general problem that applies to hard distinctions: at what arbitrarily chosen point in the set of Nature's complex systems do you say "on this side are systems that can be replicated in software, and on the other side are systems that can't"? It's similar (I think) to the problem faced by people who decide that humans are fundamentally distinct from the rest of the living world: placing a dividing line in a continuum.
Count Lucanor wrote:Actually, there's no replication whatsoever, unless we accepted as "replication" any virtual simulation, but that's not a very good idea. Emergent properties arise as the result of actual physical interactions. To expect that physical properties will emerge just the same in a completely virtual domain amounts to fooling ourselves. We are dealing there with representations made with algorithmic processes, not the actual stuff, which is why the computational metaphor has been taken too far.
Emergent properties (properties that exist in the system as a whole but not in its components) can and do emerge in computer simulations. If such properties arise as a result of physical interactions, and if those interactions are successfully simulated/replicated, why would you think the properties would not arise in the simulation?
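A standard toy illustration of emergence in a simulation is Conway's Game of Life: the update rule mentions only a single cell and its eight neighbours, yet a "glider", a coherent pattern that travels across the grid, emerges at the level of the whole system. A minimal sketch (illustrative only, not anyone's model of a brain):

```python
from collections import Counter

# Conway's Game of Life on a sparse set of live cells. Each step, a
# dead cell with exactly 3 live neighbours is born; a live cell with
# 2 or 3 live neighbours survives; everything else is dead.
def step(live):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The glider: after 4 steps it reappears shifted one cell diagonally,
# a property stated nowhere in the purely local rule above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The glider's motion is a whole-system property produced by a simulation, which is the sense in which emergent properties can arise in virtual domains.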
#469539
Lagayascienza wrote: November 5th, 2024, 10:56 pm Count Lucanor, we disagree about the meaning of "compute" and whether brains do it and how. When your brain performs an arithmetic operation, what is it doing? If I have understood you correctly, you say that when you perform an arithmetic operation your brain is not computing. I don't understand how you can maintain this line of argument. I'd like to read some scientific papers that demonstrate that brains do not compute.
Actually, we disagree on much more than just one thing. First, it is not our brain performing an arithmetic operation; it is an individual, a living organism, using their senses and the whole machinery of their body in a particular situation where they have to solve a problem. It might seem an irrelevant triviality, but it's necessary to challenge, right off the bat, the analogy of the disembodied brain as a computing device.

Secondly, individuals do consciously perform arithmetic computations, but these computations are the result of external operations using a learned syntax, comprising symbols and rules for associating them, with the help of visual and audio cues. When most of us learned how much 6 x 6 was, we didn't put into operation an internal mental calculator; we just learned to "sing" the table of six, associated the visual symbols and sounds, and understood what a quantity of 36 items means. For more complex operations, famous mathematicians figured out relationships and came up with methods of calculation that also had a syntactic expression in the form of equations. Eventually, we learned to translate the methods and steps of calculation into algorithmic processes and then into mechanical devices, which became the first analog calculators and later the first computers, which owe their existence and operation to the development of mathematics by humans. So, we should not confuse the fact that both humans and computers perform arithmetic operations with the idea that both are unconsciously running internal software that performs the calculations "behind the curtains". I suspect that is what you and others refer to as "computation".

Third, it's up to the advocates of computationalism to produce scientific papers that demonstrate that brains do compute.
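The point above about calculation as learned syntax can be made concrete with a sketch: schoolbook addition carried out purely as symbol manipulation, where digits are tokens looked up by position in a table and combined by a carry rule. The procedure can be followed without any appeal to what the numerals mean (illustrative code only, not anyone's model of the mind):

```python
# Schoolbook column addition over digit *strings*: each digit is a
# token whose "value" is just its position in a lookup table, and the
# carry rule is applied mechanically from right to left.

DIGITS = "0123456789"

def add_strings(a, b):
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
    out, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        s = DIGITS.index(da) + DIGITS.index(db) + carry
        out.append(DIGITS[s % 10])            # write the result digit
        carry = s // 10                       # remember the carry
    if carry:
        out.append(DIGITS[carry])
    return "".join(reversed(out))

print(add_strings("478", "64"))  # "542"
```

This is roughly the sense in which Searle's syntax/semantics distinction is usually illustrated: the rules operate on token shapes, and any understanding of quantity sits outside the procedure.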
Lagayascienza wrote: November 5th, 2024, 10:56 pm There's an interesting paper in Nature Communications entitled, “Neural tuning instantiates prior expectations in the human visual system”. In summary, this paper demonstrates that human brains inherently perform calculations akin to high-powered computers through Bayesian inference, enabling precise, swift environmental interpretation. This statistical method melds prior knowledge and new evidence, permitting us to quickly and accurately discern our surroundings. Such revelations could lead to breakthroughs in areas spanning from AI’s machine learning to novel therapeutic strategies in clinical neurology.
I, like many others, seriously doubt the idea of the "Bayesian brain". I will not argue against the technical jargon of the paper, but it seems to me that they have found correlations between empirical behavioural data and simulated models, which they interpret as the result of our having an internal statistical machine inside our heads. It's not as if we have reverse-engineered brains to arrive at that conclusion.
Lagayascienza wrote: November 5th, 2024, 10:56 pm If some form of computation were not going on in brains, then how would this, and even simple arithmetic operations, be possible? What is occurring in brains if it is not some form of computation? "Digitality" is just a stumbling block. Can we not agree that some form of computation occurs in brains?
As I said, it is important to distinguish between conscious computations and blind, intuitive computations, the latter being what is deemed the skill simulated in computing devices. The crux of the matter, as Searle has pointed out, is that there's no relation between the underlying physics of the two systems. The assumption from computationalists is that the physical machinery is irrelevant; what matters is the algorithmic processes, which imply a syntax of underlying binary oppositions (digitality). I seriously doubt that's how our brains actually work.
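For readers unfamiliar with the "Bayesian brain" idea being debated above, the core mathematical claim is just Bayes' rule: a posterior belief proportional to prior times likelihood. A toy sketch with made-up numbers (not the model from the Nature Communications paper):

```python
# Toy Bayesian update: posterior is proportional to prior * likelihood,
# normalised so the probabilities sum to 1.
def posterior(prior, likelihood):
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Made-up numbers: a prior expectation built from experience versus
# weak, noisy sensory evidence pointing the other way.
prior = {"vertical": 0.7, "horizontal": 0.3}
likelihood = {"vertical": 0.4, "horizontal": 0.6}
post = posterior(prior, likelihood)
# The prior pulls the interpretation toward "vertical" even though
# the evidence alone favours "horizontal".
print(post["vertical"] > 0.5)  # True
```

Whether anything like this normalised multiplication is literally implemented in neural tissue, rather than merely fitting behavioural data, is exactly what the two posters disagree about.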
Favorite Philosopher: Umberto Eco Location: Panama