Sy Borg wrote: ↑June 24th, 2025, 6:22 pm
Count Lucanor wrote: ↑June 24th, 2025, 11:51 am
Actually, the basic principle is still the same: computing as a theory of mind. The new tech just keeps scaling up with the hope that intelligence will emerge from this. The language, just as in the beginning, has been put to serve this fantasy: "learning", "training", etc. But by now it has become clear that the project has failed.
AI has failed because it's not become human within a few years??
Your own failure is noted.
No, it has failed because the core principle behind AI, computing, has been shown not to be the way human intelligence operates. Not just by pure analytical reflection anymore, but by empirical tests. As the Apple paper explains, the AI systems were given a problem. For humans to solve the same problem, they have to resort to reasoning, so as to understand what the solution was, and then they are able to apply the rule to more complex instances of the same problem; that is, they have learned from the experience. The machines, however, once they solved the problem by computing, could not understand what they were doing, so a simple change in the task would make them collapse; that is, they never learned. The researchers then showed the machines the algorithmic solution in the prompts, but they still collapsed, meaning that computing power and algorithms did not proportionally increase their "intelligence"; they actually made them dumber.
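For context, the algorithmic solutions the researchers handed to the models were of this kind (the Apple paper used puzzles such as Tower of Hanoi). A minimal Python sketch of the standard recursive procedure, offered only to show how mechanically simple the handed-over algorithm is:

```python
def hanoi(n, source, target, spare, moves):
    """Recursively solve Tower of Hanoi: move n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 2**3 - 1 = 7 moves
```

The point of contention is that executing this procedure requires no understanding at all, yet, per the paper, models still collapsed on larger instances even with the procedure spelled out in the prompt.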
Sy Borg wrote: ↑June 24th, 2025, 6:22 pm
Some overly excitable people being excited is not an argument. A profound change is happening. If you don't notice, your employer will at some stage.
It’s an observation that supports the explanation of why many people get AI wrong. It’s not really intelligent, therefore it cannot fulfill their fantasies, but they still cling to the fantasy, because… it’s exciting.
Sy Borg wrote: ↑June 24th, 2025, 6:22 pm
The AI/pocket calculator comparison is mindless.
AIs learn. They are trained. They improve and learn from interactions. Pocket calculators don't learn. You don't train pocket calculators. AI does not just retrieve, it autonomously filters.
Saying that over and over again will not make it true. To learn, you need to reason and understand, and you need to have meaningful experiences. AIs are not capable of that, as proven in tests. To be trained, you must be able to have experiences and learn from them. They can’t, as proven in tests. AI can be programmed to filter, but it can’t make any inference. It doesn’t think, it doesn’t understand:
“Modern AI systems don’t think — they simulate thought. They don’t reason — they statistically model language. They generate fluent responses not because they understand meaning, but because they’ve been fed with billions of examples and programmed to mimic patterns. There is no awareness, no self-reflection, no goals or intentions behind their outputs.”
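The quoted claim, that such systems "statistically model language", can be illustrated with a deliberately tiny toy. This bigram sampler is nothing like a production LLM (the corpus, function names, and seed are all invented for illustration), but it shows text generation driven purely by observed word frequencies, with no meaning involved:

```python
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length, rng=random.Random(0)):
    """Emit words by sampling continuations in proportion to their frequency."""
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:  # dead end: the seed word never had a successor
            break
        # Weighted random choice over observed successors -- pure statistics.
        words.append(rng.choices(list(counts), weights=counts.values())[0])
    return " ".join(words)

print(generate("the", 5))
```

Whether fluency produced this way ever amounts to understanding is exactly what the two sides of this thread dispute.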
Sy Borg wrote: ↑June 24th, 2025, 6:22 pm
Your analogies are senseless. AI's intelligence is obviously specialised. It's not general intelligence, which is why it's not AGI (yet).
AI’s intelligence is not specialized intelligence, because it’s not true intelligence to start with. It becomes irrelevant that you call what they do “specialized”; it would be the same as calling it false intelligence.
Sy Borg wrote: ↑June 24th, 2025, 6:22 pm
You seem confused, thinking AGI is the only possible machine intelligence, akin to the misconception that humans are the only intelligent animals. Many animals display specialised intelligence, responding cleverly in some circumstances and cluelessly in others, eg. dogs and cats failing the mirror test.
But responses and outputs only point to behavior; they don’t point to the process itself. Autonomous behavior, though, can be explained by sentience and other biological processes. Since you have separated intelligence from sentience and living processes, you have to explain where the intelligence in these organisms lies: what is it that they do that an unsentient machine can also do, and that can still be called intelligence in both cases? It can’t be outputs, because calculators also produce outputs and you said they are not intelligent. It can’t be anything that implies awareness, self-reflection, goals or intentions, all of which explain the behaviors of a living entity.
Sy Borg wrote: ↑June 24th, 2025, 6:22 pm
AI's current basic intelligence, like LLMs and chess AI, is highly specialised and limited, and thus easy to underestimate. Companies like DeepSeek and Gemini are already working on AI being able to improve its own code. There is intense competition in this space. At some point, AI intelligence will rise rapidly, far beyond human intelligence.
All of that can be said without resorting to the word “intelligence”. Humans are able to build highly specialized tools, granted. These tools, ever since the invention of the wheel and the spear, have outperformed and will continue to outperform humans, granted. One thing they cannot do is think, reason, understand, and all the rest of what is associated with the true processes of intelligence, not just the mimicry of behaviors.