To the first question in the title of this topic, “Is AI ‘intelligent’?”, the answer is: intelligence can be defined in so many ways that the question can be answered both affirmatively and negatively. However, if intelligence is defined in the terms used by the key stakeholders of the AI field, then no, AI is evidently not intelligent. Those stakeholders define intelligence using human intelligence as the frame of reference.
The second question in the title of this topic: “What is ‘intelligence’ anyway?”, is implicitly answered in the previous paragraph.
What are, for me, the most important implications of the above?
First, we have been deceived by the tech lords, and by the media echo chamber that amplifies their narratives, about what they have actually accomplished. In 2023, hundreds of “AI luminaries” signed an open letter warning that artificial intelligence poses a serious risk of human extinction. But no, no singularity is in progress. Machines are not going to become an autonomous social force that will sooner or later take over the world. Whatever dangers arise from the development of AI come from the uses humans make of this technology, as with every technology of the past, all of which have been instruments of human goals.
Why would the tech lords lie, or distort the facts, about this? Several hypotheses:
1. Such announcements increase shareholder value.
2. They are themselves deceived by the narratives of futurologists who have bought into the idea that scaled-up computation can lead to the emergence of consciousness.
3. Both of the above.
Second implication: the computational theory of mind has been proven wrong again. Since traditional computation did not produce the kind of autonomy that could artificially emulate human-level agency and intelligence, the key stakeholders in the AI business were betting that the new language models would break the barrier. There was much hype, but now that the waters have settled and researchers are examining the matter calmly and reasonably, LLMs and LRMs are not delivering as expected; in fact, they are failing spectacularly on the key parameters of human-level intelligence.
The third implication is that these developments are what brought about the distinction between “narrow AI” and “strong AI”, or AI and AGI. That is, the goalposts are simply being moved: a way to keep expectations running and to feed the hype.