Leading AI researchers have concluded that the current trajectory of AI development is unlikely to lead to artificial general intelligence (AGI), the ambitious goal the industry is striving for, writes Gizmodo.
This is stated in a large-scale report by the Association for the Advancement of Artificial Intelligence (AAAI) on the future of AI research in 2025. The report, which brings together the conclusions of 24 experts and more than 450 surveyed researchers, addresses issues from technical infrastructure to social aspects of AI implementation.
One of the report’s key messages is that the industry is being driven by hype. In a section on the gap between AI perception and reality, 79% of respondents agreed that public perceptions of AI’s capabilities are unrealistic. Another 90% believe that this gap is hindering further research, and 74% say that the direction of research itself is being shaped by that hype.
"Many people are too eager to believe in exaggerated expectations for AI," said MIT researcher Rodney Brooks, who led that section of the report, adding that the Gartner hype cycle is also evident here: excessive enthusiasm is often followed by disappointment.
Artificial general intelligence (AGI) is a hypothetical level of intelligence for machines that can learn and think like humans. This goal is considered the “holy grail” of AI, with the potential to radically change the way we work in a wide range of industries, from travel planning to healthcare and education.
However, 76% of researchers surveyed agreed that simply scaling up current AI approaches will not lead to AGI.
“Overall, the responses suggest a cautious but consistent approach: researchers emphasize safety, ethical governance, fair benefit sharing, and incremental innovation. They advocate collaboration rather than a race to AGI,” the report says.
Henry Kautz, a computer scientist at the University of Virginia who led the report's section on facts and trustworthiness, pointed out how far the field has come, but also how far it still has to go.
"Five years ago, artificial intelligence was mostly limited to narrow tasks," notes Kautz. "Today, it can run chatbots that interact with the public, but even the best models still only give correct answers to half of the test questions."
He added that one promising way forward could be to create teams of AI agents that collaborate with one another and check each other's work for reliability. "This way we can increase trust and reliability," says Kautz.
The report makes clear that the AI community is struggling to address fundamental questions—not just about how to build better systems, but also about how to govern and use them responsibly. While the public debate is often dominated by flashy headlines and corporate announcements, researchers are quietly charting a more cautious course.
The AI boom shows no signs of slowing down, but AAAI’s analysis suggests that for the industry to deliver on its promise, it may need to rethink both its methods and its messaging.
