SUPERINTELLIGENCE INSTEAD OF THE HOLY SPIRIT: a new hero emerges in the philosophy of history

Albert Einstein asserted, «All religions, arts, and sciences are branches of the same tree». Since interest in the concept of the Trinity declined during the Enlightenment, culture in the 20th and 21st centuries has increasingly been perceived as a triune synthesis of science, art, and religion.
We invite you to consider the philosophy of AI as an embodiment of the triadic archetype fundamental to knowledge and culture. Is the «scientific-religious» idea of successive eras of Weak AI, Strong AI, and Superintelligence purely coincidental in contemporary culture?
IF INTELLIGENCE, THEN ONLY HUMAN
The term «artificial intelligence» was proposed in 1956 by John McCarthy at the Dartmouth Conference. However, the concept of computational machine intelligence was first discussed by English mathematician Alan Turing, a key figure in breaking German codes during WWII and one of the founding fathers of IT and AI.
Turing laid the groundwork for theoretical computer science, steering scientific thought towards artificial intelligence.
He introduced the Turing Test, suggesting that a system should be considered intelligent if it could communicate indistinguishably from a human. Since then, computer science has aimed to mimic and surpass human intellectual capabilities.
THE GOLDEN TWENTY YEARS OF AI
Between the 1950s and 1970s, the first barriers separating human and artificial intelligence fell. The first AI program was created in 1951 by British scientist Christopher Strachey, a pioneer in programming languages. AI quickly competed with humans in checkers and chess, learning to predict moves.
In 1965, Joseph Weizenbaum from MIT developed «Eliza», a predecessor to Siri. In 1973, the idea of a driverless vehicle became a reality with the «Stanford Cart», the first computer-controlled car.
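«Eliza» worked not by understanding language but by matching keywords in the user's input and echoing back canned transformations. The following is a minimal sketch of that pattern-matching idea; the rules and responses here are hypothetical illustrations, not Weizenbaum's original script.

```python
import re

# Hypothetical Eliza-style rules: a keyword pattern and a response template
# that reflects the captured words back at the user.
RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."


def respond(text: str) -> str:
    """Return the first matching rule's response, or a neutral fallback."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

For example, `respond("I am sad")` yields «How long have you been sad?» — the illusion of attention, produced by a handful of regular expressions.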
As we can see, prototypes of many AI wonders that are now part of our everyday life were created during this «golden twenty years». Essentially, all the main ideas of a world where humans coexist with AI were conceived back then. Today, scientific thought only «clarifies» and modernizes these discoveries.
VODKA IS GOOD, BUT THE MEAT IS SPOILED
Unfortunately, scientific progress does not follow a straight line toward the desired goal. It encounters dead ends, setbacks, and periods of stagnation, followed by rapid leaps forward. The 1970s brought disappointment in AI to the scientific world and investors, as reality did not meet very high expectations. Many of these expectations had been fueled by Noam Chomsky’s work on translation algorithms.
Over ten years of effort, machine-translation programs failed to render a test phrase from the Gospel accurately: «the spirit is willing, but the flesh is weak» came back as «vodka is good, but the meat is spoiled». However, the mid-90s saw another boom in digital information and AI, which continues today and has significantly improved AI’s translation capabilities.
THE PHILOSOPHY OF HISTORY HAS A NEW HERO
Progress happens in leaps, but it is inexorable: some intellectual abilities of artificial systems, once unique to humans, no longer surprise anyone. These successes likely inflate expectations that AI may, at some point, fail to meet.
However, patience is vital: a stall does not mean the era of strong AI, equal to human intellect, is fundamentally impossible; it simply requires more time. Meanwhile, AI is becoming the main character in archetypal representations of history, fitting into the cultural narrative rather than being an anomaly.
THREE TECHNOLOGICAL PRINCIPLES AND THREE TYPES OF AI
Today, AI progress is based on three critical technological principles: machine learning algorithms, neural networks modeled after human nerve cells, and deep learning designed to find patterns in vast data sets.
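The three principles above can be illustrated in miniature: a single artificial «neuron» (weights plus an activation function), trained by a machine-learning algorithm (gradient descent) to find a pattern, here logical AND, in a small data set. This is a toy sketch only; deep learning stacks many such neurons into layered networks and trains them on vastly larger data.

```python
import math
import random

def sigmoid(z: float) -> float:
    """Smooth activation function, loosely inspired by a nerve cell's firing."""
    return 1.0 / (1.0 + math.exp(-z))

# Tiny "data set": the truth table of logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # connection weights
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

# Machine learning: repeated passes over the data, nudging the weights
# downhill along the gradient of the squared prediction error.
for _ in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        grad = (y - target) * y * (1 - y)  # chain rule for the sigmoid
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

def predict(x1: int, x2: int) -> int:
    """Round the neuron's output to a 0/1 decision."""
    return round(sigmoid(w[0] * x1 + w[1] * x2 + b))
```

After training, `predict` reproduces the AND pattern it was shown, having «learned» it from examples rather than being explicitly programmed with the rule.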
AI systems are used in various fields, such as the Internet, banking, logistics, transport, medicine, security, industry, and agriculture. They are primarily Narrow AI systems with a practical focus.
The second type of AI, General AI, is on the verge of being created. This AI matches human cognitive abilities, understanding information rather than just processing it.
The third type is Superintelligence.
PEEKING BEYOND THE HORIZON?
Strong AI should be self-aware, capable of self-learning, setting its own goals, and ultimately thinking, feeling, and communicating like a human. Siri, today’s virtual assistants, and chatbots fall far short of this, but scientific and civilizational progress in this direction seems unstoppable.
Can we look beyond this horizon? The philosophy of AI, which aligns with the philosophy of history, suggests we can, but only up to the creation of Superintelligence. What lies beyond remains unknown, venturing into the realms of singularity, utopias, dystopias, and various scientific and speculative fiction.
TRIADIC ARCHETYPE
The historiographical and cultural potential of the «image of the future» featuring AI is virtually inexhaustible. This concept aligns seamlessly with the teachings about the ultimate state of the world and humanity, which existed long before AI’s advent.
Joachim of Fiore, a key figure in European culture whom Dante mentions in his «Divine Comedy», was famed for his prophetic visions, including a predicted apocalypse in 1260. Despite the world not ending, his progressive tripartite historiographical scheme, in which the Holy Trinity successively manifests in history, profoundly influenced thinkers such as Norman Cohn.
The idea of triadic structures is seen across various domains, including Eastern philosophy, world religions, and the works of philosophers like Plato, Proclus, Comenius, Vico, Fichte, Hegel, Schelling, and Spengler. Today, culture increasingly recognizes a trinitarian synthesis of science, art, and religion. This systemic embodiment of the triadic archetype is evident in mathematics, biology, physics, art, and, naturally, in AI.
SUPERINTELLIGENCE INSTEAD OF THE HOLY SPIRIT?
Joachim of Fiore spoke of successive world states: the era of the Father, the era of the Son, and the era of the Holy Spirit. The Church condemned his teachings at the Fourth Lateran Council in 1215, deeming them an «Antichrist deception» and a «falsification of the future kingdom» because, according to the Church, messianic hope could only be fulfilled beyond history.
The notion of artificial weak, strong, and Superintelligence typologically mirrors these concepts. Despite its «scientific» nature, this idea remains within a religious paradigm. How it is evaluated depends on your worldview and beliefs. Thus, among AI theorists, there are three groups of «believers» aligned with this triadic idea.
THREE GROUPS OF «BELIEVERS»
The first group believes that within the next 50 years, scientists will develop highly advanced AI with human-like consciousness. This group includes optimists at companies like Google, OpenCog, and Microsoft, who significantly contribute to AI development.
Trevor Sands from Lockheed Martin, associated with the second group, expects substantial AI advancements in 5–15 years but doubts AI will surpass human consciousness and versatility until neural networks match the human brain’s 86 billion neurons. Current technology caps at one million artificial neurons.
The third group predicts that after reaching a technological «plateau», AI progress will stagnate for decades, similar to patterns in electronics and aerospace, where significant innovations are rare and incremental improvements are challenging.
XXX
The future will determine who among these groups is correct. For now, we can only marvel at Joachim of Fiore’s visions of the final kingdom of the Third Testament, where humans will have spiritual bodies free from physical needs, leading to ultimate freedom and the obsolescence of all authority.
Isn’t this reminiscent of singularity futurology and the era of Superintelligence? However, even if human consciousness learns to exist in virtual reality apart from the body, there are doubts about humanity reclaiming a «lost paradise» within historical time. Stephen Hawking warned that computer goals might not align with human objectives.