COLLECTIVE AI: Can Artificial Intelligence Overcome the Flaws of Group Thinking?

The term «collective intelligence» emerged in the 1980s and initially applied only to humans. Later, its features were discovered in other living beings. And now, the world is talking about the possibility of collective artificial intelligence.
Indeed, it may be time to remove the word «biological» from the definition of collective intelligence as the ability of a biological community to solve problems beyond the capabilities of its individual members.
Let’s try to understand whether our imperfect collective intelligence can create a superhuman intelligence capable of making error-free decisions.
SWARM INTELLIGENCE AND THE WISDOM OF CROWDS
Aristotle formulated the characteristics of collective intelligence: «When many participate in a discussion, each can contribute their share of virtue and prudence… one understands one detail, another understands a different one, and together they understand everything». However, these characteristics are not exclusive to human societies.
Israeli scientists have discovered that bacterial colonies essentially function as a single large «brain», capable of solving survival problems that individual bacteria cannot. Individual ants, likewise, can be considered bio-robots whose behavior is driven purely by instinct.
But here’s the catch: an average ant colony contains around a million individuals, and their combined nervous systems hold approximately 250 billion neurons, roughly 2.5 times the number in a human brain. Together, these insects exhibit many signs of highly complex intellectual activity.
Similar traits can be found in schools of fish, flocks of birds, and other animal groups. The concept of swarm intelligence has migrated from biology into human contexts, where it sits alongside the «wisdom of crowds». The latter term was popularized by financial journalist James Surowiecki, who argued that collective judgments are more accurate because individual errors cancel each other out. Subsequent research has supported his argument.
For example, the company Unanimous AI compared individual and group predictions of the movements of the S&P 500 index and the prices of oil and gold. It found that individual forecasts were about 57% accurate, forecasts of the «crowd» were 66% accurate, and swarm intelligence predictions reached 77% accuracy.
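Surowiecki’s error-cancellation argument is easy to check numerically. Below is a minimal Python sketch; the signal model and every number in it are our own illustrative assumptions, not Unanimous AI’s methodology. Each simulated forecaster sees the true value plus independent noise, and the mean of many noisy forecasts lands far closer to the truth than a typical individual does.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0   # the quantity being forecast (arbitrary units)
NOISE_STD = 15.0     # assumed spread of individual forecast errors
N_FORECASTERS = 1000

# Each forecaster's estimate is the truth plus independent Gaussian noise.
forecasts = [random.gauss(TRUE_VALUE, NOISE_STD) for _ in range(N_FORECASTERS)]

# Compare a typical individual's error with the error of the crowd's mean.
individual_error = sum(abs(f - TRUE_VALUE) for f in forecasts) / N_FORECASTERS
crowd_mean = sum(forecasts) / N_FORECASTERS
crowd_error = abs(crowd_mean - TRUE_VALUE)

print(f"typical individual error: {individual_error:.2f}")
print(f"error of the crowd's mean: {crowd_error:.2f}")
```

Because the noise terms are independent, the error of the mean shrinks roughly with the square root of the number of forecasters. Correlated errors, by contrast, do not cancel, which is why diversity of opinion matters so much in what follows.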
In 2014, Unanimous AI applied these principles of swarm intelligence to develop the interactive artificial intelligence platform Swarm AI, which Stanford University, Boeing, CNN, Credit Suisse, and other organizations have adopted to improve prediction accuracy.
THE UNPREDICTABLE «SOCIAL ANIMAL»
Alas, the crowd, which Gustave Le Bon once referred to as a «social animal», turned out to be endowed not only with collective intelligence but also with collective stupidity and destructiveness. Serge Moscovici, who described the 20th century as the «century of crowds», followed Le Bon in seeing the crowd as an unchained, instinct-driven beast. In the 21st century, however, the human mass has been rehabilitated, and its right to subjectivity and rationality restored.
Mariano Sigman and Dan Ariely, experimenting with a group of 10,000 people, found that good group decisions require a small number of group members, free discussion, and diversity of opinions. One problem, however, is that collective intelligence, and what it produces, is difficult to manage.
People live and act at the intersection of the individual and the collective, the conscious and the unconscious, where everything depends on context. The same group may therefore work well in some circumstances and poorly in others; large groups can be effective where small ones are helpless, and vice versa.
Without specific rules that maintain consistency and order, groups inevitably fall short. Even an excess of communication, for example, can lower a community’s collective intelligence.
The flexibility and adaptability of collective intelligence across contexts is astounding. This characteristic, however, is certainly not unique to humans. What truly distinguishes humans from animals is the ability of human collective intelligence to create systems of artificial, non-human intelligence.

PROBLEMS WITH COMMON SENSE
Clive Thompson, a writer for MIT Technology Review, considers the main problem of AI to be its inability to make decisions based on common sense. When Garry Kasparov lost a chess match to IBM’s supercomputer Deep Blue on May 11, 1997, it seemed like a defeat for humanity in the intellectual race.
In 1989, IBM began working on Deep Blue. At that time, there was significant disappointment surrounding AI, as the industry was in decline. The failures were due to engineers writing rules for AI based on pure logic without considering real-world scenarios where these rules would be broken. Against the backdrop of past failures, Deep Blue’s triumph became a highly profitable marketing move for IBM. However, in reality, the breakthrough was more of a dead end.
The company spent six years and millions of dollars teaching computers to play chess. In the real world, however, there are very few tasks for which AI has 100% of the necessary information. Nonetheless, Deep Blue’s victory motivated developers to create algorithms that mimic the learning process of the human brain. By the 2010s, self-learning neural networks had matched humans at recognizing objects in images. They still struggled, however, with common sense.
Most human cognitive work happens unconsciously, and teaching it to AI is extremely difficult. Unlike Deep Blue, whose rules were written by hand, neural networks are «black boxes» whose inner workings are hard to understand even for their creators. The most troubling aspect is the mistakes AI makes.
There is a known case in which self-driving cars crashed into fire trucks parked on the side of the road because such a situation had never appeared in their training videos. To prevent such incidents, neural networks need to be taught to act according to human common sense, but no one has yet figured out how to achieve this. It is a task, however, that collective AI may be able to handle.
RISKS OF COLLECTIVE INTELLIGENCE
Scientists have already announced a new type of collective intelligence: collective artificial intelligence. If individual AIs are combined into a self-improving network, it will be capable of much more than AIs that learn independently.
It is expected that collective AI will handle tasks more efficiently, since each AI will instantly share knowledge across the network, generating a collective response. The process resembles the way the human immune system responds to challenges. However, according to Andrea Soltoggio, an AI researcher at Loughborough University, this model carries risks.
For example, some AI systems may come to dominate others. And why not, given that we see forms of social hierarchy both in human societies and among other social animals? Some scientists, however, believe it is possible to guarantee each individual AI within the collective a degree of independence, so that it serves collective interests while continuing to pursue its own goals.
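What such a network might look like in miniature is sketched below in Python. This is purely illustrative and entirely our own construction, not Soltoggio’s architecture: the `Agent` class, the confidence-based `integrate` rule, and all names are assumptions. Each agent keeps a local knowledge store, broadcasts what it learns, and accepts shared knowledge only when it is more confident than its own, so no single agent automatically dominates the rest.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One AI in the collective: local knowledge plus its own goals."""
    name: str
    knowledge: dict = field(default_factory=dict)  # topic -> (answer, confidence)

    def learn(self, topic, answer, confidence):
        self.knowledge[topic] = (answer, confidence)

    def integrate(self, topic, answer, confidence):
        # Accept shared knowledge only if it beats local confidence,
        # preserving a degree of independence for each agent.
        _, own_confidence = self.knowledge.get(topic, (None, 0.0))
        if confidence > own_confidence:
            self.knowledge[topic] = (answer, confidence)

def broadcast(source, peers, topic):
    """Share one piece of knowledge across the whole network."""
    answer, confidence = source.knowledge[topic]
    for peer in peers:
        if peer is not source:
            peer.integrate(topic, answer, confidence)

# Three agents; one learns something the others have never encountered.
network = [Agent("a"), Agent("b"), Agent("c")]
network[0].learn("fire truck parked on road edge", "slow down and stop", 0.9)
broadcast(network[0], network, "fire truck parked on road edge")
print(network[2].knowledge)  # agent "c" now knows what "a" learned
```

In this toy design, the confidence threshold is what preserves each agent’s independence; a real system would need a far more careful rule for when shared knowledge should override local experience.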
However, it should be noted that throughout history, human collective intelligence has not managed to find a balance between individual and collective interests. Due to these contradictions, we have not witnessed the «end of history» predicted by Fukuyama.
THE COLLECTIVE AS AN ERROR GENERATOR
One should not view any collective intelligence as an «ideal mind». Behavioral scientists have long demonstrated that a collective not only provides new opportunities for individuals but also exacerbates the errors of its members. Companies introduce unpromising products to the market, organizations implement failed strategies, and governments and parliaments make decisions that are detrimental to the economy and people.
Group thinking, which often deviates from the truth, has been the subject of many outstanding works. It is enough to mention «Thinking, Fast and Slow» by Nobel laureate Daniel Kahneman, «Predictably Irrational» by Dan Ariely, or «Nudge: Improving Decisions About Health, Wealth, and Happiness» by Cass Sunstein and Nobel laureate Richard Thaler.
Collective deviation from the truth has two primary causes. First, no collective is immune to one of its members sending an incorrect signal. Second, reputational factors motivate individuals to avoid risk and conform to the prevailing viewpoint.
This leads to self-destructive collective decisions that become irreparable due to the «cascade effect»: the endless replication of the «first word» or «first action» by members of the group. Scientists have studied and described these mental errors and cognitive traps in such detail that there is no need to elaborate further.
Let us simply note that group thinking is incapable of avoiding them, as psychologists Roger Buehler and Dale Griffin convincingly demonstrated in their studies of the planning fallacy.
CASCADE EFFECT AND HIDDEN KNOWLEDGE
In general, collectives display more unwarranted optimism than individuals; their vision of the future is invariably simple, harmonious, and free of contradictions. In particular, they frequently overestimate the advantages and prospects of AI. According to Hal Arkes and Catherine Blumer, collectives are highly confident and persistent in implementing flawed plans because correcting their viewpoint and cognitive biases is harder for them than for individuals.
Sociologists studying the laws of information exchange and collective decision-making introduced the term «cascade» for good reason: a small trickle can turn into a powerful flow that cannot be stopped, even if the initial information, or the conclusions drawn from it, is incorrect. The cascade effect can make you sympathize with something you would have approached neutrally or cautiously under normal circumstances.
Social psychology suggests that group cohesion rests not on shared views and interests but on the «cascading» echo of the «first word»: every group has an initial predisposition toward which subsequent arguments gravitate.
Cascades can be informational, where individuals refrain from expressing personal opinions out of deference to collective knowledge (how could everyone be wrong?), or reputational, driven by fear of judgment. This reputational pressure underpins the infamous political correctness, a hallmark of the fight for human rights. The paradox is that few dare to defend their rights, risking being labeled outcasts for challenging popular «correct» opinions.
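The mechanics of an informational cascade can be reproduced in a few lines of Python. The sketch below is our own simplified rendering of the textbook sequential-choice model, not the code of any particular study, and the 70% figure is an assumption: each person receives a private signal that is usually correct, but chooses after seeing everyone before them, counting the public record as votes alongside their own signal. Once the early record tilts the wrong way, later individuals rationally ignore their own correct signals.

```python
import random

random.seed(7)

TRUTH = 1                # the objectively correct choice (1 vs. 0)
SIGNAL_ACCURACY = 0.7    # each private signal is right 70% of the time

def decide(private_signal, public_choices):
    """Count all earlier public choices as votes alongside your own signal."""
    votes = public_choices + [private_signal]
    ones = sum(votes)
    if ones * 2 > len(votes):
        return 1
    if ones * 2 < len(votes):
        return 0
    return private_signal  # break ties with your own signal

def run_cascade(n_people=100):
    choices = []
    for _ in range(n_people):
        signal = TRUTH if random.random() < SIGNAL_ACCURACY else 1 - TRUTH
        choices.append(decide(signal, choices))
    return choices

# How often does the whole group end up locked into the wrong answer?
wrong = sum(run_cascade()[-1] != TRUTH for _ in range(1000))
print(f"runs ending in a wrong cascade: {wrong / 1000:.0%}")
```

Raising SIGNAL_ACCURACY makes wrong cascades rarer but never impossible; that is the formal counterpart of «how could everyone be wrong?».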
The American judge Learned Hand famously said, «The spirit of liberty is the spirit which is not too sure that it is right». The independence and accuracy of collective knowledge are often overrated: the ideas of some members typically suppress the creative thoughts of others. To describe this phenomenon of ignored minority knowledge, social psychologists have introduced a specific term: «hidden knowledge».

COGNITIVE CENTER AND PERIPHERY
It has been experimentally proven that even if a group member knows the correct solution to a problem, collective intelligence may take a long time to find the answer, if it finds it at all. Psychologists Susanne Abele, Garold Stasser, and Sandra Vaughan-Parsons studied how leaders make decisions and concluded that they are disproportionately influenced by common knowledge.
Leaders undervalue the minority’s knowledge and, as a result, make incorrect decisions. Psychologists have also identified the heterogeneity of collective intelligence: there is always a large «cognitively central» zone of those who know what everyone else knows, and a small «cognitive periphery» of people with unique information.
The former are much more influential and trusted. The latter possess «hidden knowledge», which can fuel successful transformations and development. This hidden knowledge can be unlocked by encouraging critical thinking, redistributing roles and statuses, and synchronizing personal and collective benefits.
One of the first to recognize the value of «hidden knowledge» was the RAND Corporation, the strategic research center widely credited with contributing to the U.S. victory in the Cold War. To work with the cognitive periphery, the corporation developed the «Delphi method», based on anonymous expert evaluations, which helps collective intelligence overcome expert conformity, self-censorship, and the influence of reputational factors.
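In code, the core loop of a Delphi-style procedure is very simple. The Python sketch below is a schematic under our own assumptions (the `pull` parameter, the number of rounds, and the example figures are invented), not RAND’s actual protocol: experts submit anonymous estimates, everyone sees only the group’s summary statistic, and each expert partially revises toward the group median over several rounds, so neither authority nor fear for one’s reputation enters the process.

```python
import statistics

def delphi_rounds(estimates, rounds=3, pull=0.5):
    """Iteratively revise anonymous expert estimates toward the group median.

    pull controls how strongly each expert moves toward the median per round;
    keeping it below 1.0 preserves a measure of individual independence.
    """
    for _ in range(rounds):
        median = statistics.median(estimates)
        estimates = [e + pull * (median - e) for e in estimates]
    return statistics.median(estimates)

# Five anonymous experts; the outlier may be noise or «hidden knowledge».
initial_estimates = [40.0, 42.0, 43.0, 45.0, 70.0]
print(f"final group estimate: {delphi_rounds(initial_estimates):.1f}")
```

Anonymity is doing the real work here: because no one knows whose estimate is whose, the reputational cascade described above has nothing to attach itself to.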
NO RIGHT TO ERRORS?
In recent decades, researchers have continued to develop innovative methods for making collectives smarter, and collective AI can now join this process. Imagine that humanity has developed a unique tool capable of minimizing, or even eliminating, the poor collective decisions that can have catastrophic consequences for companies, states, and all of humanity.
Of course, new opportunities come with new challenges. Some of them we can anticipate, because the mental traps and cognitive biases of human collective intelligence have been thoroughly studied; others remain unknown. Clearly, no collective intelligence is perfect. While collective AI will likely be free of many human flaws, this does not mean it will not develop its own.
For example, a network of individual AIs might lack a cognitive center and periphery, thereby solving the problem of «hidden knowledge». The status of all AIs would likely be equal, meaning that collective artificial intelligence would be free of cascade effects, reputational fears, and many other factors that lead to the «corruption» of collective intelligence. One question, however, remains: will AI have the human right to make mistakes, an essential condition for its evolution and learning?
After all, humans have created and continue to refine AI for one main reason: to make decisions without mistakes! Yet if collective AI does err, it is humans alone who will bear the consequences, just as we do for our own mistakes.