AI TRAPS AND ILLUSIONS: scientists urge us not to trust artificial intelligence

Many people tend to endow artificial intelligence with superhuman qualities. Yet in working with AI, we risk falling into mental traps and plunging our thinking, society, and science into dangerous illusions. Scientists are therefore urging a closer look at how we view AI and the risks that view creates.
HALLUCINATORY AND OPAQUE
Today, thousands of scientists across a wide variety of research fields are experimenting with artificial intelligence. At the same time, AI bots are increasingly replacing scientists themselves, for example in «self-driving» laboratories, where AI algorithms design and run experiments with the help of robots. Even in social experiments, where only yesterday humans were indispensable, people are being replaced by bots.
But how much can and should scientists trust generative artificial intelligence? Is it safe? More and more experts believe that the benefits of artificial intelligence systems are not as clear-cut as they seem.
ChatGPT tends to «hallucinate», that is, simply to make up things that do not exist in reality. Unfortunately, the inner workings of machine learning systems are opaque, so preventing AI's «delusional fantasies» is not easy.
THE ILLUSION OF SUPERPOWERS: AI «NARROWS THE FOCUS»
The journal Nature recently devoted an extensive article to this problem. Its authors are Lisa Messeri, an anthropologist from Yale University, and Molly Crockett, a cognitive scientist from Princeton University. In it, they discuss the poorly controlled risks that artificial intelligence systems pose to researchers.
The authors point out that people tend to view AI systems as «tools» with superhuman intellectual abilities. To some extent this is true, but only up to a point: for example, when it comes to processing large volumes of complex data at high speed.
However, Messeri and Crockett argue that AI offers scientists not only unique opportunities but also serious limitations that researchers must take into account in their work. Using AI carries the risk of «narrowing the focus» of science, so researchers should not fully trust bots that lead users to believe they understand a concept better than they actually do.
RISKS NEED TO BE CONSIDERED AT THE DEVELOPMENT STAGE
Messeri and Crockett studied over a hundred diverse research texts published over the last five years. The goal was to form a systematic picture of how the scientific community views AI: how scientists perceive, evaluate, and understand the role of AI systems in the scientific process.
They conclude that the vision of AI as a means of empowering humans is far from the whole picture. Those who plan to use AI should objectively assess the risks involved and start accounting for them now, at the development stage of AI tools.
Later, once these applications are integrated into the research process, fixing anything will be much harder. The danger is real, so the authors urge the entire scientific community to be cautious in evaluating what AI can do.
AI HYPOSTASES: ORACLE, ARBITER, QUANT, SURROGATE
This «superhuman» intelligence has quite a few weaknesses, and most of them are distinctly human, which is not surprising given that AI is a product of human thinking. In one vision, which Messeri and Crockett call the «oracle», AI tools are credited with outstanding abilities to read and interpret scholarly texts, a capacity to work with the scientific literature that exceeds that of humans.
The next vision is the «arbiter»: AI systems are perceived as more objective and are believed to evaluate scientific results more impartially than humans. After all, it seems far easier for AI than for us to resist the temptation to select only the facts and expert opinions that confirm our own position.
In the third vision, AI appears to researchers as a «quant», whose analytical capabilities exceed those of humans. In the fourth, AI tools act as a «surrogate», simulating data that would otherwise be too difficult to obtain.
ILLUSIONS OF DEPTH, BREADTH AND OBJECTIVITY
Drawing on data from different areas of scientific knowledge, Messeri and Crockett also articulated the significant risks that follow from the four visions above. The first is the «illusion of depth»: people who rely on an algorithm for information tend to mistake the AI's knowledge for their own, although this is certainly not the case, and their own understanding is far shallower than it seems.
The next risk is that research becomes «skewed» toward the questions AI systems are able to test, a phenomenon Messeri and Crockett call «the illusion of breadth of scientific inquiry». For example, viewing AI as a «surrogate» encourages its widespread use to model human behavior.
However, not everything in human behavior can be modeled, and if researchers prioritize work with AI, their scientific attention begins to neglect the parts of reality that artificial intelligence cannot model.
The third illusion is the «illusion of objectivity». Some people mistakenly believe that AI systems have no point of view of their own and can therefore represent the full diversity of alternative views. This too is far from the truth: AI systems work with the data on which they were trained, and so they internalize all the typical human preferences and biases embedded in that data.
STRATEGIES FOR AVOIDING THE TRAPS OF AI
Messeri and Crockett recommend several strategies for scientists to avoid AI traps. One is to match the role you assign to AI in your research against the four visions described above; this gives you an idea of the specific traps to watch for.
Another is to take a deliberate approach to working with AI by limiting the scope of its use: for example, using AI to save time on data collection is less risky than using it to generate knowledge. The experts insist that these pitfalls should be taken into account by everyone involved in scientific activity: academic institutions, grant organizations, sponsors, scientific journals, and others.
Scientists should not exaggerate what AI can do. According to the authors of the study, artificial intelligence is neither a panacea for all problems nor a universal tool for answering every question. A more adequate view is to treat AI as a choice that carries risks, and those risks should be recognized, evaluated, and taken into account in scientific work.
Original research: Why scientists trust AI too much — and what to do about it