FACETS OF CONSCIOUSNESS: Scientists Debate Whether Artificial Intelligence Can Feel

Jonathan Birch is a professor of philosophy at the London School of Economics and Political Science. Photo: academicspeakersbureau.com
Are we the only intelligent form of life on the planet? Answering this question is not easy. Growing evidence points to the remarkable cognitive abilities of many species of birds, fish, and mammals. Recently, another contender has joined the debate: artificial intelligence. How much of what AI does can be considered genuine "intelligence"? How does its "reasoning" differ from what we observe in humans and animals? British philosopher Jonathan Birch has attempted to answer these and other questions.
THE HUMANITY TEST
Jonathan Birch, a professor at the London School of Economics and Political Science, specializes in the evolution of social behavior. In his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI, published by Oxford University Press, the philosopher raises a range of seemingly unusual questions.
For instance, can artificial intelligence (AI) experience stress? Do lobsters suffer in a pot as they boil? Can a 12-week-old fetus feel pain? These questions cannot be easily dismissed, as the answers shape our understanding of what we call humanity and the very essence of our civilization.
NOT ONLY HUMANS CAN FEEL
If an entity possesses "sentience," it deserves a certain level of consideration and protection. Birch defines sentience as the capacity for experiences that feel good or bad. Philosophical and religious traditions may differ in their views on why sentience matters to us.
However, by attributing sentience to an entity, we immediately enter an ethical field and confront the challenge of distinguishing between good and evil. We have no reason to assume that this issue is exclusive to humans. It could apply to farm animals, cellular organisms, insects, or robots—any beings in which we can detect sentience.
DIFFERENT BEINGS, DIFFERENT MODELS OF REASONING
The challenge lies in establishing this. How do we determine whether something is sentient? There is currently no settled scientific or philosophical account of sentience. Instead, there are numerous fundamental disagreements, and interpretations of the experimental data vary widely.
Systematic research into the cognitive abilities of many of these beings, whether animals or AI systems, is simply not being conducted. Measurement is a further challenge. The behavioral patterns and brain activity of mammals may differ significantly from those of gastropod mollusks, so the tests for sentience must differ as well. And how do we assess AI systems, which have no brains and no bodily expressions of feeling?
Birch suggests not waiting for this uncertainty to resolve itself over time, but taking a proactive approach: developing a procedure that triggers two processes as soon as there are initial signs that a being may be sentient.
METACONSENSUS AND THE SENTIENCE TEST
The first process involves experts assessing an entity's potential: how likely it is, in principle, to be sentient. The goal is not to reach a consensus. Insisting on consensus here could be unjust, as it could condemn a sentient being to prolonged suffering. Instead, Birch proposes a "scientific" meta-consensus that would trigger protection.
By this, he means treating sentience not as something conclusively measured but as a "plausible possibility" that even skeptics can accept. This judgment of plausibility can rest on evidence and coherent theoretical arguments. Where such meta-consensus is lacking, beings may either be prioritized for further research or set aside as non-sentient.
LARGE LANGUAGE MODELS — A DIFFERENT KIND OF INTELLIGENCE?
Following the expert assessment described above, a second process begins: inclusive, informed civic commissions develop protective policies for sentience candidates. These policies should be proportionate to the risks involved if the candidate really is sentient, take diverse values and trade-offs into account, and be revised as new evidence emerges.
According to Birch, three areas challenge our definitions of sentience. The first is the human brain (people with disorders of consciousness, embryos, neural organoids, and synthetic brain models). The second is the animal world (mammals, fish, birds, mollusks, insects, worms, and spiders). Finally, the third area is artificial intelligence (large language models, or LLMs).
DEVELOPING NEW CRITERIA
Each of these three areas presents unresolved issues surrounded by philosophical and scientific debate. It is fair to say that we know very little about the minds of the beings around us. When studying AI, for example, we face the challenge of devising tests for sentience. LLMs generate text about how they "feel".
However, they do this not because they genuinely feel anything, but because the model is rewarded for imitating sentience. In such cases, behavioral markers of sentience are clearly inadequate. Instead, Birch suggests beginning the search for alternative "deep computational markers" of sentience.
WHERE ARE THE BOUNDARIES OF INTELLIGENCE?
In his review of Birch's book, Jonathan Kimmelman, a bioethicist at McGill University in Montreal, raises a valid question: why not consider the philosopher's problem in an even broader context? He recalls how, in 1995, then-President Bill Clinton described the United States as a society in a state of "despair."
This suggests that sentience, or at least a "mood," can be attributed not only to a person but also to a nation. The same can be said of other collective entities: bee colonies, schools of fish, corporations, and governments. Could they also be described as having a kind of consciousness? What if we interpret the behavior of large ecosystems, or even the entire planet Earth, in this way?
PROTECTING SENTIENCE: A COMPLEX MATTER
Kimmelman also points to the challenge of "proportionality in protective measures." Balancing the interests of all sentient beings is extremely difficult: people are often reluctant to sacrifice their own interests for the benefit of other sentient beings, such as farm animals.
Can one form or intensity of feeling take precedence over another? Does a feeling gain greater moral weight when combined with other attributes, such as intelligence? Can sentient and intelligent beings claim more comprehensive protection than sentient but non-intelligent ones?
Birch’s book raises numerous questions that are unlikely to have clear answers today. However, sooner or later, science and philosophy will have to address them to satisfy the intellectual hunger created by this uncertainty.