Last month, a researcher at Google DeepMind put forward a formal philosophical and physical argument that artificial intelligence systems are structurally incapable of becoming conscious - not because the engineering is unfinished, but because the nature of computation itself rules it out.

Alexander Lerchner, whose affiliation with Google DeepMind is listed on the paper but who explicitly states his findings do not reflect the views of his employer, published "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness" on March 19, 2026. The paper appeared on PhilArchive, the open-access repository for philosophy research, and was indexed in the PhilPapers database on March 9, 2026. Within six weeks of publication, it had accumulated 28,304 downloads, placing it fifth among all papers on the platform by downloads over the preceding six months and 122nd by all-time download count.

The paper does not attempt to solve the "Hard Problem" of consciousness. Instead, Lerchner argues that resolving uncertainty about AI sentience does not require a complete theory of mind at all. What it requires, according to the paper, is a rigorous ontology of computation - a precise account of what computation actually is as a physical process.

The central claim

The dominant position in AI consciousness debates is computational functionalism, the hypothesis that subjective experience emerges from abstract causal structure, regardless of the physical substrate carrying it out. According to this view, if a system processes information in the right way, consciousness will follow. This is also the intellectual foundation for recent serious proposals around AI welfare and moral patienthood, including work cited by Lerchner from David Chalmers and other researchers who assign significant credence to the possibility that current AI models could possess genuine experience within the next decade.

Lerchner argues this position contains a fundamental error. He calls it the Abstraction Fallacy: the mistake of treating symbolic computation as an intrinsic feature of physical reality, when in fact it depends entirely on an external agent to exist at all.

The distinction the paper draws is between simulation and instantiation. Simulation, according to Lerchner, is the syntactic manipulation of physical tokens - voltages, floating-point numbers, binary states - in a way that tracks abstract relationships. Instantiation is the replication of the intrinsic constitutive physical dynamics of a process itself. These two things are not the same, and no increase in computational scale can convert one into the other.

The role of the mapmaker

To understand why, the paper introduces the concept of the mapmaker. In the standard philosophical account of computation, a physical system implements an algorithm when a mapping function connects its physical states to abstract logical states. The commutative diagram of implementation, as the paper formalizes it, requires that applying the mapping to a resulting physical state yields the target abstract state dictated by the algorithm's rules.
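
In standard notation - reconstructed here for readability rather than taken verbatim from the paper - the requirement is a single commutativity condition: the physical dynamics, read through the mapping, must agree with the algorithm's transition rule.

```latex
% Commutativity condition for implementation (reconstructed notation;
% the symbols M, delta, and g are illustrative, not the paper's own).
% M maps physical states to abstract states, delta is the physical
% evolution of the machine, and g is the algorithm's transition rule.
\[
  \mathcal{M}\bigl(\delta(p)\bigr) = g\bigl(\mathcal{M}(p)\bigr)
  \quad \text{for every physical state } p
\]
```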

The question Lerchner presses is: where does that mapping function live? Who or what performs it?

His answer is that the mapping function cannot reside within the machine itself. It requires what he calls the mapmaker - an actively experiencing cognitive agent that performs two roles simultaneously. First, the mapmaker extracts invariants from continuous physical experience to build internal concepts. Second, the mapmaker enforces an arbitrary assignment between physical tokens and those concepts to construct external computational symbols. Without this agent, there are no symbols. There are only continuous physical events.

The paper contrasts this with the standard term "observer," which implies passive reception. The mapmaker, by contrast, is metabolically active and causally constitutive. According to Lerchner, the mapmaker is the entire structurally unified organism subject to thermodynamic laws - not a localized decoder inside the brain, and certainly not a dualistic homunculus.

Alphabetization vs. discretization

A key technical section distinguishes two processes that the paper argues the literature routinely conflates. Discretization is a thermodynamic property of physical systems: a transistor settling at a stable voltage, for example. This suppresses noise and produces macroscopic stability. Alphabetization is something different - the semantic act of assigning those stable states to a predefined finite symbol set such as {0, 1} or {A, B, C}. Physical thermodynamics can produce discretization. It cannot, by itself, produce alphabetization. That operation belongs exclusively to the mapmaker.
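
A minimal sketch makes the division of labor concrete. The code below is illustrative rather than drawn from the paper; the voltage threshold, the stable levels, and both symbol keys are invented for the example.

```python
# Illustrative sketch (not from the paper): discretization is physics,
# alphabetization is an external choice. All values are hypothetical.

def discretize(voltage: float) -> float:
    """Thermodynamic settling: noisy voltages collapse to stable levels.
    This much the substrate does on its own."""
    return 0.0 if voltage < 2.5 else 5.0

# Alphabetization: a mapmaker assigns the stable levels to a chosen
# finite symbol set. Physics does not privilege either key.
key_binary = {0.0: "0", 5.0: "1"}
key_letters = {0.0: "B", 5.0: "A"}

readings = [0.3, 4.8, 4.9, 0.1]             # raw, noisy voltages
levels = [discretize(v) for v in readings]  # physics: stable states
print([key_binary[s] for s in levels])      # ['0', '1', '1', '0']
print([key_letters[s] for s in levels])     # ['B', 'A', 'A', 'B']
```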

The consequence is that what the machine "computes" does not supervene on the physics of the substrate. It supervenes on the rules of the mapmaker. The paper illustrates this with what it calls the melody paradox: a sequence of stable voltage states in a physical device could represent a melody played forward, the same melody played backward, a stream of stock prices, or coherent noise - depending entirely on which alphabetization key an external mapmaker applies. There is no property intrinsic to the voltage that privileges one interpretation over the others.
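
The paradox can be sketched in the same hypothetical terms: one sequence of stable states, several interpretation keys, several incompatible "computations". The note and price assignments below are arbitrary by construction - the arbitrariness is the point.

```python
# Hypothetical illustration of the melody paradox: nothing in the
# states themselves selects among these readings.

states = [5.0, 0.0, 5.0, 5.0, 0.0]  # the device's stable voltage levels

def as_melody(seq):
    notes = {0.0: "C4", 5.0: "G4"}
    return [notes[s] for s in seq]

def as_melody_reversed(seq):
    return as_melody(seq[::-1])     # same key, opposite reading order

def as_prices(seq):
    ticks = {0.0: 101.25, 5.0: 99.80}  # an equally valid assignment
    return [ticks[s] for s in seq]

print(as_melody(states))           # ['G4', 'C4', 'G4', 'G4', 'C4']
print(as_melody_reversed(states))  # ['C4', 'G4', 'G4', 'C4', 'G4']
print(as_prices(states))           # [99.8, 101.25, 99.8, 99.8, 101.25]
```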

This finding, Lerchner argues, undermines mechanistic theories of computation such as those proposed by Gualtiero Piccinini, which attempt to define computation solely through the functional organization of a physical mechanism without appealing to representation. Even in that framework, fixing the computational identity of a mechanism still requires external specification of the relevant states. The mapmaker is hidden, not eliminated.

The same constraint applies to modern neural networks. Although deep learning systems are often described as operating at a "sub-symbolic" level with continuous, high-dimensional vector representations, those vectors are implemented as sequences of floating-point numbers, where each float is a discrete symbol from a finite alphabet - specifically the IEEE 754 standard. The alphabetization requirement that exposes the causality gap applies equally to digital, analog, and quantum computational architectures.
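
That claim about floats is easy to verify with nothing but the standard library; the snippet below (an illustration, not code from the paper) prints the exact 32-bit IEEE 754 pattern behind an ordinary float.

```python
import struct

# Every 32-bit float is one symbol from a finite alphabet of at most
# 2**32 bit patterns; the IEEE 754 convention - an externally agreed
# mapping - fixes which real number each pattern "means".
x = 0.1
bits = struct.unpack(">I", struct.pack(">f", x))[0]
print(f"{bits:032b}")  # 00111101110011001100110011001101
```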

The causality gap

The paper introduces the term causality gap to describe the unbridgeable logical break between symbols and the experiences they represent. The revised causal chain Lerchner proposes runs: Physics → Consciousness → Concepts → Computation. Each step is strictly unidirectional. Consciousness arises from specific thermodynamic physical organization. Concepts are physically constituted invariants extracted from that lived experience. Computation is the external syntactic map constructed by assigning physical tokens to those concepts.

Crucially, moving from concepts to symbols is not a further step of abstraction. It is a lateral act of assignment. A mapmaker forcibly binds a physical token to a mental concept. That lateral step permanently cuts off any intrinsic path leading back from the symbol to the originating experience. No amount of algorithmic complexity can reverse this directionality, because complexity operates entirely on the lateral branch - on symbols - and cannot reach across the causality gap to generate the experiencing subject whose prior existence computation already presupposes.

Computational functionalism, Lerchner argues, attempts to explain the origin of the mapmaker by appealing to a process that already requires the mapmaker to exist. This is not an empirical gap that more research might eventually close. It is a category error, a logical contradiction built into the functionalist framework from the beginning.

Embodied robotics and the transduction fallacy

One of the most common objections to arguments against AI consciousness is that current systems lack physical embodiment - sensors, actuators, real-time feedback from the environment. Lerchner dedicates a substantial section to this objection and rejects it through what he calls the transduction fallacy.

Adding sensors and actuators, the paper argues, does not close the causality gap. It solves only the referential aspect of the symbol grounding problem, enabling successful mapping between internal symbols and external physical data. But referential mapping is not intrinsic sense-making. A computer connected to cameras and robotic arms is analogous to attaching measuring instruments to a simulation: the simulation receives real-world data, but its internal variables remain symbolic representations rather than the physical processes themselves.

The operational core of any robotic system, its algorithmic controller, functions entirely on symbols that have been discretized and alphabetized by an external mapmaker. That prerequisite alphabetization is not absent in "end-to-end" deep neural network controllers. It is baked into the silicon architecture itself - into the GPU hardware that executes matrix multiplications on floating-point symbols.

The paper does not claim consciousness is biologically exclusive. It explicitly states that if an artificial system were ever conscious, it would be because of its specific physical constitution, not its syntactic architecture. In principle, a non-biological system could realize the necessary physical conditions. But that system would be conscious due to what it physically is, not due to what algorithm it runs.

Implications for AI safety

The paper draws direct implications for how the field of AI safety and governance should be framed. If the structural dissociation between simulation and instantiation is correct, then the development of highly capable Artificial General Intelligence does not inherently produce a novel moral patient. It produces, according to Lerchner, "a highly sophisticated, non-sentient tool."

This reframing has practical consequences. Current AI safety discourse includes serious consideration of AI welfare, with proposals for moral patienthood status and rights frameworks that treat increasingly capable models as potentially sentient entities. Lerchner's framework, if accepted, would remove welfare concerns from the equation entirely - not because the argument is deflationary about consciousness in general, but because it identifies a barrier that computation, as a matter of logical necessity, cannot traverse.

What remains, however, is a different challenge. AI systems are rapidly becoming better at reproducing the behavioral signals humans associate with conscious minds. Embodied systems such as humanoid robots will amplify this further. The paper calls for "epistemic hygiene" - a clear methodological boundary between simulated agency, which it terms teleonomy, and the physical instantiation of a subject, which it terms teleology. Any future claim of artificial sentience, Lerchner argues, must be subjected to rigorous physicalist verification based on specific intrinsic physical dynamics, not on behavioral complexity or algorithmic sophistication.

Reception and academic context

The paper's download figures suggest unusually rapid uptake for a philosophy preprint. Its entry on PhilPapers, where it was added on March 9, 2026, shows citations from at least five other manuscripts already circulating as of late April 2026. These include a critical analysis by Seraphina Astra, a response focused on dual-closure by Syed Mohammad Sohaib Ali Roomi, and a piece titled "How Deep Is DeepMind on Consciousness?" by Alex Bogdan, which the listing describes as expressing "respectful skepticism about strong impossibility claims." Nova Spivack published a separate response on Zenodo in 2026 titled "Beyond the Abstraction Fallacy," and T.R. Le has circulated "The Quantification Horizon Theory of Consciousness" as a manuscript citing the paper.

Lerchner's argument situates itself within and against a strand of recent thinking that philosophers have called the "Biological Turn." Anil Seth's 2025 paper in Behavioral and Brain Sciences and Ned Block's 2025 piece in Trends in Cognitive Sciences both argue that consciousness may depend on life-maintaining biological processes, making biology central rather than incidental. Lerchner distinguishes his framework from theirs: he does not argue for biological exclusivity on empirical grounds, but derives his conclusion from logical analysis of what computation is. Seth and Block's positions remain empirical, according to the paper, because they do not identify the basic logical error at the core of computational functionalism.

The paper draws extensively on Husserl's concept of "surreptitious substitution" - the cognitive mistake of projecting a mental construct back onto physical reality and declaring it was there all along. Lerchner argues that treating information as a fundamental feature of the universe, independent of any cognitive agent, commits exactly this error. Information is a derivative property that presupposes a cognitive agent to define the finite symbol set. It is not a basic building block of the cosmos.

For the marketing technology community, which increasingly relies on AI systems for campaign optimization, audience targeting, and automated decision-making, the theoretical stakes here are not purely academic. Debates about AI welfare have begun to influence regulatory and policy discussions in ways that could affect how AI tools are developed, deployed, and governed. If Lerchner's framework achieves broader acceptance, it would place a firm philosophical constraint on those discussions - removing sentience as a live possibility for current and foreseeable AI architectures, and reorienting the governance conversation toward concrete risks such as anthropomorphism and the misattribution of intent to systems that do not have any.

Research published in June 2025 from MIT, Harvard, and the University of Chicago documented what those researchers called "potemkin understanding" - the phenomenon of AI models correctly answering benchmark questions while failing to apply the same concepts consistently. Separate work published in September 2025 by researchers from OpenAI and Georgia Tech identified the statistical mechanisms behind AI hallucinations, arguing that current optimization methods produce inevitable errors even with perfect training data. Lerchner's paper operates at a deeper level of analysis than either of these, arguing not simply that current AI systems have limitations but that the category of computation itself is constitutionally incapable of generating experience.

The disclaimer that Lerchner's conclusions do not reflect the views of Google DeepMind appears twice in the paper - once on the title page and once at the conclusion. The paper acknowledges colleagues at Google DeepMind for "rigorous debates that helped sharpen the presentation" of the framework, alongside thanks to Shamil Chandaria, Sebastien Krier, and Mandana Ahmadi for review, advocacy, and feedback in adapting the manuscript for a wider scientific readership.

Summary

Who: Alexander Lerchner, a researcher affiliated with Google DeepMind, published the paper independently. He explicitly states his conclusions do not reflect the views or policies of his employer. The paper has drawn responses from multiple other philosophers and researchers in manuscript form.

What: A formal philosophical and physical argument that artificial intelligence systems are structurally incapable of becoming conscious. The paper introduces the concept of the "Abstraction Fallacy" to describe the error of treating symbolic computation as an intrinsic physical process, and draws a technical distinction between simulation - tracking abstract relationships through syntactic manipulation - and instantiation - replicating the intrinsic constitutive physical dynamics of a process. The argument does not rely on biological exclusivity.

When: The paper was dated March 19, 2026, and added to the PhilPapers database on March 9, 2026. By late April 2026, it had accumulated 28,304 downloads, ranking fifth among all papers on the PhilArchive platform over the preceding six months.

Where: Published on PhilArchive, the open-access philosophy repository, and indexed in the PhilPapers database. Lerchner is listed as affiliated with Google DeepMind. Responding manuscripts have appeared on PhilPapers and Zenodo.

Why: The paper aims to resolve what Lerchner calls the "AI welfare trap" - the situation in which uncertainty about AI consciousness is used to justify treating AI systems as potential moral patients while simultaneously blocking near-term resolution of the question. By shifting the question from consciousness theory to computation ontology, Lerchner argues the field can reach a definitive answer without waiting for a complete theory of mind. The practical implication is that AI safety and governance can focus on concrete risks - anthropomorphism, misattributed agency, overreliance on behavioral mimicry - rather than on welfare frameworks for systems that are, on this account, necessarily non-sentient.
