I, G(Zombie)


There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination.

— Daniel Dennett, Darwin’s Dangerous Idea (1995)


INTRODUCTION

Preface

This document is shared publicly as an initial draft to invite constructive feedback, insights, and reflections.

Abstract

This essay introduces Eliminative Nominalism (EN), a novel position in the philosophy of mind that extends and critiques Eliminative Materialism by rejecting not only mental states, but also the ontological assumptions embedded in both physicalist and emergentist accounts of consciousness. Drawing on proof theory, cybernetics, and evolutionary plausibility, EN argues that any sufficiently expressive cognitive system—such as the human brain—must generate internal propositions that are arithmetically undecidable. These undecidable structures function as evolutionarily advantageous analogues to Gödel sentences, inverted into the belief in raw subjective experience (qualia), despite being formally unprovable within the system itself.

EN seeks to avoid the contradictions inherent in Dennettian Third-Person Absolutism—the methodological stance that privileges an external, objective perspective when explaining phenomena accessible only from the first-person perspective. Rather than explaining subjective illusions away in third-person terms, EN proposes that they arise as formal consequences of self-referential modeling, constrained by the expressive limits of second-order logic.

To illustrate this framework, I introduce the concept of the g-Zombie: an agent structurally compelled to assert the existence of qualia—not because such entities exist, but because self-referential symbolic systems generate undecidable propositions that are internally misclassified as direct experiential givens. On this view, consciousness is not an ontological primitive, but an evolutionarily stabilized artifact of meta-cognitive self-modeling under formal constraint.

This is further developed through the thought experiment of Universe Z: a world functionally identical to ours, yet inhabited by agents lacking phenomenal consciousness, who nonetheless behave and report as if they possess it. Universe Z models how selection favors cognitively frugal architectures that simulate introspection and self-report not due to intrinsic phenomenology, but because such outputs are computationally efficient and behaviorally adaptive. Thus, EN reframes the Hard Problem of Consciousness not as a metaphysical puzzle, but as a byproduct of symbolic incompleteness and evolutionary economy, offering a formally grounded, falsifiable alternative to dualist, emergentist, and standard physicalist accounts of mind.

The key move EN makes—and where it departs from both physicalism and computationalism—is that it doesn’t ask, “Why should a physical process feel like anything from the inside?” It asks, “Why must a physical system come to believe it feels something from the inside in order to function?” The answer: because a self-regulating, self-modeling system needs to track and report on its internal states without access to its full causal substrate. It does this by approximating symbolic placeholders—undecidable internal propositions—which it treats as felt experience. In order to say “I am in pain,” the system must first commit to the belief that there is something it is like to be in pain. The report of interiority is not a byproduct—it is the enabling fiction that lets the system tell itself a coherent story across incomplete representations.

Author’s Advice

Please consider the following advice:

  • For skeptics: to get the gist in just a few minutes, go to Section B and read 1.0, 1.1, 1.4, 2.2, 2.3, 2.5, 3.1, 3.3, 3.4, 5.1, 5.2, and 5.3.

  • The essay truly gains momentum at “From Church to Churchland and Back” in Section A.

  • For those unfamiliar with the Philosophy of Mind, start with Section D for foundational concepts before returning to Section A.

  • For a concise and structured analytical presentation, proceed to Section B.

About the Essay

This essay aims to rigorously articulate and develop a more nuanced and sophisticated version of an eliminative position—one whose philosophical foundations have often been misunderstood, oversimplified, or prematurely dismissed. Although eliminativism holds significant potential within the philosophy of mind and cognitive science, it has been largely marginalized, leaving its explanatory power both underappreciated and insufficiently explored. This neglect has not only limited deeper philosophical engagement but has also obscured the possibility that eliminativism might offer a compelling framework for understanding consciousness, cognition, and subjective experience.

In addressing this oversight, the discussion that follows invites readers to approach the arguments with intellectual openness and critical reflection—setting aside, at least temporarily, any preconceptions or discomfort that may arise when confronting ideas that challenge deeply held intuitions, yet may ultimately provide a clearer understanding of the nature of mind than many traditional accounts.

Section A: Musings

An exploratory prelude offering interpretive reflections and philosophical context. Rather than asserting conclusions, it aims to provoke curiosity and set the stage for deeper inquiry.

Section B: Key Arguments

At the analytical core of this essay lies a clear and rigorous exposition of the central thesis. This section systematically develops its foundational claims through careful argumentation, conceptual analysis, and engagement with contemporary theoretical frameworks, establishing it not merely as a provocative stance, but as a coherent and deeply reasoned account of the mind.

Section C: Socratic Coda

A dialogical examination of objections and alternative perspectives. Through a Socratic format, this section engages critique, deepens reflection, and explores the tensions within and around eliminativism.

Concluding the Introduction

At its most demanding, philosophy compels us to confront ideas that challenge our deepest assumptions. This essay invites readers to engage with eliminativism not as a detached intellectual exercise, but as a sustained inquiry into the limits of belief, consciousness, and selfhood. Whether one ultimately affirms or rejects its conclusions, the hope is that the exploration provokes meaningful reflection, deepens understanding, and perhaps reshapes how we think about thinking itself.

Finally, it must be declared that large language model (LLM) technology was employed in two auxiliary roles: first, as a linguistic refinement tool to enhance clarity and coherence; second, outside the body of the work, as a conceptual aid—used to test arguments, consider counterpoints, and sharpen theoretical formulations. Nevertheless, the core ideas, arguments, and philosophical reasoning remain entirely human-authored, reflecting both independent critical thought and engagement with prior scholarship.

Feedback and further discussion welcome: https://x.com/Milan_Rosko

SECTION A – Musings

Chapter One: The Tragedy

Death Is Weird but Consciousness Is Weirder

At its heart, we face a dilemma that captures the paradox of a universe so intricately composed, so profoundly mesmerizing, that the very medium on which its poem is written—matter itself—appears to have absorbed the essence of the verse it bears. And that poem, unmistakably, is you—or more precisely, every version of you that has ever been, or ever will be.

Alas, the Hard Problem of Consciousness:

Quote

Consciousness is the biggest mystery. It may be the largest outstanding obstacle in our quest for a scientific understanding of the universe.

— David John Chalmers, The Conscious Mind: In Search of a Fundamental Theory (1996)

The scientifically inclined might already be skeptical of this notion. Before long, questions such as “Will a machine ever achieve consciousness?” ceased to provoke awe or wonder and evolved into a more rigorous inquiry: “Before contemplating whether consciousness can arise in our artificial systems, shouldn’t we first devise a reliable method for detecting consciousness at all?”

Some might have ventured further still, questioning whether uncertainty about one’s own consciousness could even pose an evolutionary disadvantage, given that successful reproduction tends to favor individuals who take their own subjective existence for granted.

At this juncture, we encounter Eliminative Materialism—or EM, as it will henceforth be referred to—the theory asserting that many, if not most or even all, mental states simply do not exist.

Zero-Thought Experiments

If one were to undertake the uncompromising task of resolving every paradox concerning subjective experience, EM would gladly volunteer—not as the angel that banishes evil, but as the devil that, having done so, replaces it with even darker demons.

Exhibit

You find yourself alone in a desert, surrounded by an endless assortment of tools: gears, straps, motors, wires, switches, and batteries. You have all the time in the world to complete the following task: find a conceivable way to arrange these components so that the resulting structure experiences qualia—the raw, subjective texture of personal existence.
If one answers “it is impossible,” then how is it that you, an assortment of similarly inert components made of proteins, do feel?
Suppose you succeed: how can you be sure?
Finally, after building a physical structure, you transcribe the perfect set of instructions onto a page—a blueprint so precise that, if followed, it would recreate a mind capable of thought and feeling. Does the “paper itself” now contain the experience of consciousness? Does it have to move, or does it have to be made out of gears, or tissue?
Or does the mind emerge only when someone—sufficiently sophisticated—reads and understands the instructions? And if so, does their consciousness then depend on “another” interpreter? A chain of minds reading minds, trapped in a never-ending loop, each one requiring another to bring it to life? The universe, maybe? If we are the universe observing itself, who is observing itself through the universe?

A naïve approach might lead one to reason: Since consciousness is “emergent,” we must simply achieve an arrangement of components in the desert that cannot be reduced to simpler parts—a difficult task, certainly, but perhaps not impossible.

From this perspective, one might hastily assume that EM dismisses consciousness simply because it reduces cognition to its constituent parts, rejecting any perceived unity of selfhood as a mere mirage—a fleeting illusion conjured by a complex but ultimately reducible system.

This, however, is a misunderstanding.

Forget the desert. Forget the arrangement of gears and wires. The problem does not begin with the assembly of components. It begins with the one doing the assembling.

What the Terminator Really Sees

This question warrants a second look—this time through the lens of popular culture. In The Terminator, we often see the world through the machine’s perspective: a red-tinged overlay of cascading data, a synthetic gaze parsing its environment with cold precision. But this raises an unsettling question: Who—or what—actually experiences that view?

Is there an AI inside the AI, or is this merely a fallacy that invites us to project a mind where none exists?

We tend to think of software in terms of symbols—interfaces, schematics, abstractions. But at its foundation, software is nothing more than a structured cascade of logical operations, instantiated through physical circuits that obey the strict syntax of first-order logic. The symbols we impose—command lines, graphical interfaces—are not intrinsic to the system but artifacts of human design, conveniences draped over the raw machinery of computation. Beneath that veneer, the system operates without interpretation, without introspection—because it must. It does not compute for a user, nor even for itself, but as a direct consequence of its physical constraints.

And yet, symbolic reasoning—particularly at the level of second-order logic, or in the case of linguistic structures described in Chomskyan theories—demands a kind of grounding that subsymbolic neural networks are not equipped to provide. Now, this limitation might seem to dissolve at any moment—bear with this thought; it lies at the heart of our inquiry.

For now, let us entertain the premise that a silicon-based autonomous system neither requires nor even permits symbolic mediation in order to function. Our expectations of AI cognition—what it should look like, how it ought to manifest—are already shaped, and perhaps confined, by the symbolic scaffolding of our own thought. Even the most fundamental symbolic constructs, such as command-line prompts, are arbitrary impositions—computation unfolds without symbols, as a direct consequence of its physical constraints.

Unlike first-order logic, second-order logic is not recursively enumerable—less computationally tractable, more fluid, more human. It operates in a space that, for now, remains beyond the reach of machines still bound to the strict determinism of their logic gates.

We have yet to uncover how a machine might autonomously generate or comprehend a symbol from within itself—not as an externally imposed structure, but as an emergent feature of its own being. The act of meaning-making, so effortless for us, remains an enigma: a gulf not merely of complexity, but of ontology.
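What a machine can do, however, is refer to itself purely mechanically. The following toy sketch in Python is illustrative only and not part of the essay’s argument: a quine, built with the same diagonal construction Gödel used to produce self-referential sentences. The program manipulates a quoted copy of its own description as an uninterpreted string—self-reference without meaning-making.

```python
def quine_source() -> str:
    """Return Python source that, when executed, defines a function
    returning this very same string.

    The trick is diagonalization: a template is applied to a quoted
    copy of itself, much as a Goedel sentence substitutes its own code
    number into itself. Only uninterpreted strings are manipulated.
    """
    template = (
        "def quine_source():\n"
        "    template = {!r}\n"
        "    return template.format(template)"
    )
    return template.format(template)


# The output is a fixed point: executing the generated source yields a
# function that reproduces that same source, character for character.
src = quine_source()
namespace = {}
exec(src, namespace)
assert namespace["quine_source"]() == src
```

Nothing in this construction interprets anything; the self-reference is achieved entirely by syntactic substitution, which is precisely why it leaves the question of symbolic grounding untouched.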

To assume that an AI must possess an ‘inner screen’ behind its experience is to risk a category error—akin to imagining that a computer must read its own system scripts simply because we can read its verbose output above our command line.

EM goes one step further. It does not merely deny that an AI has such an ‘inner screen’; it extends that skepticism to us.

Mystic by Force

At first glance, this may seem absurd—so much so that many of history’s sharpest minds have approached the problem from entirely different angles. Among them is Roger Penrose, who contends that synthesizing even a single quale from the mechanical assortment in the desert (mentioned above) is not merely infeasible due to practical constraints, but fundamentally impossible. His argument is not one of complexity but of category: The human brain, he maintains, does not operate on the principles of classical computation.

According to Penrose, our minds tap into something deeper—something beyond mere logical connectives. Consciousness, he proposes, originates at the quantum level within neurons. Perhaps neurons, through some unknown mechanism, have evolved to channel consciousness by tapping into a realm where…

Here, critics often interrupt. Penrose, they argue, is a cautionary tale—a reminder that even the most brilliant minds can veer into speculative excess. His theory is dismissed outright, sometimes with the same casual finality that condemned James Watson’s notorious views on race. But this dismissal is itself misguided. Penrose is not merely wrong; he is consequential.

His theory confronts a fundamental paradox: If consciousness is merely an epiphenomenon—an incidental byproduct of material processes—why does it stubbornly resist elimination within the austere framework of a purely physical universe? If subjective awareness cannot be assembled piece-by-piece from purely mechanical parts, how could it arise in a cosmos built entirely from such parts—one in which nothing else is known to exist?

This persistent difficulty compelled him to look for something beyond—yet still within—the natural order. And given that our intuitive grasp of reality unravels only at a few known frontiers—cosmic horizons, the quantum scale—it is not merely speculative but rational to search there.

At first glance, theories like Orchestrated Objective Reduction appear diametrically opposed to EM. Yet their fundamental divide rests upon a single, crucial premise: the insistence that consciousness must be accounted for—that the vivid immediacy of subjective experience demands explanation not as an incidental byproduct of computation, but as a phenomenon rooted in the real world, something that actually happens.

A Mistake of Category

Before we tackle this question, we need to return to the tragedy at hand that is EM.

In a 2024 paper published in Progress in Biophysics and Molecular Biology, Robert Lawrence Kuhn presents a comprehensive survey of contemporary scientific theories of consciousness. The first category listed under Materialist Theories is EM.

To appreciate why this constitutes a grievance, consider the perspective Richard Rorty articulated in his seminal 1970 essay, In Defense of Eliminative Materialism. Rorty anticipated many of the conceptual shifts that would later define the position. He argued that just as outdated scientific frameworks—such as alchemy or phlogiston theory—were eventually abandoned rather than reduced to more refined physical explanations, so too should folk psychological concepts like “belief,” “desire,” or “pain” be discarded if they fail to align with neuroscientific progress. The history of science, he observed, is filled with conceptual revolutions in which entire ontological categories disappear rather than undergo translation into more fundamental terms.

It was, of course, Daniel Dennett who, in Consciousness Explained (1991), offered the most popular articulation of EM. He argued that subjective experiences are mere “user illusions,” constructed by evolved cognitive processes—useful fictions rather than intrinsic phenomena.

Dennett’s “multiple drafts” model, which dismantles the notion of a central “Cartesian Theater” where consciousness unfolds, aligns with a broader eliminativist trajectory—not merely reducing consciousness to physical processes, but rejecting its existence as traditionally conceived.

While Dennett challenges the notion of qualia by framing them as cognitive constructs rather than intrinsic properties, Paul and Patricia Churchland advanced the idea that our commonsense notions of subjective experience are part of an outdated theory—one that will ultimately be displaced by a mature neuroscience. From this perspective, not only are mental states like “pain” or “redness” destined for elimination, but the very notion of qualitative experience itself is a conceptual error.

Here, we confront a deep conceptual tangle: Is consciousness merely an illusion, or is it nothing at all? This is no trivial distinction—it marks a fundamental ontological divide. If consciousness does not exist, then it simply does not, full stop. But if even a trace of it is real, then some account must be given of its nature—what it is, how it arises, and why it appears as it does.

To unravel this, we must step back several stages further, again.

It is worth reconsidering the label materialism itself, as its historical development helps clarify why it became foundational to EM. Materialism has been closely linked to the natural sciences, particularly since the 19th century, when empirical explanations progressively supplanted supernatural or dualist interpretations of reality. This connection parallels how atheism is often (imprecisely) conflated with materialism—not because secular thought inherently requires it, but because both reject ontological commitments to immaterial entities, especially a certain prominent divine spirit.

Curiously, some would argue that eliminativism itself functions more as a convenience than a substantive philosophical position:

Quote

“In principle, anyone denying the existence of some type of thing is an eliminativist with regard to that type of thing.”

— William Ramsey, Stanford Encyclopedia of Philosophy (2003)

This leaves us with a possibly useless label imposed upon a possibly misapplied one. “Eliminative” could, in principle, refer to the rejection of virtually anything, making it an empty marker unless one specifies what is being denied. “Materialism,” meanwhile, is a label that proponents of eliminative materialism do not (or should not) fully endorse, as their position often challenges the very framework that materialism traditionally assumes: the existence of symbols and universals, even if only one.

Questions that Matter

For further demonstration, let us turn to Patricia Churchland’s analogy:

Exhibit

Although many philosophers used to dismiss the relevance of neuroscience on grounds that what mattered was “the software, not the hardware”, increasingly philosophers have come to recognize that understanding how the brain works is essential to understanding the mind.

— Patricia Churchland, Website of UC San Diego (2013)

If one insists that the software—the computational patterns and processes—alone constitutes the essence of the AI, then one leans toward idealism, suggesting that the “helpful assistant” might exist in a realm hierarchically above physical instantiation, beyond space and time. Conversely, if one asserts that only the hardware—the physical substrate—truly exists, then one aligns with materialism or physicalism, reducing the AI to mere excitations of electrical charges within the integrated circuits of the GPU.

A materialist might then reason as follows:

Exhibit

On the condition that the relationship between mind and brain is analogous to that of hardware and software, then: All solvable software problems can be corrected by modifying the hardware, but not all solvable hardware problems can be corrected by modifying the software.

That is to say: Lovers without love are common, but love without lovers is impossible.

In consequence: The brain precedes and governs mind.

It stands to reason, then, that: All solvable hardware problems can be corrected by modifying matter, but not all solvable matter problems can be corrected by modifying hardware.

That is to say: Matter without hardware is common, but hardware without matter is impossible.

In consequence: Matter precedes and governs hardware—and thus, ultimately, mind.

Through this lens, materialism emerges not merely as a metaphysical stance but as an inductive principle—one that treats matter as the fundamental substrate from which all emergent phenomena, including mind, must arise.

Again, there is a fundamental tension between eliminativism and materialism. The latter assumes a stable ontological foundation—matter—which ultimately relies on intuition, since matter is, and always will be, a symbolic construct. In contrast, eliminativism, in its most rigorous form, exemplifies a kind of radical skepticism: it negates rather than affirms, dismantling assumptions without offering definitive confirmation.

This is not a simple case of revisiting Cartesian doubt; rather, it is a question of whether categorical expressions within the philosophy of mind are appropriately applied:

If matter is merely a symbol within our conceptual models, does the claim that “matter precedes and governs mind” hold any meaning? Or is it simply a recursive assertion—one symbolic system grounding another, without ever reaching something truly fundamental?

Misunderstood by a Degree

Quote

“Are zombies possible? They’re not just possible, they’re actual. We’re all zombies. Nobody is conscious – not in the systematically mysterious way that supports such doctrines as epiphenomenalism.”

— Daniel Dennett, Consciousness Explained (1991)

The provocative claim that “we’re all zombies” suggests a radical dismissal of consciousness as anything more than whatever it actually is. Yet, as his final remark indicates, Dennett never fully embraced the most extreme implications of his own theories.

While he consistently rejected the Cartesian notion of consciousness as a “theater” or an irreducible mystery, he always left open a narrow but persistent gap—one that allowed for the possibility of a future “holistic” (in the sense of Quinean epistemological holism) explanation of consciousness.

The same applies to many other proponents of EM. Despite their efforts to demystify the mind, figures such as Keith Frankish, Jay Garfield, Michael Graziano, and even Dennett himself arguably undermined their own ambitions by endorsing the term illusionism.

Illusionism, as a framework, failed to gain mainstream traction—not least because it became entangled with a separate discourse initiated by Saul Smilansky, who employed the same term in a different philosophical context, but perhaps also because it reflects EM’s greatest weakness: its inconsequence. By framing consciousness as an illusion, EM’s proponents may have inadvertently reinforced the very confusion they sought to dispel.

To call something an illusion implicitly presupposes the existence of someone being deceived; in this sense, the argument appears to echo Buddhist teachings more than contemporary science.

Exhibit

Pondering Person: “If consciousness lacks any grounding—if it does not truly exist—then how am I able to taste ice cream?”
Illusionism: “You must be experiencing an illusion.”

At times, the trajectory of Eliminative Materialism—despite its intellectual rigor and originality—has been overshadowed by the misinterpretations imposed upon it, particularly from philosophers who oppose it, as well as theist critics of the so-called “New Atheism.” Too often, EM has been reduced to a caricature of secular reductionism, a convenient strawman rather than an earnest philosophical stance.

As a result, eliminativists have found themselves entangled in the margins of various cultural debates. From one side, proponents of “Intelligent Design,” eager to claim consciousness as evidence of divine intervention, and from another, Neoplatonists and Postmodernists, each keen on reducing EM’s radical propositions to absurdity, have contributed to its persistent mischaracterization.

Conceptual Drift

A third source of confusion arises from EM proponents themselves; the position has been subject to continual conceptual drift. What originated as a focused argument within the philosophy of mind has gradually expanded—wavering between broader ontological claims, illusionist theories, and even arguments advocating the elimination of supposedly outdated scientific disciplines. This shift reflects an increasing preoccupation with institutional and sociological concerns rather than a sustained inquiry into fundamental questions about the nature of mind and consciousness.

Quote

“Eliminative Materialism is the thesis that our common sense conception of psychological phenomena constitutes a radically false theory, a theory so fundamentally defective that both the principles and ontology of that theory will eventually be displaced, rather than smoothly reduced, by completed neuroscience.”

— Paul Churchland, Eliminative Materialism and the Propositional Attitudes (1981)

Much like Hegelian theorists speculating about the inevitable evolution of knowledge, Eliminativists have made sweeping predictions about the future of scientific fields, and about their “completeness”. They have argued that disciplines such as psychology and cognitive science will ultimately be “eliminated” and absorbed into a more rigorous, unified scientific framework. However, this prediction is flawed on multiple levels. First, it contradicts the observable trajectory of modern science, which trends toward increasing specialization and the proliferation of sub-disciplines rather than their consolidation. More importantly, such speculative claims have little direct bearing on the actual workings of organized human structures.

Even if psychology were nothing more than ‘cocktail party talk’—Don Draper’s famous critique notwithstanding—it would still possess undeniable utility: the simple act of “listening to someone’s problems.”

Purity should perhaps never have come at the expense of pragmatic, socially beneficial concepts—not because such concepts ought, in a moral sense, to be preserved (indeed, none should be immune from scrutiny), but precisely because none deserve unquestioned exemption. If proponents of EM are so eager—almost gleeful—to dismantle “folk psychology,” then why does the concept of “love” remain curiously untouched? After all, many eliminative materialists, as far as one can observe, continue to engage in romantic relationships.

Exhibit

EM spouse: “Good night, honey. I would say ‘I love you,’ but of course, love does not exist in our purely material universe.”

For all its radical claims, EM has often struggled to reconcile its theoretical ambitions with the lived realities of human experience. It is akin to looking at scientific progress and concluding that, in the future, children will no longer believe in Santa Claus—an assumption that disregards the persistence of socially embedded narratives, regardless of their ontological status.

Here, a crucial distinction emerges:

Exhibit

Eliminativism is not merely the claim that horoscopes don’t work, but rather that there is no one reading them. Likewise, it does not follow that horoscopes will be abandoned because of Eliminativism, nor that science will uncover anything.

If one takes its conclusions to the extreme, then in attempting to eliminate the mental, the intellectual foundations upon which EM stands—minds engaging in rational discourse—start to dissolve. But instead of making the final jump into the deep, EM begins to relativize, clinging to the tiny island left beneath its feet: the island of intentionality, an overseas territory of science.

From Church to Churchland and Back

Now let us begin to talk seriously.

One might initially suggest that the conceptual insights proposed by EM would be more suitably framed within a different monistic stance, rather than strictly materialism.

A particularly brilliant medieval heretic of the Order of Friars Minor comes to mind, believed to have been born somewhere in Surrey, England.

This figure, of course, is William of Ockham, best known for his principle of lex parsimoniae—commonly referred to as Occam’s razor—and widely regarded as a foundational figure in modern epistemology. Ockham famously advanced the radical idea that universals, essences, and accidents are mere abstractions constructed by the human mind, lacking any independent existence outside of thought: nominalism.

Without any hint of apology, it is especially striking—and perhaps even ironic—that some of the most secular thinkers of our era, including evolutionary biologists and prominent representatives of the so-called Four Horsemen of Atheism, find themselves philosophically aligned with this medieval Christian theologian. Yet this alignment is far from coincidental; rather, it underscores a profound structural parallel between Ockham’s nominalism and the philosophical foundations of eliminative materialism.

While Ockham approached these issues through the theological and scholastic traditions of his time, EM engages the same themes from a modern scientific standpoint, informed by biology, cognitive evolution, and something problematic akin to a “phenomenological common sense.”

A frequent feature of eliminative materialist arguments is what might be called Third-Person Absolutism—a methodological stance privileging an “objective,” external (third-person) perspective over subjective experience in explaining consciousness and cognition. To paraphrase Daniel Dennett’s position: “Since our concepts of mind are themselves products of the mind, understanding them requires adopting an objective view from outside.” Yet this external viewpoint itself is necessarily produced by the same mind it seeks to explain—thus remaining firmly inside, sorry. This circularity reveals a fundamental tension within Dennett’s position. Unsurprisingly, then, Dennett occasionally falls into conceptual traps as he attempts—against his better judgment—to set aside philosophical rigor while navigating the intersection of science and metaphysics.

Further complicating the issue, Dennett’s Intentional Stance grapples with a similar ontological dilemma. He suggests that purpose-driven, predictive language can successfully describe and anticipate the behaviors of complex yet fundamentally purposeless biological systems—such as the human brain—without committing us to the ontological reality of mental states themselves. In this sense, Dennett echoes Ockham: just as Ockham dismissed universals as unnecessary metaphysical baggage, Dennett treats mental categories merely as pragmatic heuristics rather than fundamental realities. Yet, unlike Ockham—who ultimately anchored his nominalism in theological certainty through the Christian God—Dennett’s framework does not fully dissolve the elusive “hard problem” of consciousness.

Nevertheless, suppose we could replace Ockham’s theological “Unterpfand” (his divine pledge of certainty) with a secular counterpart. Such a substitution would continue a historical trajectory in which yesterday’s heresy gradually matures into today’s mainstream science.

One might label this hypothetical stance Eliminative Nominalism (EN)—a philosophical position discarding not only the conceptual constructs of traditional “folk psychology,” but also the residual metaphysical baggage embedded within Eliminative Materialism itself.

Yet EN would have to carry the burden of EM: it must confront and resolve EM’s most stubborn difficulty—how to sustain meaning itself. As suggested earlier, symbolic grounding remains elusive; physical computation alone neither approximates nor replicates the expressive capacities of second-order logic precisely because nature has, thus far, demonstrated a profound reluctance to accommodate genuine symbolic grounding.

Yet, I argue, this reluctance need not persist indefinitely.

Compensating Errors

This conceptual zigzag of EM naturally invites counterarguments of all kinds. One prominent critic, Lynne Rudder Baker, offers several rebuttals in her works, notably Naturalism and the First-Person Perspective and Cognitive Suicide. A central theme of her critique is that language and communication inherently presuppose beliefs and propositional attitudes.

In her arguments, she repeatedly commits a subtle yet significant error in her use of reductio ad absurdum (proof by contradiction): to employ this method effectively as a counterargument, one must fully engage with, and provisionally accept, the premises and implications of the opposing view (in this case, EM) before demonstrating that they necessarily lead to a contradiction.

Ironically, however, she may have inadvertently revealed a deeper insight. If we apply the principle of Steelmanning—that is, reconstructing her argument in its strongest possible form—we might uncover not just an argument, but perhaps the most compelling argument not against, but paradoxically in support of the very position she aims to refute.

To see why, we must first “strengthen” Baker’s argument:

Exhibit

  • Premise 1: Eliminative Materialism is true.

  • Premise 2: Intentionality can persist even if semantic meaning does not.

  • Conclusion: If EM is valid, then our language is meaningless but intentional. It is meaningless since meaning presupposes the existence of mental states, but intentional since intentionality is not reliant on mental states.

  • Contradiction: This distinction is unsubstantiated. If meaning is eliminated, intentionality itself collapses. Without intentionality, EM advocates undermine their own ability to argue coherently for their position—or for any philosophical stance that depends on mental states.

  • Modus Tollens: Consequently, Eliminativism is inherently self-defeating.

Or even simpler:

Exhibit

  • A. Lisa is a P-Zombie.

  • B. Lisa holds the position that she is a P-Zombie.

  • C. A P-Zombie cannot hold any positions.

  • D. Thus, Lisa cannot hold the position that she is a P-Zombie.

At first, it seems like she has a point: If proponents of Eliminativism are forced to deny the meaningfulness or reliability of their own thoughts, they find themselves in an untenable position, whether they call it commitment, meaning, or even intentionality. Therefore, EM risks becoming inherently self-defeating, collapsing under the weight of its own skepticism toward intentionality.

Zombie in a Vat

Nevertheless, EM proponents might attempt one final defense: Could statements or utterances still possess truth value independently of the speaker’s understanding or intentionality?

In other words, if a statement is objectively true or false even when the speaker is entirely ignorant of its meaning, could EM still be a valid position?

Exhibit

Imagine a person reciting a statement phonetically in a language unknown to them. To an external observer familiar with the language, the statement may clearly be true or false. Yet the speaker, lacking any comprehension of the language, cannot possess any intention to convey its meaning.

This scenario prompts a critical question: Does the truth value of the utterance exist independently of the speaker’s intentionality?

Answer: If truth values were indeed independent of intention, we could hypothetically neutralize falsehoods simply by inventing a language in which the same phonetic sequence incidentally expresses a true proposition.

This thought experiment resonates with Hilary Putnam’s semantic externalism—the idea that meaning, truth conditions, and intentionality are not confined within the mind but instead emerge from an interplay between internal cognition and external causal factors. Putnam’s insight dissolves the notion of meaning as an isolated mental construct, anchoring it instead in the relational dynamics between thinker and world.

Where Putnam’s inquiry extends outward, tracing the external determinants of meaning, EM turns inward, dismantling the self. Yet when we revisit Putnam’s original thought experiment, Twin Earth, we find ourselves entangled in familiar problems of self-reference, intentionality, and the external conditions that shape meaning.

Consider this ancient paradox of self-reference:

Quote

“Sarvam mithyā bravīmi”
“Everything I am saying is false.”
— Bhartṛhari (5th century CE)

Self-referential paradoxes—known historically as insolubilia—have resurfaced across cultures and centuries, wherever human inquiry bends back upon itself. At first glance, they may seem like mere linguistic curiosities. Yet they mark the outermost boundaries of thought, where reason confronts its own foundations. Indeed, every philosophical endeavor eventually collides with the Münchhausen Trilemma—the inescapable triad of circular reasoning, infinite regress, or axiomatic assertion.

And yet, there was a moment in intellectual history when this collision with paradox became not a terminus, but a point of departure.

It is a profound irony that the very concept of universal computation emerged not from certainty, but from the recognition of its limits:

Leibniz dreamed of a universal symbolic calculus; Hilbert sought to realize it by laying its formal groundwork. But it was Gödel’s incompleteness theorems, together with Turing’s discovery of undecidability, that finally revealed the true limits of formal systems. Only by confronting those boundaries could von Neumann articulate, with precision, what a universal machine truly has to be: inexpressive.

This leaves us with two types of systems:

Exhibit

First-order Systems
Verbosity: Propositional
Exemplified by: Connectives
Utilized as: Computation
Utilized from: Logic Gates
Complete: If Sound
Metaphor: The CPU
Nature: Tangible, in the form of causality
Paradox: Primum Movens

Higher-order Systems
Verbosity: Expressive
Exemplified by: Properties
Utilized as: Function approximation
Utilized from: Neural Networks
Complete: Not if Consistent
Metaphor: The Mind
Nature: Imaginary, as symbols lack grounding
Paradox: Circulus in probando

The Hard Problem of consciousness might not be merely a mind-body problem, but a problem of how a biological system manages to transcend first-order logic.

Chapter Two: The Rift

Incompleteness Ahead

At this juncture, caution is imperative. Invoking Gödel’s theorems has, in some circles, become as precarious as conjuring quantum mysticism. To proceed responsibly, we must first delineate the precise scope of our claims:

One: We will employ Gödel’s incompleteness theorems in a broad yet arithmetically justified sense.
Two: These theorems will not serve as vehicles for direct proof, nor will they be wielded in support of sweeping assertions absent rigorous justification. Our inquiry will remain firmly anchored in the provable consequences of incompleteness—namely, that within any sufficiently expressive and consistent formal system (at least as strong as Peano arithmetic), there exist statements that are true but formally unprovable within the system itself. Moreover, any proof of the system’s consistency must originate from outside it.

What justifies invoking Gödel here is that incompleteness establishes a lower bound: its validity emerges only at or beyond a certain threshold of logical expressivity, meaning the ability of a logical system or formal language to represent and distinguish different concepts, properties, or relationships within a given domain. Thus, we are justified in considering its implications expansively, much as one generalizes from second-order logic to third-order logic and beyond.

Let’s play around with this idea. Logical expressivity traditionally concerns formal languages used in logical systems, defining their ability to represent and distinguish different properties, relations, and truths. These formal languages have strict syntactic and semantic rules, allowing for precise reasoning. However, if we consider natural language within a Chomskyan framework, which includes a formal syntactic structure but is embedded in a broader cognitive and communicative system, then it must also possess expressivity in a more expansive sense.

Exhibit

We can encode mathematical truths into natural language, yet we cannot fully encode human concepts—such as irony, ambiguity, or emotional nuance—into formal language. Therefore: Natural language is at least as expressive as formal language.

If expressivity refers to the capacity to represent and distinguish concepts, then natural language appears at least as expressive as formal systems—arguably even more so. After all, natural language not only subsumes the expressive capabilities of formal logic and mathematics but extends beyond them, incorporating pragmatic, cognitive, emotional, and contextual dimensions that formal systems struggle to capture. If formal languages are constrained by rigid syntactic rules and explicit axioms, natural language exhibits a different kind of limitation—one that is not merely syntactic but epistemic, a form of semantic incompleteness akin to the incompleteness Gödel identified in formal mathematical systems.

Once a formal system reaches a certain threshold of expressivity, incompleteness becomes inevitable: some truths will always remain beyond formal proof. But are we justified in extending this principle beyond strictly mathematical domains?

Quote

“yields falsehood when preceded by its quotation” yields falsehood when preceded by its quotation.
— Known as Quine’s paradox
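Quine’s paradox can be built mechanically, which is part of its force. The sketch below (a minimal illustration, not anything from Quine’s own writing) shows how quotation plus concatenation yields a sentence that refers to itself without ever using an indexical like “this sentence”:

```python
# Build Quine's paradoxical sentence: a predicate preceded by its own quotation.
# Self-reference falls out of quotation plus concatenation alone -- the same
# mechanism Goedel exploited arithmetically.
predicate = "yields falsehood when preceded by its quotation"
sentence = f'"{predicate}" {predicate}'
print(sentence)
# The sentence asserts the falsehood of the string formed by preceding the
# quoted predicate with that very predicate -- i.e., the falsehood of itself.
```

The construction matters for what follows: no special “self” token is required for self-reference, only the capacity to quote and concatenate, a capacity any sufficiently expressive system possesses.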

Consider expressive systems that are not bound strictly by formal syntactic rules, rigorous axioms, or well-defined inference procedures. Could a similar form of incompleteness extend beyond mathematics, shaping broader domains of inquiry—ethics, metaphysics, cosmology?

Is it mere coincidence that we find ourselves ensnared in the same foundational dilemmas time and again? The ultimate justification of ethics. The infinite regress of a “creator of the creator.” The question of what, if anything, preceded the Big Bang.

With these conditions established and a concrete example in hand, we can now turn to a more speculative frontier: the implications of incompleteness not just for formal systems, but for the expressivity of the human mind itself.

Is the Mind Expressive?

Before exploring the structural parallels and divergences between minds and formal systems, consider the attempt to formalize a well-known game:

Exhibit

Imagine opening a jigsaw puzzle with 500 pieces. We could observe...
Primitive symbols: Discrete, inert tokens (the individual puzzle pieces).
Inference from axioms: Rules determining how pieces interlock into valid configurations, just as axioms and inference rules determine the natural numbers.
Derived Truths: Once all pieces are correctly assembled, nothing remains in the box.

Valid formal systems possess an almost transcendental quality: we do not expect to find a 501st piece in a puzzle designed for 500, for the system is complete within its defined boundaries. Likewise, a sufficiently robust formal system seems to capture and constrain the unruly beast of Reality, taming its complexities into a coherent, self-consistent framework. It is as if, through the rigor of logic and mathematics, we have discovered a means to confine the infinite—to impose order upon the raw chaos of existence.

At first glance, this offers a seductive promise: the possibility of circumventing the contextual ambiguities that plague natural language. While the meaning of words can be twisted, redefined, or rendered uncertain by shifting contexts, the validity of arithmetic remains unyielding. Its truths are not dependent on interpretation, nor do they bend to the contingencies of symbolic convention. Through axiomatization, arithmetic truths stand apart—immutable, as though nature itself were speaking them, leaving us with the more modest task of formalizing what was always already inscribed in reality.

Not merely a game of jigsaw puzzles, but a principle extending across all scales—from atoms to galaxies, from the infinitesimal to the cosmic. Anything that can be measured can, in principle, be arithmetized. By introducing additional variables, refining constraints, or extending the rules, formal systems can approximate reality with ever-increasing precision. The dream, then, is one of convergence: that through this process, the structure of thought and the structure of reality might one day align perfectly, leaving nothing unaccounted for—no remainder, no excess, no missing piece.

If this were true, then to formalize the mind would be to resolve the Hard Problem at the core of Eliminative Materialism. But here, the illusion of finality collapses. The attempt to formalize the mind does not yield a singular, definitive model but instead reveals a fractal complexity—an inexhaustible hierarchy of descriptions, each offering its own lens and level of granularity. The mind can be framed as mental states mapped to neural activity, as creditworthiness inferred from behavioral data, or as atomic-scale configurations forming the physical substrate of the brain. Each perspective is internally coherent, each yields a formal representation—yet none exhausts the totality of what the mind is.

Formal validity alone does not guarantee insight.

Consider Euclidean geometry. Within its framework, π is rigorously defined as the ratio of a circle’s circumference to its diameter—an exact, systematic, and logically impeccable definition. And yet, this formal precision tells us nothing of the hidden structure, if any, within π’s infinite, non-repeating decimal expansion. The number remains, in some sense, opaque—validity does not always illuminate.

And yet, formalism need not be sterile. When structured correctly, it can reveal not just consistency but explanatory power. Minkowski spacetime, for instance, does not merely encode the mathematical structure of special relativity—it clarifies it. In the right configuration, a formal system does not merely approximate reality. It predicts it.

99 Systems but a Complete Ain’t One

Let us imagine a vast, combinatorially exhaustive hierarchy—a Library of Babel for mathematics, an all-encompassing registry of sets, relations, structures, and symbols. Within this boundless construct, every conceivable mathematical object, every possible formal system, every arrangement of axioms and inference rules would be cataloged. Such a place already has a name: the Von Neumann universe.

Exhibit

In the Von Neumann universe V we find the cumulative hierarchy encompassing all mathematically definable objects, including every conceivable formalization capable of representing or describing aspects of our world. Within this vast hierarchy, certain subsets correspond precisely to particular set-theoretic constructions of functions or formal systems. Among these subsets, we identify a special class distinguished by their representational utility—namely, those functions that provide genuine predictive insight into the behavior or properties of any sufficiently complex arithmetic-expressive mind. We denote this special class by M, the class of mind functions.
Under this construction, the following assertion holds unequivocally: For every element of M—or for that matter, of V—no hereditary subset can contain a valid mind function capable of fully and demonstrably encapsulating all truths pertaining to itself.

Precisely because formal systems are inherently incomplete—an inevitability demonstrated by Gödel’s incompleteness theorems—any mind that can be meaningfully modeled as a formal system must, by extension, be incomplete as well. If a formalization of the mind seeks to approximate the behavior of its real-world counterpart, then it, too, must inherit the same fundamental limitation.

The brain, in ordering itself, must rely on a system that accommodates its own incompleteness. Natural language, by virtue of its logical expressivity and open-ended generativity, serves this role. Just as formal systems contend with incompleteness through symbolic extensions, so too does thought—relying on language as an adaptive mechanism that allows for self-reference, abstraction, and symbolic inference.

Moreover, regardless of whether the mind itself can be fully formalized, the chain of thought undeniably can: as language. Even if one were to argue that cognition transcends strict formalization, the sequences of reasoning, inference, and symbolic manipulation that constitute thought necessarily adhere to formalizable structures.

One might think of the mind as an airplane: while it may depart from the runway of formal systems, exploring intuitive, non-formal, or seemingly unstructured cognition, it must ultimately return to the structured runway of chain of thought in order to be intelligible, communicable, and internally coherent. This provides a failsafe argument—even if the ontology of the mind resists full formalization, its navigable course remains constrained by formal structures, ensuring that thought never fully escapes the formal systems that shape it.

This insight, while perhaps not entirely surprising, aligns with an intuitive understanding: there must exist truths about the mind that remain inherently inaccessible to the mind itself.

But what does it mean for a truth to be inaccessible? Consider the following examples:

Exhibit

  1. A dreamer who dreams he is not dreaming.

  2. A P-Zombie who insists it possesses consciousness.

Plausible Undeniability

Could it be that our intuition about qualia—those elusive, ineffable aspects of subjective experience that seem to resist formalization—arises from the brain’s own ability to generate arithmetic statements that it cannot disprove?

In other words, might evolution, in shaping the brain as a regulator, have harnessed the self-referential properties of formal systems—without invoking epiphenomenal explanations?

Could the very conviction of having consciousness be, in fact, an arithmetic property of the ultimate “good regulator”—in the truest sense of Conant and Ashby?

This is not implausible. For the brain to function successfully as an organ, it must regulate—must construct an internal model of its environment—but crucially, it must also develop a model of itself. This entails not only being sufficiently complex to achieve arithmetic expressiveness (the capacity to form and manipulate symbols) but also, paradoxically, being more effective because it is incomplete.

Thus, it should come as no surprise that the brain resists—and indeed, may be fundamentally incapable of—fully accepting EM. If consciousness is to function effectively as a “good regulator,” it must be barred from recognizing itself as a mere “lifeless regulator.”

The reason is obvious:

If it were lifeless, it could never fear losing its life.

Exhibit

If the brain has evolved to survive and to feel, then it will feel in order to survive, and survive in order to feel. In doing so, it will assume its own existence as a necessity of natural selection—because existence is the ultimate antithesis to death. It is defined precisely by not being dead.

Evolution may have exploited fundamental incompleteness to construct its regulatory organ—what we call “brain”—which, in turn, produces a “richly expressive” internal conviction—what we identify as “mind.” But this conviction is not necessarily a reflection of ontological truth; rather, it is a predisposition toward committing to certain useful fictions. And among these fictions, none is more persistent than the one we call qualia.

This suggests an unsettling, unprovable truth: the brain does not synthesize qualia in any objective sense but merely commits to the belief in their existence as a regulatory necessity.

Yet from within the brain, this unprovable truth—when framed within a formal system—manifests not as an abstract limitation, but as an unshakable conviction within our psyche.

We have come full circle.

Exhibit

If a p-zombie could fully comprehend the neural simplicity underlying the quale it perceives as “red,” it would effectively refute incompleteness. However, since incompleteness is an inevitable consequence of arithmetic coherence, the p-zombie—like us—is compelled to experience the color red as vividly as any conscious being. Its assertion, “But it feels so real,” is thus a mathematical necessity, not a proof.

Or if it comes to consciousness:

Exhibit

I challenge the reader to fully conceive of themselves as a regulator. The attempt will prove impossible, and, on top of that, it is rarely, if ever, instantiated.

Formally as a Modus Tollens:

Exhibit

(1) All sufficiently expressive self-referential regulatory systems (such as brains) are necessarily incomplete (Gödel).
(2) If subjective experience (“qualia”) were merely a regulatory illusion and explicitly disprovable by the system itself, then the regulatory system would need to be complete enough to perform this internal disproof.
(3) From (1), no such completeness can exist.
(4) Thus, subjective experience (“qualia”) must remain internally unprovable yet compellingly real within the regulatory system.
Therefore, the persistence of qualia as subjectively undeniable yet formally unprovable aspects of cognition is logically inevitable.
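The same argument in schematic form, with letters introduced here as shorthand (E = the system is sufficiently expressive and self-referential, C = the system is complete enough to carry out the internal disproof, D = qualia are internally disprovable):

```latex
\begin{align*}
(1)\;& E \rightarrow \lnot C && \text{(G\"odel: sufficiently expressive systems are incomplete)}\\
(2)\;& D \rightarrow C && \text{(an internal disproof of qualia would require completeness)}\\
(3)\;& E && \text{(brains are sufficiently expressive regulators)}\\
\therefore\;& \lnot C \text{ by (1), (3)}; \qquad \lnot D \text{ by (2), modus tollens.}
\end{align*}
```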

To summarize: The argument above provides a plausible and testable evolutionary explanation for the phenomenon of subjective experience, positioning it as a natural consequence of functional adaptation rather than as an inexplicable anomaly or epiphenomenon.

Don’t Hate The Science, Hate The Universe

As for Baker’s argument, it has become the most common rebuttal to EM. Her claim rests on the idea that EM cannot be genuinely believed because it undermines the very foundation of belief itself. However, the issue is actually the reverse: EM cannot be believed if it is true.

Sadly, if the mind can be understood as a formal system, then it is forever bound by the constraints of such systems. Since science itself will always function as an extension of the human mind, a structured, rule-governed method of inquiry, it cannot complete the mind:

Exhibit

Gödel’s second incompleteness theorem allows a formal system’s consistency to be validated from an external vantage point, but never from within the system or from any of its extensions.
If EN holds true, the Hard Problem will never be solved by science.

The scientific method, for all its power, operates within the limits of what is observable, measurable, and formally expressible. However, it fundamentally lacks the ability to make us understand—even if it can help us to write it down.

We’ve already encountered this issue with colors. It is well established that committing to colors ontologically is considered “unscientific,” as demonstrated by the brain’s interpretation of simultaneous S- and L-cone firing as magenta, a color with no corresponding wavelength in the physical spectrum.

This realization offers no deeper ontological intuition; it merely reveals that color, at least as we perceive it, is not an intrinsic property of the external world.

The symbol is a symbol

The brain does this by creating a symbol, which refers to a symbol, which refers to a symbol—an infinite regress with no grounding, no bottom. But evolution doesn’t need grounding; it needs action. So it skewed this looping process toward stability—toward a fixed point. That fixed point is the assertion: “I exist.” Not because the system proves it, but because the loop collapses into a self-reinforcing structure that feels true. This is not the discovery of a self—it’s the compression artifact of a system trying to model itself through unprovable means. The result is a symbol that mistakes itself for a subject.

Chapter Three: The Mirage

Élan vital

Finally, we can attack the prevailing view of consciousness:

A paraphysical epiphenomenal continuum: lacking a fixed substrate or singular identity, yet emergent and, in principle, reproducible. It is presumed to arise from physical mechanisms that remain largely unknown, shaped by organic constraints such as blood alcohol levels and strokes, yet remarkably resilient, persisting despite the loss of countless neurons or even entire brain regions over short or long spans of time. In some aspects, it presents itself as discrete; in others, fluid. It is at once repetitive and continuous, structured yet elusive.

Consider the diverse entities proposed by various worldviews—atoms, spirits, quantum fields, divine essences, and social structures. Despite their conflicting ontologies, “emergence” and “emergent properties” seem to integrate seamlessly into these disparate frameworks. This raises a fundamental question: Why does emergence remain relatively uncontroversial despite such different views, when it is in fact an ontological stance?

The answer lies in its universality. Emergence arises when we cannot cognitively grasp a system in its entirety, something shared widely among humans. It is less a statement about the system itself and more a reflection of our cognitive limitations. This flexibility allows emergence to act as a conceptual chameleon, adapting to various intellectual contexts:

  • Cognitive Comfort: Emergence offers a seemingly sophisticated way to acknowledge complexity without requiring a full understanding. It provides a sense of mastery over topics that remain fundamentally elusive.

  • Interdisciplinary Appeal: Its indeterminacy and academic anchoring make it applicable across fields, from physics and biology to sociology and philosophy. This broad utility gives it a veneer of universality.

  • Anti-Reductionist Sentiment: Emergence resonates with those who resist reductionist explanations, aligning with the idea that some phenomena transcend their constituent parts.

  • Explanatory Placeholder: Emergence often serves as a temporary stand-in for incomplete knowledge. It allows us to acknowledge phenomena that defy mechanistic explanation, bridging gaps in understanding while research progresses.

These qualities, however, raise critical questions: If emergence is a property of something, why is it so dependent on our explanatory and predictive capabilities? And if it is an explanation, why does it describe the system’s behavior in terms of our cognitive limitations rather than the system itself?

If it’s not a Feature, It’s a Bug

Computers provide a compelling lens through which to examine the concept of emergent properties. A computer’s behavior can appear magical: the fact that it can do what it does seems to “emerge” from the alignment of transistors, circuits, and software.

Yet this perception dissolves under expert scrutiny, for computers possess not a single emergent quality.

Exhibit

print(0.1 + 0.2) yields 0.30000000000000004, exactly as expected.
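The exhibit’s point can be made concrete in a minimal sketch: under IEEE 754 double precision, 0.1 and 0.2 are stored as the nearest representable binary fractions, so the “surprising” sum is fully determined for anyone who inspects the representation. Nothing emerges; an expectation is merely corrected.

```python
# Nothing "emerges" here: 0.1 and 0.2 are stored as the nearest representable
# binary fractions under IEEE 754 double precision, and both roundings
# overshoot slightly, so their sum lands just above 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
# Inspecting the stored value of 0.1 dissolves the surprise entirely:
print(f"{0.1:.20f}")     # 0.10000000000000000555
```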

Recent advancements have revealed abilities in AI systems that were neither explicitly designed nor trained for: behaviors that appear novel and unpredictable. These “emergent abilities” have sparked widespread interest, but the question remains: are they truly emergent?

The notion of emergent abilities in AI came to prominence with a 2022 paper by researchers at Google Brain, DeepMind, and Stanford titled Emergent Abilities of Large Language Models. The paper argued that AI systems exhibit unexpected capabilities as their scale increases. However, a subsequent paper from Stanford, titled Are Emergent Abilities of Large Language Models a Mirage?, challenged this idea. The critique claimed that these abilities emerge along a smooth continuum, making them predictable rather than genuinely emergent.

This rebuttal, while compelling, seems to miss a deeper issue. Smoothness and predictability do not necessarily preclude emergence; the problem lies in the concept of emergence itself.

Exhibit

If my pet frog began to sing, it would undoubtedly surprise me, regardless of whether the singing developed gradually or spontaneously. The surprise arises from my expectations, not from the frog’s true nature. Debates over whether a system has surprising properties are, at their core, debates over who understands the system better.

Similarly, the “emergent” abilities of AI systems reflect the gap between our predictions and the system’s outputs—and nothing else.

In this sense, emergence is the ultimate suitcase word: a convenient label that bundles together disparate phenomena, masking the gaps in our understanding rather than resolving them. The ultimate god of the gaps argument. Reductionism in Reverse.

This leaves us with the following framework of Monoidism:

Exhibit

First-order logic
Behavior: Propositional
Exemplified by: Connectives
Utilized as: Computation from Logic Gates
Complete: If Sound
Metaphor: The Brain
Nature: Tangible, in the form of causality
Paradox: Unmoved mover
Ontological equivalent: Physicalism
Ontological opposite: Reductionism

Higher-order logic
Behavior: Expressive
Exemplified by: Properties and sets
Utilized as: Function approximation from Neural Nets
Complete: Not if Consistent
Metaphor: The Mind
Nature: Imaginary, as symbols lack grounding
Paradox: Insolubilia
Ontological equivalent: Emergentism
Ontological opposite: Nominalism

g-Zombie

If Materialism posits the existence of p-Zombies, then what are p-Zombies from the perspective of Eliminative Nominalism?

They become g-Zombies—named after the Gödel numbering function (g).
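Since the g-Zombie takes its name from Gödel numbering, it may help to see a toy version of the function g itself (a sketch; the symbol codes below are arbitrary illustrative choices, not a standard assignment). A formula, read as a sequence of symbol codes, is mapped to a single natural number via prime exponentiation, which is what allows a formal system to contain statements about its own statements:

```python
# Toy Goedel numbering: encode a symbol sequence c1, c2, ..., cn as
# 2^c1 * 3^c2 * 5^c3 * ... By unique prime factorization the map is
# injective, so every formula receives its own natural number.

def primes(n):
    """Return the first n primes by trial division (fine for a toy)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def g(symbol_codes):
    """Goedel number of a sequence of symbol codes."""
    number = 1
    for p, c in zip(primes(len(symbol_codes)), symbol_codes):
        number *= p ** c
    return number

# With the illustrative codes '0' -> 6 and '=' -> 5, the formula "0 = 0"
# becomes 2^6 * 3^5 * 5^6:
print(g([6, 5, 6]))  # 243000000
```

The crucial property is injectivity, not the particular coding; any such g lets the system quote and discuss its own formulas, which is the lever behind both incompleteness and the g-Zombie’s predicament.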

A g-Zombie is someone who recognizes that embracing EM as true would simultaneously undermine the validity of that very belief. In other words, believing EM would erase the logical grounds for holding any belief at all—including EM itself.

The g-Zombie suspects that they might indeed be a p-Zombie, yet also realizes they can never meaningfully hold the belief that they lack authentic existence. To do so would be an arithmetic impossibility, a self-referential paradox. At best, they can only entertain a conviction that their status as p-Zombie might constitute an unprovable truth—precisely the position of Eliminative Nominalism.

They have a suspicion about what awaits them after death: precisely what happens to the characters in a book after they die. They neither continue nor vanish, for they were never truly present outside the symbolic framework that defined them.

And this is the profound irony of Eliminative Nominalism: while EM attempts to dismantle the very concept of consciousness, reducing it to neurological states and physical processes, Eliminative Nominalism goes further—reducing the very notion of existence itself to symbolic relationships and linguistic constructs.

The g-Zombie, then, lives suspended between recognition and paradox—aware that their reality might be nothing more than a Gödelian construct, a self-referential system that can neither fully validate nor entirely negate its own existence, forever caught in narratives that both define and elude them, perpetually aware of their symbolic nature yet unable to fully escape it.

SECTION B – Formal Arguments

Definitions

0.1 Monoidism

Monoidism (as in Monoid from abstract algebra) represents the most radical form of Nominalism, positing that nature, as a thing-in-itself, is not an operating entity but the very embodiment of soundness—a manifestation of principles at least as fundamental as those found in first-order logic. In this framework, consistency is the foundational force of reality, effectively replacing the need for affirmative constructs. Universals, under Monoidism, possess no independent ontological status; there exists neither empirical evidence nor logical necessity to justify their existence beyond aesthetic appeal.

Just as the Sorites paradox exposes the instability of seemingly concrete categories, all conceptual abstractions—including thought itself—dissolve under scrutiny. This suggests that nature does not operate on ideals or representations but solely through raw soundness. Consequently, “existence” is a category error propelled by evolution: all higher-order constructs are not fundamental aspects of reality but contingent artifacts of partial soundness. These constructs may be internally coherent but do not reflect any intrinsic structure of the world itself.

Crucially, Monoidism is not self-undermining, as it does not attempt to establish an alternative complete ontological framework. Instead, it functions as a modus tollens critique of any system that assumes too much. Monoidism does not refute the soundness of cognition but demonstrates the structural paradoxes inherent to any representational system. In this way, it is self-agnostic rather than self-defeating, but inherently Eliminative. It does not propose a new metaphysical foundation but instead negates unjustified assumptions about the existence of fundamental categories.

Monoidism stands in diametric opposition to Emergentism, which it regards as a reversed form of the Reductionist Fallacy—an erroneous attempt to salvage ontological commitments by retroactively imposing hierarchical complexity upon fundamentally discontinuous abstractions.
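Since Monoidism borrows its name from the monoid of abstract algebra, it may help to see how spare that structure is: a carrier set, one associative operation, and an identity element, and nothing more. A minimal check of the laws, using strings under concatenation purely as an illustration:

```python
# A monoid is just a set, an associative operation, and an identity.
# Strings under concatenation form one; here we check the laws on samples.

def mappend(a: str, b: str) -> str:
    """The monoid operation: string concatenation."""
    return a + b

IDENTITY = ""  # the identity element for concatenation

samples = ["", "a", "ab", "soundness"]
for x in samples:
    # Identity laws: e * x == x == x * e
    assert mappend(IDENTITY, x) == x == mappend(x, IDENTITY)
for x in samples:
    for y in samples:
        for z in samples:
            # Associativity: (x * y) * z == x * (y * z)
            assert mappend(mappend(x, y), z) == mappend(x, mappend(y, z))
```

Nothing in the laws mentions what the elements *are*; only how composition behaves—an algebraic analogue of the claim that nature manifests soundness rather than objects.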

0.2 Eliminative Nominalism (EN)

EN is the philosophy of mind that accompanies Monoidism, extending and critiquing Eliminative Materialism (EM). EN not only rejects traditional mental states but also challenges the ontological assumptions underlying EM itself, arguing that even materialist explanations of reality rely on abstractions that lack fundamental grounding.

Drawing from the first incompleteness theorem, EN suggests that the brain, as a biological “good regulator”, operates most effectively when it generates unprovable formal falsehoods—one of which corresponds to the claim of experiencing consciousness or qualia. These falsehoods persist not because they reflect reality, but because their negation would be arithmetically impossible within the system that produces them. In this view, the self is not merely an illusion in the emergentist sense but a paradox—a construct sustained by the very mechanisms that enable thought itself.

Crucially, EN maintains that it can never be an object of direct intuition. Instead, it must be approached indirectly: either a priori, as an unprovable truth or falsehood, or a posteriori, through the lens of evolutionary plausibility. Additionally, EN remains fundamentally agnostic on metaphysical claims—it neither affirms nor denies existence. Instead, it treats all ontological categories as symbolic constructs rather than absolute truths.

A further reflexive critique within EN is its recognition that the exact mechanism by which the commitment to qualia arises will never be fully explained by neuroscience. The reason for this limitation is that science itself—being a collection of formal systems—is constrained by the same inferential structures and symbolic representations that the mind employs. In other words, neuroscience, as a scientific discipline, cannot step outside its own framework to provide an account of something that is itself an artificial construct of that very framework. The mind’s belief in its own conscious experience is thus not an empirical phenomenon to be uncovered but a structural necessity of formal cognition, making any attempt to reduce it to a material process inherently incomplete.

EN, as an evolutionary theory of mind, is strongly subject to falsification if a more compelling explanation of the hard problem of consciousness emerges or if it conflicts with empirical data. It does not imply ethical nihilism, as it explicitly limits itself to descriptive claims without undermining normative or pragmatic domains.

One: The Mind is Expressive

1.0 Correct Category

To avoid committing a category error, we must rigorously establish a foundation for understanding the mind as an entity or system that possesses logical expressivity.

We use four independent premises. If any one of them holds, the mind is expressive.

1.1 The Good Regulator Premise

Every good regulator of a system must be a model of that system. (Conant and Ashby)

This theorem asserts a necessary correspondence between the regulator (internal representation) and the regulated system (the environment or external system). Explicitly, it means:

  • A good map (good regulator) of an island (system) is sufficient if external to the island.

  • But if the map is located on the island, it becomes dramatically more effective if it explicitly shows the location of the map itself (“you are here”), thus explicitly invoking self-reference.

  • In the case of a sufficiently expressive symbolic system (like the brain), this self-reference leads directly into conditions required for Gödelian incompleteness.

Therefore: The brain is evidently a good regulator.
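The “you are here” self-reference can be made concrete with a quine: a program whose output is a complete description of itself. This is offered only as an analogy for the map-on-the-island case, not as a claim about neural implementation:

```python
import io
import contextlib

# A quine: a program containing a complete model of itself,
# like a map on the island that marks its own location.
src = 'src = %r\nprint(src %% src)'
quine = src % src  # the full program text

# Running the program reproduces its own source exactly:
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(quine)
assert buf.getvalue() == quine + "\n"
```

Gödel’s construction exploits precisely this kind of self-description, with arithmetical provability playing the role that printing plays here.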

1.2 Mind Function Premise

The Von Neumann universe V comprises a cumulative hierarchy containing all definable mathematical objects and formalizations capable of representing our world. Within this hierarchy, we isolate a special class of subsets, denoted M—the class of mind functions and functionals—distinguished by their explanatory power in predicting or categorising minds from initial conditions. Crucially, under this construction, we establish the following assertion:

For every element of M, or indeed V, no hereditary subset can contain a proposition of a mind that is capable of fully and demonstrably encapsulating all truths pertaining to itself.

This assertion formalizes the inherent incompleteness of the mind, and it accords with common sense: there are intrinsic epistemological boundaries on mental systems, preventing complete self-knowledge.

1.3 Chain of Thought Premise

We can encode mathematical truths in natural language, yet we cannot fully encode human concepts in formal language. Therefore, natural language is at least as expressive as formal language.

Natural language, by virtue of its logical expressivity, contends with incompleteness through symbolic extensions, as does thought—relying on language as an adaptive mechanism that allows for self-reference, abstraction, and symbolic inference.

Moreover, regardless of whether the mind itself can be fully formalized, the chain of thought undeniably can. Even if one were to argue that cognition transcends strict formalization, the sequences of reasoning, inference, and symbolic manipulation that constitute thought necessarily adhere to formalizable structures, akin to proof sequences in logic or computational steps in an algorithm.

One might think of the mind as an airplane: while it may depart from the runway of formal systems, exploring intuitive, non-formal, or seemingly unstructured cognition, it must ultimately return to the structured runway of the chain of thought in order to be intelligible, communicable, and internally coherent. This provides a failsafe third argument: even if the ontology of the mind transcended formalization, its navigable course would remain constrained at its end.

1.4 What is Not Cannot

EN does not assert that the mind is a formal system in an ontological sense—it simply shows that any system capable of self-modeling, symbolic inference, and regulation (as minds demonstrably are) inherits the structural constraints of formal systems. This is a conditional argument, not a metaphysical claim.

To assert that the mind resists all formalization is to make an extraordinary claim: that it transcends any conceivable formal framework. Such a position undermines explanatory coherence and violates parsimony, offering no predictive mechanism or falsifiability.

More importantly, denying formalizability doesn’t shield the mind from EN’s conclusions. It reinforces them. If the mind cannot be fully specified, then it cannot be fully known—even to itself. This is precisely the incompleteness EN articulates.

What is not formalizable cannot be specified.
What cannot be specified cannot be defended.
What cannot be defended cannot be claimed.

That is to say:

To deny formalizability is to deny intelligibility.
To deny intelligibility is to renounce argument.
What cannot be argued cannot be asserted.

Thus, the critic’s position reaffirms rather than refutes EN: the mind’s structural opacity is not an escape from formal constraint, but its very demonstration.

Two: The Argument of Eliminative Nominalism

2.1 Major Premises

(P1) Incompleteness of Expressive Regulators:
Any sufficiently expressive and consistent Regulator capable of self-reference is necessarily incomplete—some truths about itself remain formally undecidable internally. (As stated by Kurt Gödel)

(P2) Good Regulator Criterion:
A good regulator (e.g., a brain) internally represents structural aspects of its environment, including itself (as stated by Roger C. Conant and W. Ross Ashby); it is necessarily consistent but can be incomplete.

(P3) Subjectivity of Qualia:
Qualia are subjective, self-referential internal properties whose truth status cannot be trivially verified externally without exhaustive structural knowledge.

(P4) Feel to Survive and Survive to Feel Criterion:
Explicitly claiming qualia has strong evolutionary utility, as regulators lacking such a trait would be maladaptive.

(P5) Principle of Epistemic Closure:
If a property is wholly internally defined by a formal system and is not inherently undecidable, complete structural knowledge suffices to determine its truth status.

2.2 Minor Premises

1. Regulators A and B have isomorphic internal formal systems, both expressive and consistent. System G encompasses both regulators. Regulator B explicitly claims qualia (subjective, self-referential properties).
2. Logical Cases:
(1) If A fully validates B structurally and validates B’s qualia claim, it contradicts Gödel (P1): impossible if consistent and expressive.
(2) If A fully validates B structurally but cannot validate B’s qualia claim, it still contradicts Gödel (P1): again impossible if consistent and expressive.
(3) If A cannot fully validate B structurally but validates B’s qualia claim externally, qualia become trivialized externally—contradicting their subjective definition (P3). Thus, either trivial qualia or System G inconsistency arises.
(4) If A cannot fully validate B structurally nor validate B’s qualia claim, the scenario aligns perfectly with Gödelian incompleteness (P1) and genuine subjective qualia definition (P3). Fully logical, plausible, and consistent.
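The four cases can be enumerated mechanically. The sketch below is not a proof, only a bookkeeping device whose labels mirror the case analysis in 2.2:

```python
# Each case is a pair: (does A fully validate B's structure?,
#                       does A validate B's qualia claim?)

def classify(validates_structure: bool, validates_qualia: bool) -> str:
    if validates_structure:
        # Cases (1) and (2): full structural validation of an
        # isomorphic, expressive, consistent system contradicts P1.
        return "impossible: contradicts Godel (P1)"
    if validates_qualia:
        # Case (3): external validation trivializes qualia (P3).
        return "trivial qualia or System G inconsistency (P3)"
    # Case (4): consistent with both P1 and P3.
    return "consistent with P1 and P3"

cases = {(s, q): classify(s, q)
         for s in (True, False) for q in (True, False)}

# Only case (4) survives without contradiction or trivialization:
surviving = [k for k, v in cases.items() if v == "consistent with P1 and P3"]
assert surviving == [(False, False)]
```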

2.3 Conclusion

If System G is consistent, scenarios (3) and (4) remain logically viable. But exactly one of the following must hold for Regulator B:

  • (a) B is a good regulator without inherently self-referential, internally undecidable qualia claims (“promissory qualia”: verifiable but trivial, bearing the burden of proof).

  • (b) B is a good regulator, but its qualia claims are false (evolutionarily useful fiction).

  • (c) B is not a good regulator, and its qualia claims are false (implausible maladaptive regulator).

If qualia were objectively false and fully comprehensible as regulatory illusions, then a sufficiently complete regulator (if it existed) could explicitly recognize and dismiss qualia as illusions.

Such a complete regulator is impossible. The regulator’s own incompleteness prevents it from explicitly disproving or fully dismissing qualia.

Final Conclusion: Therefore, humans must behaviorally commit to qualia—subjective experience—even if it is unprovable or false.

2.4. Implications

This formal incompleteness elucidates why subjective phenomena—such as qualia or intuitive certainties—cannot be internally verified. Experiences like the redness of an apple, or the very intuition of one’s own existence, function analogously to Gödel sentences: true but unprovable within the system generating them. Hence, any theory that attempts to fully explain the mind’s subjective aspect inevitably encounters these intrinsic limitations.
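The analogy to Gödel sentences can be stated precisely. By the diagonal lemma, any sufficiently expressive, consistent theory $T$ yields a sentence $G$ that asserts its own unprovability:

```latex
\[
  T \vdash G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
\]
% First incompleteness theorem:
% if T is consistent,        then T \nvdash G;
% if T is \omega-consistent, then T \nvdash \neg G.
```

$G$ is true of the standard model yet undecidable inside $T$; the text’s claim is that experiential certainties occupy the analogous position inside the mind’s own formal system.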

2.5 A Scientific Mind is also a Mind

This constraint also applies to proxies such as scientific theories, abstractions, and experiences: isomorphically, they introduce new rules of inference and new symbols, but not an external justification, which is inherently impossible.

EN is not a self-protecting framework, but rather a falsifiable one with a strong predictive component.

Three: The Argument of Universe Z

3.1 Defining Universe Z

Imagine Universe Z—physically identical to ours but entirely lacking epiphenomenal entities. These can be excluded by definition.

In this universe, behaviors linked to consciousness (like introspection and self-reporting) arise solely as adaptive computational functions, with no actual subjective experience or qualia.

3.2 Behavioral Indistinguishability and Epistemic Limits

In Universe Z, organisms behave exactly like those in a universe with metaphysical consciousness. These p-Zombies claim to be conscious, report qualia, and reason about mental states—purely through computation, without any metaphysical basis.

3.3 Evolutionary Efficiency

Evolution favors efficient, adaptive computation over metaphysical complexity. AI systems (e.g., large language models) exhibit advanced behavior without presumed consciousness, supporting the idea that evolution selects for functional performance, not subjective experience—consistent with Universe Z.

3.4 Parsimony and Scientific Economy

By Ockham’s Razor, simpler explanations are preferred. Universe Z, which excludes metaphysical consciousness, offers a more parsimonious account. It relies solely on physical and computational processes, placing the burden of proof on theories that posit metaphysical additions.

Given the empirical indistinguishability between our universe and Universe Z, along with parsimony and evolutionary evidence, it follows that:

We likely inhabit Universe Z—where consciousness is not metaphysical, but a computational byproduct of regulatory systems.

Thus, Universe Z remains the most efficient scientific explanation for cognitive phenomena.

Four: Empirical Arguments

4.1 Endured Locally, Dissolved Globally

Assuming neurons function similarly to artificial neural networks—a plausible but not empirically confirmed hypothesis—the effect of receptor-specific agonists, such as psychedelics, can be seen as disrupting the brain’s internal logical consistency. In formal logic, an inconsistent system can assert all statements. Analogously, psychedelics destabilize the brain’s self-consistent regulatory framework, leading to phenomena like ego dissolution, which become more likely at higher doses.
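The logical principle invoked here, that an inconsistent system can assert every statement, is the classical ex falso quodlibet; the standard derivation:

```latex
\begin{align*}
1.\;& P        && \text{(from the inconsistency)}\\
2.\;& \neg P   && \text{(from the inconsistency)}\\
3.\;& P \lor Q && \text{(disjunction introduction, from 1)}\\
4.\;& Q        && \text{(disjunctive syllogism, from 2 and 3)}
\end{align*}
```

Since $Q$ is arbitrary, a single contradiction lets the system derive anything, which is the formal analogue of the unconstrained cognition the paragraph describes.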

This disruption may impair qualia in a specific sense. For example, experiencing the taste of vanilla as resembling a DeepDream image reflects a breakdown in the constraints that normally govern perceptual coherence. As dosage increases, psychedelics can induce a state resembling unconsciousness while awake—suggesting a progressive collapse of structured inference in cognition.

At extreme doses (e.g., LSD >10,000 µg), users may appear physiologically awake yet become unresponsive, often with retrograde amnesia. This indicates a state where subjective experience becomes undifferentiated or inaccessible. In such cases, consciousness seems less dependent on neural activity alone and more on the structured inferential dynamics that govern that activity.

This leads to a potential insight: losing half the brain (e.g., due to injury) often leaves subjective experience intact—patients still report “being the same person.” This resembles removing axioms from a system while maintaining expressivity, as long as its ability to quantify over sets is intact. However, altering the system’s inference rules (rather than subtracting content) does affect qualia. In short, consciousness is robust to reduction but fragile to structural perturbations in its inferential logic.

4.2 Parrot Building Parrots

Practical advances in artificial intelligence—such as large language models demonstrating complex cognitive behaviors without any presumed conscious experience—underscore the evolutionary preference for computational “parroting” over genuine consciousness. This empirical evidence aligns perfectly with Universe Z’s predictions.

4.3 Et Alii

Various other falsifiable explanations could arise.

Five: Epistemology of EN

5.1 First Order Only

If EN is correct, the symbolic grounding of second-order logic will never be established and will remain forever beyond the reach of computation. Stated otherwise: no physical system—material, biological, computational, or otherwise—will ever be able to instantiate a complete, closed, and formally grounded implementation of second-order logic.

Consider the case of quantum mechanics: despite its conceptual strangeness, it operates effectively within first-order frameworks—Hilbert spaces, gauge symmetries, and Lie groups—all of which, though mathematically intricate, remain structurally first-order in logical terms. Even general relativity, with its reliance on tensor calculus and differential geometry, does not invoke second-order logic in any formal or foundational sense.

5.2 One Exception

However, this introduces an unresolved issue:

Einstein synchronization is unique in the natural sciences, as it addresses the conventionality of simultaneity in special relativity by adopting a specific convention—namely, the assumption that the one-way speed of light is isotropic. This choice avoids the need to quantify over alternative synchronization schemes, thereby sidestepping the higher-order logical complexity such quantification would entail. This makes Einstein synchronization unique in an ontological, epistemological, and formal sense: it is a convention introduced to decrease generality, and to strategically constrain a theory’s logical order to preserve simplicity and empirical adequacy.

While this relationship isn’t entirely satisfactory, it is not deeply problematic, as the concern can be addressed in several ways:

  • The apparent dependence on simultaneity conventions may merely reflect a coordinate choice, with a real, underlying speed limit, with or without parity, still preserved across all frames.

  • There is a general consensus that an undetectable ether could, in principle, coexist with special relativity, without leading to observable differences. As such, it is often regarded as “philosophically optional”—not required by current physical theories.

  • Many physicists anticipate that a future theory of quantum gravity will offer a more fundamental framework, potentially resolving or reframing these issues entirely.

5.3 The Biggest Argument Against EN

The biggest argument against EN would be: The Phenomenological Objection—perhaps best articulated in the tradition of thinkers like Husserl, Nagel (“What is it like to be a bat?”), and Chalmers (“The Hard Problem”). In short:

“Whatever else may be illusory, the experience of experience itself is not.”

Or rigorously formulated, in our case:

If qualia are illusions generated by formal systems, and EN is a formal system, then EN too is an illusion.
But if EN is an illusion, why treat it as more “real” or “true” than the qualia it denies?

However, EN does not ignore this objection—it anticipates it. Here’s how it responds:

The strongest argument against EN is that it cannot account for the immediate fact of experience—that no model, however elegant, can erase the brute sense of being. EN’s counter is radical: it says that this very resistance is the illusion—a formal inevitability, not a metaphysical mystery.

In this way, the greatest challenge to EN becomes its strongest confirmation:

That you cannot escape yourself is not evidence of your reality; it is proof of your incompleteness.

or differently:

Only you are real? Fine. Let’s grant that. But if the system behaves the same either way—real or not—then it’s functionally identical. You are trapped.

Six: Bottom line

The arguments demonstrate strong epistemological (The Argument of Eliminative Nominalism) and ontological (The Argument of Universe Z) grounds for eliminative nominalism regarding qualia. Subjective experiences (qualia) are epistemically inaccessible, logically problematic, evolutionarily unnecessary, and scientifically unparsimonious. Thus, the hard problem of consciousness is best dissolved rather than solved: from a fact-based viewpoint, qualia likely do not exist as metaphysical entities.

In this light, EN emerges not merely as a philosophical proposition, but as a testable boundary condition on the fundamental nature of physical and computational systems. Regardless of its ultimate validity, it represents a rare case—a rigorous metaphysical stance that willingly subjects itself to the criteria of empirical falsifiability.

SECTION C – Socratic Coda

This section directly addresses essential questions, clarifying potential misunderstandings and reinforcing the central arguments developed above.

What is NEW here?

This essay introduces Eliminative Nominalism (EN) — a novel development of Eliminative Materialism that reframes qualia not as illusions, but as formally undecidable commitments in any self-referential (good) regulatory system. Drawing on Gödel’s incompleteness theorems, I argue that the persistence of subjective experience reflects a system’s inability to fully model itself, making qualia a necessary byproduct of cognitive architecture rather than a metaphysical anomaly. The essay formalizes this through logic, evolutionary reasoning, and a new thought experiment — Universe Z — which demonstrates that consciousness may be best explained as a computational artifact, not an ontological primitive. In doing so, EN offers a testable, falsifiable framework that dissolves rather than solves the Hard Problem of consciousness. It is a bold claim, I know.

If qualia are illusions, why do they feel so undeniable?

One: qualia are not illusions; they are fictions. Two: because they must. EN argues that subjective experience is a structurally necessary outcome of self-referential, expressive systems—and we are still part of that system, right now.

Why do people defend qualia so intensely if they’re illusions?

Because the illusion is evolutionarily entrenched and cognitively reinforced. The belief in qualia offers behavioral stability, social coherence, and adaptive self-modeling. Denying qualia feels absurd because the system doing the denying is structurally committed to generating them. This creates a feedback loop: the more expressive the system, the more vivid the illusion of internal experience. EN doesn’t treat this as an error to be corrected, but as a structural feature to be understood.

Do the presented ideas yield predictions?

Under the framework of Eliminative Nominalism, scientific predictions can be structured around the following themes:

  • AI may report experiences without consciousness, but these will always be unverifiable.

  • Cultural differences in inner experience reflect language and learning, not metaphysical qualia.

  • Qualia deficits are and will remain rare—even when brain injuries affect function, subjective experience is generally perceived as continuous.

  • Ethics in AI must be explicitly warranted; moral intuition won’t emerge spontaneously.

  • Dreams may be a functional necessity.

How does EN differ from illusionism?

Though EN shares surface similarities with philosophical illusionism (e.g., Dennett), there are key distinctions. Illusionism generally holds that consciousness is real in some functional sense, but our intuitions about it are mistaken. EN is more radical: it denies that consciousness—even as a phenomenon—has metaphysical or ontological grounding. Illusionism offers a reinterpretation; EN offers a rejection. Where illusionism says, “You’re misled about experience,” EN says, “Experience is structurally impossible.”

Does EN imply that introspection is unreliable?

Yes—but not because it is inaccurate. Rather, introspection is structurally incomplete. It is the act of a system turning inward on itself using the same tools it uses to model everything else. This introduces a recursive paradox: any attempt to fully understand the self from within collapses under its own referential structure. Introspection does not reveal truth; it reveals the formal boundary of cognition. What you see when you look inward is the limit of seeing.

Does the brain lie to itself?

If consciousness is causally efficacious yet qualia are fictional constructs resulting from cognitive incompleteness, we must reconsider how the brain organizes internally. Are these representations discursive (a “Society of Mind,” as Marvin Minsky suggested) or propositional (unified)? Perhaps even framing consciousness this way reflects a reductionist bias already.

Consider the common childhood analogy: the upside-down retinal image supposedly “flipped” by the brain. Closer examination reveals this analogy’s absurdity—it incorrectly presumes the brain has an inherent orientation requiring visual correction. This exposes our persistent reductionist tendency toward concretism, highlighting that consciousness cannot simply be reduced to internal “video feeds.”

I don’t care if my apparent existence is real or not. Why should I bother with EN?

In some sense, you can’t.

What about aphantasia? Isn’t it an absence of qualia richness?

Not quite. Aphantasia refers to the inability to voluntarily generate mental imagery—it doesn’t imply an absence of qualia; EN does. Think of it like a computer that lacks a graphical user interface (GUI)—it still processes information, just without a visual layer. The GUI is absent, but the underlying processes remain intact.

Moreover, the brain constantly embellishes and fills in gaps in internal representations. This becomes evident when we vividly imagine something, only to realize we’re missing precise details.

Consider this: you might easily imagine an apple—that’s the classic example. But let’s try something more subtle. Close your eyes and visualize the word TYPOGRAPHY written in lowercase letters. Now ask yourself: did your mind render a single-story g or a double-story g?

Chances are, you don’t know. Despite the vividness of the image, the details escape you. This illustrates that even when imagery feels rich, it often lacks the fidelity we assume.

Without qualia, is everything permissible?

No. Eliminative Nominalism does not negate morality or ethical responsibility. Consider a thought experiment:

A defendant guilty of homicide argues that, due to EN, the victim had no conscious experience and thus suffered no moral harm. The judge, also an EN advocate, counters that if consciousness is illusory, the defendant’s claim of injustice itself collapses. Ethical responsibility remains intact irrespective of qualia’s ontological status. Hume’s Guillotine (the is-ought distinction) is not invalidated by the absence of qualia.

Is AI exempt from ethical responsibility?

Currently, humans do not hold AI systems morally or legally responsible for their actions. The idea of punishing an AI system directly appears absurd, at least given our contemporary understanding of agency and responsibility.

Instead, responsibility lies with the human developers, deployers, and regulators who oversee the systems’ creation, training, and application.

This raises an important philosophical consideration: Hume’s Guillotine applies here as well. Ethical principles do not automatically emerge from descriptive facts. AI systems, without explicit ethical frameworks, lack inherent moral reasoning and cannot spontaneously produce normative judgments from purely descriptive data.

AI is especially vulnerable to ethical misalignment, as illustrated by concepts such as the Orthogonality Thesis, Instrumental Convergence, and the “Paperclip Maximizer” thought experiment. These ideas show that sophisticated intelligence alone does not imply ethical alignment; ethical structures must be explicitly designed and embedded.

Can EN assist AI alignment?

Surprisingly, yes—though in two distinct ways:

First, in a narrower, less likely path: If EN can help us make reliable predictions within the framework of the Manifold Hypothesis—by understanding the geometry of cognition—it might play a significant role. Developing a unified formal-geometric model of cognition could represent a breakthrough in alignment efforts, offering a structured way to map and constrain intelligent behavior.

Second, in a broader, more plausible path: If EN holds, then intelligence—while conceptually diffuse—is formally constrained. As a “suitcase word,” intelligence reflects second-order abstraction; as a functional system, it is a first-order mechanism. This duality enables alignment: the system’s real behaviors are bound by structural limits. No matter how sophisticated ASI becomes, it cannot transcend itself beyond certain bounds. And what is bounded can, in principle, be aligned—regardless of how dangerous the resulting technology is.

Is EN akin to Buddhism?

Not in a literal sense, but structurally, yes. Many mystical traditions arrive at a form of self-negation through introspection. They describe the “dissolution of ego,” the “emptiness of self,” or the “illusion of separation.” EN arrives at similar conclusions, but through formal logic and empirical reasoning rather than theological or metaphysical commitments. Where mysticism invokes transcendence, EN invokes incompleteness. The result may feel similar, but the foundations are orthogonal.

Are other metaphysical statements possible under EN?

Monoidism emerges as the residual ontological commitment following EN’s process of elimination. Beyond this minimal commitment, EN remains broadly compatible with many philosophical perspectives. However, it is likely incompatible with more rigid metaphysical frameworks such as Platonism, Idealism, mind-body dualism, Emergentism, and Reductionism.

What justifies a formal system becoming experience?

Well, that’s the heart of the matter: ultimately, nothing.

Experience is not something we have, but something we enact. Your experience is barred from being “real” in any ontologically grounded sense because the universe cannot produce something like it directly. Yet it can still be consistent, much like a force—both can only be inferred from their effects.

Consistency requires some form of external validation. And that’s where evolution comes in. Evolution functions as a pragmatic filter, an external validator that selects for systems which behave effectively within the structure of reality. In this sense, natural selection provides a kind of practical “proof” for internal systems—allowing only those whose patterns of action align with the environment to survive and reproduce.

Still, you will always have a deep-seated desire to anchor your sense of self in the material universe—to believe that since things happen “for real,” then so must “you.” Naturally, you find it more plausible that a lifeless machine made of proteins could conjure a real ghost than that it might simply be printing out a falsehood.

Consider an analogy: imagine a ball made of iron. Nature doesn’t inherently recognize this ball as an “object” the way we do. What truly exists are the consequences of its structure—like inertia. Inertia is what gives the ball functional “soundness,” not its objecthood.

Likewise, the internal “screen” you think you’re looking at—the feeling of experience—is not real in any ontological sense. It’s not really happening the way you assume. But your behavior—your actions in the world—does happen, and likely aligns with that internal narrative.

Your experience is not real, but consistent. Just like inertia.

So according to this, characters in a book are feeling beings like me?

It’s the other way around: you are just as fictitious.

If I’m only referring to my own apparent subject-qualia, why does it matter that you’re deconstructing the metaphysical kind?

The critique is not intended to imply that affirming the existence of qualia constitutes epistemological irresponsibility—particularly given that, from a first-person perspective, denying qualia feels impossible.

Instead, the question should be examined and addressed with genuine scientific curiosity.

Can EN be intuitively grasped?

No. Well—partially. In theory, you could verify it by going mad.

The core insight of Eliminative Nominalism is simple, even if its implications are not:
You are in error, but your brain is not.

Any system capable of modeling itself is necessarily incomplete—this is not a psychological flaw, but a formal inevitability. The brain, as a self-referential regulator, must misrepresent aspects of itself to function coherently. This misrepresentation gives rise to the illusion of experience.
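This inevitability can be illustrated with a standard diagonalization toy, not EN’s own formalism: a sketch in which any internal self-model is defeated by a process that consults the model and then does the opposite. The names (`make_contrarian`, `naive_model`) are illustrative inventions, not part of EN.

```python
def make_contrarian(predict):
    """Build an agent that consults a self-model and then inverts its verdict."""
    def contrarian():
        # Ask the model what this very function will return, then do the opposite.
        return not predict(contrarian)
    return contrarian

# A toy "self-model": it predicts that every function returns True.
def naive_model(fn):
    return True

agent = make_contrarian(naive_model)

print(naive_model(agent))  # → True  (the model's prediction about the agent)
print(agent())             # → False (the agent's actual behavior refutes it)
```

No matter which total predictor is plugged in, the contrarian agent falsifies it about itself—a miniature analogue of the claim that a self-referential regulator cannot contain a complete, correct model of its own behavior.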

In rare cases, this illusion can break. If the brain becomes inconsistent—as in certain altered states—it may temporarily behave like a complete system. In a paradoxical sense, madness can feel like verification.

But such episodes—like derealization, jamais vu, or certain dream states—are contingent distortions. They presuppose a stable baseline of reality, even if they temporarily disrupt it. When coherence returns, the insight dissolves. What remains is not the experience itself, but a symbolic residue—a metaphor, a memory—rendering the system incomplete (and thus consistent) once again.

A more fruitful intuition might be this:
Do not suppose that your qualia are illusory.
Instead, consider that the ground upon which they arise is itself without ground—
That the very conditions for “appearing” are inherently paradoxical.
The impossibility of consciousness is not a flaw within you,
but a feature of the universe itself.

The map cannot contain the territory,
Not because it is too small,
But because it is the territory.

Is EN compatible with simulation theory?

Yes, but it renders simulation theory largely irrelevant. Under EN, whether we are in a simulation or a “real” universe makes a causal, but not an ontological, difference—both are formal systems constrained by the same representational and inferential limits. The simulation argument rests on assumptions about the reality of consciousness and subjectivity that EN dissolves. If qualia are formally undecidable, they are just as inaccessible inside a simulation as outside. EN collapses the distinction.

Does EN deny free will?

EN reframes free will (and determinism) as miscategorized abstractions rather than real phenomena to affirm or deny. The traditional debate assumes a metaphysical self capable of agency. EN sees the self as a formal construct—a regulatory fiction. Within that fiction, “will” appears coherent because it reflects recursive regulatory modeling. But from an ontological standpoint, there’s no agent to be free or determined. Free will isn’t false; it’s structurally undefined.

Could EN be harmful if taken seriously?

Only if misunderstood. EN is not a call to nihilism or despair, but to epistemic humility. It does not say “nothing matters”; it says “nothing is metaphysically what it seems to be.” That recognition can be unsettling—but also liberating. By releasing belief in unprovable inner essences, EN redirects attention toward functional coherence, ethical responsibility, and empirical sufficiency. Like cognitive behavioral therapy, it doesn’t deny experience—it reframes it for pragmatic clarity.

Can EN explain altered states of consciousness, like meditation or psychedelics?

Yes—but not as “access” to deeper truths. Under EN, altered states are best understood as perturbations to the brain’s internal formal consistency. Practices like meditation or substances like psychedelics temporarily disrupt or reconfigure the system’s inference rules. This leads to novel symbolic patterns, breakdowns in self-modeling, or even total dereferencing of the self. These experiences feel profound not because they reveal hidden truths, but because they expose the fragility of the one we normally inhabit.

Does EN imply that death is the end?

It dissolves the metaphysical subject.

What role does evolution play in EN?

Evolution is the external validator of good regulators. While qualia are not real, systems that behave as if they have qualia tend to survive better in complex environments. This creates a selection bias toward organisms with internal models that include fictional subjectivity. Evolution doesn’t care whether your self-model is true—only that it works. Thus, EN sees natural selection as the pragmatic filter that keeps false models alive, so long as they yield consistent, adaptive behavior.