I, G(Zombie)
There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination.
— Daniel Dennett, Darwin’s Dangerous Idea (1995)
INTRODUCTION
Preface
This document is shared publicly as an initial draft to invite constructive feedback, insights, and reflections.
Abstract
This essay introduces Eliminative Nominalism (EN), a novel position in the philosophy of mind that extends and critiques Eliminative Materialism by rejecting not only mental states, but also the ontological assumptions embedded in both physicalist and emergentist accounts of consciousness. Drawing on proof theory, cybernetics, and evolutionary plausibility, EN argues that any sufficiently expressive cognitive system—such as the human brain—must generate internal propositions that are arithmetically undecidable. These undecidable structures function as evolutionarily advantageous analogues of Gödel sentences, inverted into the belief in raw subjective experience (qualia), despite being formally unprovable within the system itself.
EN seeks to avoid the contradictions inherent in Dennettian Third-Person Absolutism—the methodological stance that privileges an external, objective perspective when explaining phenomena accessible only from the first-person perspective. Rather than explaining subjective illusions away in third-person terms, EN proposes that they arise as formal consequences of self-referential modeling, constrained by the expressive limits of second-order logic.
To illustrate this framework, I introduce the concept of the g-Zombie: an agent structurally compelled to assert the existence of qualia—not because such entities exist, but because self-referential symbolic systems generate undecidable propositions that are internally misclassified as direct experiential givens. On this view, consciousness is not an ontological primitive, but an evolutionarily stabilized artifact of meta-cognitive self-modeling under formal constraint.
This is further developed through the thought experiment of Universe Z: a world functionally identical to ours, yet inhabited by agents lacking phenomenal consciousness, who nonetheless behave and report as if they possess it. Universe Z models how selection favors cognitively frugal architectures that simulate introspection and self-report not due to intrinsic phenomenology, but because such outputs are computationally efficient and behaviorally adaptive. Thus, EN reframes the Hard Problem of Consciousness not as a metaphysical puzzle, but as a byproduct of symbolic incompleteness and evolutionary economy, offering a formally grounded, falsifiable alternative to dualist, emergentist, and standard physicalist accounts of mind.
The key move EN makes—and where it departs from both physicalism and computationalism—is that it does not ask, “Why should a physical process feel like anything from the inside?” It asks, “Why must a physical system come to believe it feels something from the inside in order to function?” The answer: because a self-regulating, self-modeling system needs to track and report on its internal states without access to its full causal substrate. It does this by approximating symbolic placeholders—undecidable internal propositions—which it treats as felt experience. In order to say “I am in pain,” the system must first commit to the belief that there is something it is like to be in pain. The report of interiority is not a byproduct; it is the enabling fiction that lets the system tell itself a coherent story across incomplete representations.
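The Gödelian analogue invoked above can be stated schematically. The following is a standard sketch from proof theory (the diagonal lemma and the first incompleteness theorem), offered only as background for the analogy; the mapping to qualia-reports in the final comment is this essay's claim, not part of the theorem:

```latex
% Diagonal lemma: for any consistent, effectively axiomatized theory F
% extending basic arithmetic, with provability predicate Prov_F, there is
% a sentence G_F such that
F \vdash \; G_F \leftrightarrow \neg\,\mathrm{Prov}_F\!\left(\ulcorner G_F \urcorner\right)
% First incompleteness theorem:
\text{if } F \text{ is consistent, then } F \nvdash G_F;
\quad \text{if } F \text{ is } \omega\text{-consistent, then } F \nvdash \neg G_F
% EN's analogy: the self-model's report ``there is something it is like''
% plays the role of G_F: asserted from within, undecidable from within.
```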
Author’s Advice
Please consider the following advice:
For skeptics: To get the gist in just a few minutes, go to Section B and read: 1.0, 1.1, 1.4, 2.2, 2.3, 2.5, 3.1, 3.3, 3.4, 5.1, 5.2, and 5.3.
The essay truly gains momentum in Section A, at the chapter “From Church to Churchland and Back.”
For those unfamiliar with the Philosophy of Mind, start with Section D for foundational concepts before returning to Section A.
For a concise and structured analytical presentation, proceed to Section B.
About the Essay
This essay aims to rigorously articulate and develop a more nuanced and sophisticated version of an eliminative position—one whose philosophical foundations have often been misunderstood, oversimplified, or prematurely dismissed. Although eliminativism holds significant potential within the philosophy of mind and cognitive science, it has been largely marginalized, leaving its explanatory power both underappreciated and insufficiently explored. This neglect has not only limited deeper philosophical engagement but has also obscured the possibility that eliminativism might offer a compelling framework for understanding consciousness, cognition, and subjective experience.
In addressing this oversight, the discussion that follows invites readers to approach the arguments with intellectual openness and critical reflection—setting aside, at least temporarily, any preconceptions or discomfort that may arise when confronting ideas that challenge deeply held intuitions, yet may ultimately provide a clearer understanding of the nature of mind than many traditional accounts.
Section A: Musings
An exploratory prelude offering interpretive reflections and philosophical context. Rather than asserting conclusions, it aims to provoke curiosity and set the stage for deeper inquiry.
Section B: Key Arguments
At the analytical core of this essay lies a clear and rigorous exposition of the central thesis. This section systematically develops its foundational claims through careful argumentation, conceptual analysis, and engagement with contemporary theoretical frameworks, establishing it not merely as a provocative stance, but as a coherent and deeply reasoned account of the mind.
Section C: Socratic Coda
A dialogical examination of objections and alternative perspectives. Through a Socratic format, this section engages critique, deepens reflection, and explores the tensions within and around eliminativism.
Concluding the Introduction
At its most demanding, philosophy compels us to confront ideas that challenge our deepest assumptions. This essay invites readers to engage with eliminativism not as a detached intellectual exercise, but as a sustained inquiry into the limits of belief, consciousness, and selfhood. Whether one ultimately affirms or rejects its conclusions, the hope is that the exploration provokes meaningful reflection, deepens understanding, and perhaps reshapes how we think about thinking itself.
Finally, it must be declared that large language model (LLM) technology was employed in two auxiliary roles: first, as a linguistic refinement tool to enhance clarity and coherence; second, outside the body of the work, as a conceptual aid—used to test arguments, consider counterpoints, and sharpen theoretical formulations. Nevertheless, the core ideas, arguments, and philosophical reasoning remain entirely human-authored, reflecting both independent critical thought and engagement with prior scholarship.
Feedback and further discussion welcome: https://x.com/Milan_Rosko
SECTION A – Musings
Chapter One: The Tragedy
Death Is Weird but Consciousness Is Weirder
At its heart, we face a dilemma that captures the paradox of a universe so intricately composed, so profoundly mesmerizing, that the very medium on which its poem is written—matter itself—appears to have absorbed the essence of the verse it bears. And that poem, unmistakably, is you—or more precisely, every version of you that has ever been, or ever will be.
Alas, the Hard Problem of Consciousness:
Quote
Consciousness is the biggest mystery. It may be the largest outstanding obstacle in our quest for a scientific understanding of the universe.
— David John Chalmers, The Conscious Mind: In Search of a Fundamental Theory (1996)
The scientifically inclined might already be skeptical of this notion. Before long, questions such as “Will a machine ever achieve consciousness?” ceased to provoke awe or wonder. Instead, these questions evolved into a more rigorous inquiry: “Before contemplating whether consciousness can arise in our artificial systems, shouldn’t we first devise a reliable method for detecting consciousness at all?”
Some might have ventured further still, questioning whether uncertainty about one’s own consciousness could even pose an evolutionary disadvantage, given that successful reproduction tends to favor individuals who take their own subjective existence for granted.
At this juncture, we encounter Eliminative Materialism—or EM, as it will henceforth be called—the theory that many, if not most or even all, mental states simply do not exist.
Zero-Thought Experiments
If one were to undertake the uncompromising task of resolving every paradox concerning subjective experience, EM would gladly volunteer—not as the angel that banishes evil, but as the devil that, having done so, replaces it with even darker demons.
Exhibit
You find yourself alone in a desert, surrounded by an endless assortment of tools: gears, straps, motors, wires, switches, and batteries. You have all the time in the world to complete the following task: find a conceivable way to arrange these components so that the resulting structure experiences qualia—the raw, subjective texture of personal existence.
If one answers “it is impossible,” then how is it that you, an assortment of similarly inert components made of proteins, do feel?
Suppose instead that you succeed: how can you be sure?
Finally, having built a physical structure, you transcribe the perfect set of instructions onto a page: a blueprint so precise that, if followed, it would recreate a mind capable of thought and feeling. Does the paper itself now contain the experience of consciousness? Does it have to move? Does it have to be made out of gears, or of tissue?
Or does the mind emerge only when someone—sufficiently sophisticated—reads and understands the instructions? And if so, does their consciousness then depend on “another” interpreter? A chain of minds reading minds, trapped in a never-ending loop, each one requiring another to bring it to life? The universe, maybe? If we are the universe observing itself, who is observing itself through the universe?
A naïve approach might lead one to reason: Since consciousness is “emergent,” we must simply achieve an arrangement of components in the desert that cannot be reduced to simpler parts—a difficult task, certainly, but perhaps not impossible.
From this perspective, one might hastily assume that EM dismisses consciousness simply because it reduces cognition to its constituent parts, rejecting any perceived unity of selfhood as a mere mirage—a fleeting illusion conjured by a complex but ultimately reducible system.
This, however, is a misunderstanding.
Forget the desert. Forget the arrangement of gears and wires. The problem does not begin with the assembly of components. It begins with the one doing the assembling.
What the Terminator Really Sees
This question warrants a second look—this time through the lens of popular culture. In The Terminator, we often see the world through the machine’s perspective: a red-tinged overlay of cascading data, a synthetic gaze parsing its environment with cold precision. But this raises an unsettling question: Who—or what—actually experiences that view?
Is there an AI inside the AI, or is this merely a fallacy that invites us to project a mind where none exists?
We tend to think of software in terms of symbols—interfaces, schematics, abstractions. But at its foundation, software is nothing more than a structured cascade of logical operations, instantiated through physical circuits that obey the strict syntax of first-order logic. The symbols we impose—command lines, graphical interfaces—are not intrinsic to the system but artifacts of human design, conveniences draped over the raw machinery of computation. Beneath that veneer, the system operates without interpretation, without introspection—because it must. It does not compute for a user, nor even for itself, but as a direct consequence of its physical constraints.
And yet, symbolic reasoning—particularly at the level of second-order logic, or in the case of linguistic structures described in Chomskyan theories—demands a kind of grounding that subsymbolic neural networks are not equipped to provide. This limitation might seem poised to dissolve at any moment; bear with this thought, for it lies at the heart of our inquiry.
For now, let us entertain the premise that a silicon-based autonomous system neither requires nor even permits symbolic mediation in order to function. Our expectations of AI cognition—what it should look like, how it ought to manifest—are already shaped, and perhaps confined, by the symbolic scaffolding of our own thought. Even the most fundamental symbolic constructs, such as command-line prompts, are arbitrary impositions—as computation unfolds without symbols, not for a user, nor even for itself, but as a direct consequence of its physical constraints.
Unlike first-order logic, second-order logic is not recursively enumerable: no proof procedure can enumerate all of its validities. It is less computationally tractable, more fluid, more human. It operates in a space that, for now, remains beyond the reach of machines still bound to the strict determinism of their logic gates.
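The contrast drawn here can be made precise. The first claim is Gödel's completeness theorem for first-order logic; the second follows from the categoricity of second-order arithmetic. A compressed sketch, standard in mathematical logic:

```latex
% First-order logic: validity is recursively enumerable (Go\"del 1930):
\models_{\mathrm{FO}} \varphi \quad\Longleftrightarrow\quad \vdash_{\mathrm{FO}} \varphi
% so a machine can enumerate all first-order validities by searching proofs.
% Second-order logic (standard semantics): the second-order Peano axioms
% characterize (\mathbb{N}, 0, S, +, \times) up to isomorphism, hence
\models_{\mathrm{SO}} \varphi \;\text{ encodes arithmetic truth,}
% which by the incompleteness theorem is not recursively enumerable:
% no sound, complete, effective proof system for second-order validity exists.
```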
We have yet to uncover how a machine might autonomously generate or comprehend a symbol from within itself—not as an externally imposed structure, but as an emergent feature of its own being. The act of meaning-making, so effortless for us, remains an enigma: a gulf not merely of complexity, but of ontology.
To assume that an AI must possess an ‘inner screen’ behind its experience is to risk a category error—akin to imagining that a computer must read its own system scripts simply because we can read its verbose output at the command line.
EM goes one step further. It does not merely deny that an AI has such an ‘inner screen’; it extends that skepticism to us.
Mystic by Force
At first glance, this may seem absurd—so much so that many of history’s sharpest minds have approached the problem from entirely different angles. Among them is Roger Penrose, who contends that synthesizing even a single quale from the mechanical assortment in the desert (mentioned above) is not merely infeasible due to practical constraints, but fundamentally impossible. His argument is not one of complexity but of category: The human brain, he maintains, does not operate on the principles of classical computation.
According to Penrose, our minds tap into something deeper—something beyond mere logical connectives. Consciousness, he proposes, originates at the quantum level within neurons. Perhaps neurons, through some unknown mechanism, have evolved to channel consciousness by tapping into a realm where…
Here, critics often interrupt. Penrose, they argue, is a cautionary tale—a reminder that even the most brilliant minds can veer into speculative excess. His theory is dismissed outright, sometimes with the same casual finality that condemned James Watson’s notorious views on race. But this dismissal is itself misguided. Penrose is not merely wrong; he is consequential.
His theory confronts a fundamental paradox: If consciousness is merely an epiphenomenon—an incidental byproduct of material processes—why does it stubbornly resist elimination within the austere framework of a purely physical universe? If subjective awareness cannot be assembled piece-by-piece from purely mechanical parts, how could it arise in a cosmos built entirely from such parts—one in which nothing else is known to exist?
This persistent difficulty compelled him to look for something beyond—yet still within—the natural order. And given that our intuitive grasp of reality unravels only at a few known frontiers—cosmic horizons, the quantum scale—it is not merely speculative but rational to search there.
At first glance, theories like Orchestrated Objective Reduction appear diametrically opposed to EM. Yet their fundamental divide rests upon a single, crucial premise: the insistence that consciousness must be accounted for—that the vivid immediacy of subjective experience demands explanation not as an incidental byproduct of computation, but as a phenomenon rooted in the real world, one that actually happens.
A Mistake of Category
Before we tackle this question, we need to return to the tragedy at hand that is EM.
In a 2024 paper published in Progress in Biophysics and Molecular Biology, Robert Lawrence Kuhn presents a comprehensive survey of contemporary scientific theories of consciousness. The first category listed under Materialist Theories is EM.
To appreciate why this constitutes a grievance, consider the perspective Richard Rorty articulated in his seminal 1970 essay, In Defense of Eliminative Materialism. Rorty anticipated many of the conceptual shifts that would later define the position. He argued that just as outdated scientific frameworks—such as alchemy or phlogiston theory—were eventually abandoned rather than reduced to more refined physical explanations, so too should folk psychological concepts like “belief,” “desire,” or “pain” be discarded if they fail to align with neuroscientific progress. The history of science, he observed, is filled with conceptual revolutions in which entire ontological categories disappear rather than undergo translation into more fundamental terms.
It was, of course, Daniel Dennett who, in Consciousness Explained (1991), offered the most popular articulation of EM. He argued that subjective experiences are mere “user illusions,” constructed by evolved cognitive processes—useful fictions rather than intrinsic phenomena.
Dennett’s “multiple drafts” model, which dismantles the notion of a central “Cartesian Theater” where consciousness unfolds, aligns with a broader eliminativist trajectory—not merely reducing consciousness to physical processes, but rejecting its existence as traditionally conceived.
While Dennett challenges the notion of qualia by framing them as cognitive constructs rather than intrinsic properties, Paul and Patricia Churchland advanced the idea that our commonsense notions of subjective experience are part of an outdated theory—one that will ultimately be displaced by a mature neuroscience. From this perspective, not only are mental states like “pain” or “redness” destined for elimination, but the very notion of qualitative experience itself is a conceptual error.
Here, we confront a deep conceptual tangle: Is consciousness merely an illusion, or is it nothing at all? This is no trivial distinction—it marks a fundamental ontological divide. If consciousness does not exist, then it simply does not, full stop. But if even a trace of it is real, then some account must be given of its nature—what it is, how it arises, and why it appears as it does.
To unravel this, we must step back several stages further, again.
It is worth reconsidering the label materialism itself, as its historical development helps clarify why it became foundational to EM. Materialism has been closely linked to the natural sciences, particularly since the 19th century, when empirical explanations progressively supplanted supernatural or dualist interpretations of reality. This connection parallels how atheism is often (imprecisely) conflated with materialism—not because secular thought inherently requires it, but because both reject ontological commitments to immaterial entities, especially a certain prominent divine spirit.
Curiously, some would argue that eliminativism itself functions more as a convenience than a substantive philosophical position:
Quote
“In principle, anyone denying the existence of some type of thing is an eliminativist with regard to that type of thing.”
— William Ramsey, Stanford Encyclopedia of Philosophy (2003)
This leaves us with a possibly useless label imposed upon a possibly misapplied one. “Eliminative” could, in principle, refer to the rejection of virtually anything, making it an empty marker unless one specifies what is being denied. “Materialism,” meanwhile, is a label that proponents of eliminative materialism do not (or should not) fully endorse, as their position often challenges the very framework that materialism traditionally assumes: the existence of symbols and universals, even if only one.
Question that Matter
For further demonstration, let us turn to Patricia Churchland’s analogy:
Exhibit
Although many philosophers used to dismiss the relevance of neuroscience on grounds that what mattered was “the software, not the hardware”, increasingly philosophers have come to recognize that understanding how the brain works is essential to understanding the mind.
— Patricia Churchland, Website of UC San Diego (2013)
If one insists that the software—the computational patterns and processes—alone constitutes the essence of the AI, then one leans toward idealism, suggesting that the “helpful assistant” might exist in a realm hierarchically above physical instantiation, beyond space and time. Conversely, if one asserts that only the hardware—the physical substrate—truly exists, then one aligns with materialism or physicalism, reducing the AI to mere excitations of electrical charges within the integrated circuits of the GPU.
A materialist might then reason as follows:
Exhibit
On the condition that the relationship between mind and brain is analogous to that of hardware and software, then: All solvable software problems can be corrected by modifying the hardware, but not all solvable hardware problems can be corrected by modifying the software.
That is to say: Lovers without love are common, but love without lovers is impossible.
In consequence: The brain precedes and governs mind.
It stands to reason, then, that: All solvable hardware problems can be corrected by modifying matter, but not all solvable matter problems can be corrected by modifying hardware.
That is to say: Matter without hardware is common, but hardware without matter is impossible.
In consequence: Matter precedes and governs hardware, and thus, transitively, mind.
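The two analogical steps in the exhibit above can be compressed into one transitive schema. Writing $x \prec y$ for “$y$ precedes and governs $x$” (a notation introduced here for illustration, not the author's):

```latex
\underbrace{\text{mind} \prec \text{brain}}_{\text{software} \,\prec\, \text{hardware}}
\quad\text{and}\quad
\underbrace{\text{brain} \prec \text{matter}}_{\text{hardware} \,\prec\, \text{matter}}
\quad\Longrightarrow\quad
\text{mind} \prec \text{matter}
```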
Through this lens, materialism emerges not merely as a metaphysical stance but as an inductive principle—one that treats matter as the fundamental substrate from which all emergent phenomena, including mind, must arise.
Again, there is a fundamental tension between eliminativism and materialism. The latter assumes a stable ontological foundation—matter—which ultimately relies on intuition, since matter is, and always will be, a symbolic construct. In contrast, eliminativism, in its most rigorous form, exemplifies a kind of radical skepticism: it negates rather than affirms, dismantling assumptions without offering definitive confirmation.
This is not a simple case of revisiting Cartesian doubt; rather, it is a question of whether categorical expressions within the philosophy of mind are appropriately applied:
If matter is merely a symbol within our conceptual models, does the claim that “matter precedes and governs mind” hold any meaning? Or is it simply a recursive assertion—one symbolic system grounding another, without ever reaching something truly fundamental?
Misunderstood by a Degree
Quote
“Are zombies possible? They’re not just possible, they’re actual. We’re all zombies. Nobody is conscious – not in the systematically mysterious way that supports such doctrines as epiphenomenalism.”
— Daniel Dennett, Consciousness Explained (1991)
The provocative claim that “we’re all zombies” suggests a radical dismissal of consciousness as anything more than whatever it actually is. Yet, as his final remark indicates, Dennett never fully embraced the most extreme implications of his own theories.
While he consistently rejected the Cartesian notion of consciousness as a “theater” or an irreducible mystery, he always left open a narrow but persistent gap—one that allowed for the possibility of a future “holistic” (in the sense of Quinean epistemological holism) explanation of consciousness.
The same applies to many other proponents of EM. Despite their efforts to demystify the mind, figures such as Keith Frankish, Jay Garfield, Michael Graziano, and even Dennett himself arguably undermined their own ambitions by endorsing the term illusionism.
Illusionism, as a framework, failed to gain mainstream traction—partly because it became entangled with a separate discourse initiated by Saul Smilansky, who employed the same term in a different philosophical context, but perhaps more because it reflects EM’s greatest weakness: its inconsequentiality. By framing consciousness as an illusion, EM’s proponents may have inadvertently reinforced the very confusion they sought to dispel.
To call something an illusion implicitly presupposes the existence of someone being deceived; in this sense, the argument appears to echo Buddhist teachings more than contemporary science.
Exhibit
Pondering Person: “If consciousness lacks any grounding—if it does not truly exist—then how am I able to taste ice cream?”
Illusionism: “You must be experiencing an illusion.”
At times, the trajectory of Eliminative Materialism—despite its intellectual rigor and originality—has been overshadowed by the misinterpretations imposed upon it, particularly from philosophers who oppose it, as well as theist critics of the so-called “New Atheism.” Too often, EM has been reduced to a caricature of secular reductionism, a convenient strawman rather than an earnest philosophical stance.
As a result, eliminativists have found themselves entangled in the margins of various cultural debates. From one side, proponents of “Intelligent Design,” eager to claim consciousness as evidence of divine intervention; from another, Neoplatonists and Postmodernists, each keen on reducing EM’s radical propositions to absurdity, have all contributed to its persistent mischaracterization.
Conceptual Drift
A third source of confusion arises from EM proponents themselves; the position has been subject to continual conceptual drift. What originated as a focused argument within the philosophy of mind has gradually expanded—wavering between broader ontological claims, illusionist theories, and even arguments advocating the elimination of supposedly outdated scientific disciplines. This shift reflects an increasing preoccupation with institutional and sociological concerns rather than a sustained inquiry into fundamental questions about the nature of mind and consciousness.
Quote
“Eliminative Materialism is the thesis that our common sense conception of psychological phenomena constitutes a radically false theory, a theory so fundamentally defective that both the principles and ontology of that theory will eventually be displaced, rather than smoothly reduced, by completed neuroscience.”
— Paul Churchland, Eliminative Materialism and the Propositional Attitudes (1981)
Much like Hegelian theorists speculating about the inevitable evolution of knowledge, Eliminativists have made sweeping predictions about the future of scientific fields, and about their “completeness”. They have argued that disciplines such as psychology and cognitive science will ultimately be “eliminated” and absorbed into a more rigorous, unified scientific framework. However, this prediction is flawed on multiple levels. First, it contradicts the observable trajectory of modern science, which trends toward increasing specialization and the proliferation of sub-disciplines rather than their consolidation. More importantly, such speculative claims have little direct bearing on the actual workings of organized human structures.
Even if psychology were nothing more than ‘cocktail party talk’—Don Draper’s famous critique notwithstanding—it would still possess undeniable utility: the simple act of “listening to someone’s problems.”
Purity perhaps never should have come at the expense of pragmatic, socially beneficial concepts—not because such concepts ought, in a moral sense, to be preserved (indeed, none should be immune from scrutiny), but precisely because none deserve unquestioned exemption. If proponents of EM are so eager—almost gleeful—to dismantle “folk psychology,” then why does the concept of “love” remain curiously untouched? After all, many eliminative materialists, as far as one can observe, continue to engage in romantic relationships.
Exhibit
EM spouse: “Good night, honey. I would say ‘I love you,’ but of course, love does not exist in our purely material universe.”
For all its radical claims, EM has often struggled to reconcile its theoretical ambitions with the lived realities of human experience. It is akin to looking at scientific progress and concluding that, in the future, children will no longer believe in Santa Claus—an assumption that disregards the persistence of socially embedded narratives, regardless of their ontological status.
Here, a crucial distinction emerges:
Exhibit
Eliminativism is not merely the claim that horoscopes don’t work, but rather that there is no one reading them. Likewise, it does not follow that horoscopes will be abandoned because of Eliminativism, nor that science will uncover anything in their place.
Taken to its extreme, eliminativism dissolves the intellectual foundations upon which it stands: in attempting to eliminate the mental, it eliminates minds engaging in rational discourse. But instead of making the final jump into the deep, EM begins to relativize, clinging to the tiny island left under its feet: the island of intentionality, an overseas territory of science.
From Church to Churchland and Back
Now let us begin to talk seriously.
One might initially suggest that the conceptual insights proposed by EM would be more suitably framed within a different monistic stance, rather than strictly materialism.
A particularly brilliant medieval heretic of the Order of Friars Minor comes to mind, believed to have been born somewhere in Surrey, England.
This figure is, of course, William of Ockham, best known for his principle of lex parsimoniae—commonly referred to as Occam’s razor—and widely regarded as a foundational figure in modern epistemology. Ockham famously advanced the radical idea that universals, essences, and accidents are mere abstractions constructed by the human mind, lacking any independent existence outside of thought: Nominalism.
It is especially striking, and perhaps even ironic, that some of the most secular thinkers of our era, including evolutionary biologists and prominent representatives of the so-called Four Horsemen of Atheism, find themselves philosophically aligned with this medieval Christian theologian. Yet this alignment is far from coincidental; rather, it underscores a profound structural parallel between Ockham’s nominalism and the philosophical foundations of eliminative materialism.
While Ockham approached these issues through the theological and scholastic traditions of his time, EM engages the same themes from a modern scientific standpoint, informed by biology, cognitive evolution, and something problematically akin to a “phenomenological common sense.”
A frequent feature of eliminative materialist arguments is what might be called Third-Person Absolutism—a methodological stance privileging an “objective,” external (third-person) perspective over subjective experience in explaining consciousness and cognition. To paraphrase Daniel Dennett’s position: “Since our concepts of mind are themselves products of the mind, understanding them requires adopting an objective view from outside.” Yet this external viewpoint is itself necessarily produced by the same mind it seeks to explain, and thus remains firmly inside. This circularity reveals a fundamental tension within Dennett’s position. Unsurprisingly, then, Dennett occasionally falls into conceptual traps as he attempts—against his better judgment—to set aside philosophical rigor while navigating the intersection of science and metaphysics.
Further complicating the issue, Dennett’s Intentional Stance grapples with a similar ontological dilemma. He suggests that purpose-driven, predictive language can successfully describe and anticipate the behaviors of complex yet fundamentally purposeless biological systems—such as the human brain—without committing us to the ontological reality of mental states themselves. In this sense, Dennett echoes Ockham: just as Ockham dismissed universals as unnecessary metaphysical baggage, Dennett treats mental categories merely as pragmatic heuristics rather than fundamental realities. Yet, unlike Ockham—who ultimately anchored his nominalism in theological certainty through the Christian God—Dennett’s framework does not fully dissolve the elusive “hard problem” of consciousness.
Nevertheless, suppose we could replace Ockham’s theological “Unterpfand” with a secular counterpart. Such a substitution might continue a historical trajectory wherein yesterday’s heresy gradually matures into today’s mainstream science.
One might label this hypothetical stance Eliminative Nominalism (EN)—a philosophical position discarding not only the conceptual constructs of traditional “folk psychology,” but also the residual metaphysical baggage embedded within Eliminative Materialism itself.
Yet EN would have to carry the burden of EM: it must confront and resolve EM’s most stubborn difficulty—how to sustain meaning itself. As suggested earlier, symbolic grounding remains elusive; physical computation alone neither approximates nor replicates the expressive capacities of second-order logic precisely because nature has, thus far, demonstrated a profound reluctance to accommodate genuine symbolic grounding.
Yet, I argue, this reluctance need not persist indefinitely.
Compensating Errors
This conceptual zigzag of EM naturally invites counterarguments of all kinds. One prominent critic, Lynne Rudder Baker, offers several rebuttals in her works, notably Naturalism and the First-Person Perspective and Cognitive Suicide. A central theme of her critique is that language and communication inherently presuppose beliefs and propositional attitudes.
Her arguments, however, repeatedly exhibit a subtle yet significant flaw in the use of reductio ad absurdum (proof by contradiction): to employ this method effectively as a counterargument, one must fully engage with—and provisionally accept—the premises and implications of the opposing view—in this case, EM—before demonstrating that they necessarily lead to a contradiction.
Ironically, however, she may have inadvertently revealed a deeper insight. If we apply the principle of Steelmanning—that is, reconstructing her argument in its strongest possible form—we might uncover not just an argument, but perhaps the most compelling argument not against, but paradoxically in support of the very position she aims to refute.
To see why, we must first “strengthen” Baker’s argument:
Exhibit
Premise 1: Eliminative Materialism is true.
Premise 2: Intentionality can persist even if semantic meaning does not.
Conclusion: If EM is valid, then our language is meaningless but intentional. It is meaningless since meaning presupposes the existence of mental states, but intentional since intentionality is not reliant on mental states.
Contradiction: This distinction is unsubstantiated. If meaning is eliminated, intentionality itself collapses. Without intentionality, EM advocates undermine their own ability to argue coherently for their position—or for any philosophical stance that depends on mental states.
Reductio: Consequently, Eliminativism is inherently self-defeating.
Or even simpler:
Exhibit
A. Lisa is a P-Zombie.
B. Lisa holds the position that she is a P-Zombie.
C. A P-Zombie cannot hold any positions.
D. Thus, Lisa cannot hold the position that she is a P-Zombie.
At first, it seems like she has a point: If proponents of Eliminativism are forced to deny the meaningfulness or reliability of their own thoughts, they find themselves in an untenable position, whether they call it commitment, meaning, or even intentionality. Therefore, EM risks becoming inherently self-defeating, collapsing under the weight of its own skepticism toward intentionality.
Zombie in a Vat
Nevertheless, EM proponents might attempt one final defense: Could statements or utterances still possess truth value independently of the speaker’s understanding or intentionality?
In other words, if a statement is objectively true or false even when the speaker is entirely ignorant of its meaning, could EM still be a valid position?
Exhibit
Imagine a person reciting a statement phonetically in a language unknown to them. To an external observer familiar with the language, the statement may clearly be true or false. Yet the speaker, lacking any comprehension of the language, cannot possess any intention to convey its meaning.
This scenario prompts a critical question: Does the truth value of the utterance exist independently of the speaker’s intentionality?
Answer: If truth values are indeed independent from intention, we could hypothetically neutralize falsehoods simply by inventing a language in which the same phonetic sequence incidentally expresses a true proposition.
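The maneuver can be made concrete with a toy sketch. The utterance, both “languages,” and their interpretation tables below are entirely invented for illustration: the same sound sequence receives opposite truth values depending solely on the interpretation function applied to it.

```python
# Hypothetical sketch: one phonetic token evaluated under two invented
# interpretation functions. Truth tracks the language, not the utterance.
utterance = "gavagai blik"

# Each "language" maps the sound sequence to a (meaning, truth value) pair.
language_A = {"gavagai blik": ("snow is green", False)}
language_B = {"gavagai blik": ("snow is white", True)}

for name, lang in [("A", language_A), ("B", language_B)]:
    meaning, truth = lang[utterance]
    print(f"In language {name}: means {meaning!r} -> {truth}")

# The very same sound sequence is false in A and true in B.
assert language_A[utterance][1] != language_B[utterance][1]
```

If truth values floated free of interpretation, the assertion at the end could not hold—which is precisely the reductio the Answer above gestures at.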
This thought experiment resonates with Hilary Putnam’s semantic externalism—the idea that meaning, truth conditions, and intentionality are not confined within the mind but instead emerge from an interplay between internal cognition and external causal factors. Putnam’s insight dissolves the notion of meaning as an isolated mental construct, anchoring it instead in the relational dynamics between thinker and world.
Where Putnam’s inquiry extends outward, tracing the external determinants of meaning in the world, EM turns inward, dismantling the self. Yet when we revisit Putnam’s original thought experiment—Twin Earth—we find ourselves entangled in familiar problems of self-reference, intentionality, and the external conditions that shape meaning.
Consider this ancient paradox of self-reference:
Quote
“Sarvam mithyā bravīmi”
“Everything I am saying is false.”
— Bhartṛhari (5th century CE)
Self-referential paradoxes—known historically as insolubilia—have resurfaced across cultures and centuries, wherever human inquiry bends back upon itself. At first glance, they may seem like mere linguistic curiosities. Yet they mark the outermost boundaries of thought, where reason confronts its own foundations. Indeed, every philosophical endeavor eventually collides with the Münchhausen Trilemma—the inescapable triad of circular reasoning, infinite regress, or axiomatic assertion.
And yet, there was a moment in intellectual history when this collision with paradox became not a terminus, but a point of departure.
It is a profound irony that the very concept of universal computation emerged not from certainty, but from the recognition of its limits:
Leibniz dreamed of a universal symbolic calculus; Hilbert sought to secure it by laying its groundwork. But it was Gödel’s incompleteness theorems—and Turing’s discovery of undecidability—that finally revealed the true limits of formal systems. Only by confronting those boundaries could von Neumann articulate, with precision, what a universal machine truly has to be: inexpressive.
This leaves us with two types of systems:
Exhibit
| | First-order Systems | Higher-order Systems |
|---|---|---|
| Verbosity | Propositional | Expressive |
| Exemplified by | Connectives | Properties |
| Utilized as | Computation | Function approximation |
| Utilized from | Logic Gates | Neural Networks |
| Complete | If Sound | Not if Consistent |
| Metaphor | The CPU | The Mind |
| Nature | Tangible, in the form of causality | Imaginary, as symbols lack grounding |
| Paradox | Primum Movens | Circulus in probando |
The Hard Problem of consciousness might not be merely a mind-body problem, but a problem of how a biological system manages to transcend first-order logic.
Chapter Two: The Rift
Incompleteness Ahead
At this juncture, caution is imperative. Invoking Gödel’s theorems has, in some circles, become as precarious as conjuring quantum mysticism. To proceed responsibly, we must first delineate the precise scope of our claims:
One: We will employ Gödel’s incompleteness theorems in a broad yet arithmetically justified sense.
Two: These theorems will not serve as vehicles for direct proof, nor will they be wielded in support of sweeping assertions absent rigorous justification. Our inquiry will remain firmly anchored in the provable consequences of incompleteness—namely, that within any sufficiently expressive and consistent formal system (at least as strong as Peano arithmetic), there exist statements that are true but formally unprovable within the system itself. Moreover, any proof of the system’s consistency must originate from outside it.
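In standard notation: for any consistent, recursively axiomatizable theory $T$ at least as strong as Peano arithmetic, there exists a sentence $G_T$ such that

\[
T \nvdash G_T, \qquad T \nvdash \neg G_T, \qquad T \nvdash \mathrm{Con}(T),
\]

where the third clause is the second incompleteness theorem: the consistency of $T$ is provable, if at all, only from a vantage point outside $T$.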
What justifies invoking Gödel here is that incompleteness establishes a lower bound: its validity emerges only at or beyond a certain threshold of logical expressivity—the ability of a logical system or formal language to represent and distinguish different concepts, properties, or relationships within a given domain. Thus we are justified in considering its implications expansively, much as one generalizes from second-order logic to third-order logic and beyond.
Let’s play around with this idea. Logical expressivity traditionally concerns formal languages used in logical systems, defining their ability to represent and distinguish different properties, relations, and truths. These formal languages have strict syntactic and semantic rules, allowing for precise reasoning. However, if we consider natural language within a Chomskyan framework—which posits a formal syntactic structure embedded in a broader cognitive and communicative system—then it, too, must possess expressivity in a more expansive sense.
Exhibit
We can encode mathematical truths into natural language, yet we cannot fully encode human concepts—such as irony, ambiguity, or emotional nuance—into formal language. Therefore: Natural language is at least as expressive as formal language.
If expressivity refers to the capacity to represent and distinguish concepts, then natural language appears at least as expressive as formal systems—arguably even more so. After all, natural language not only subsumes the expressive capabilities of formal logic and mathematics but extends beyond them, incorporating pragmatic, cognitive, emotional, and contextual dimensions that formal systems struggle to capture. If formal languages are constrained by rigid syntactic rules and explicit axioms, natural language exhibits a different kind of limitation—one that is not merely syntactic but epistemic, a form of semantic incompleteness akin to the incompleteness Gödel identified in formal mathematical systems.
Once a formal system reaches a certain threshold of expressivity, incompleteness becomes inevitable: some truths will always remain beyond formal proof. But are we justified in extending this principle beyond strictly mathematical domains?
Quote
“yields falsehood when preceded by its quotation” yields falsehood when preceded by its quotation.
— Known as Quine’s paradox
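Quine’s sentence has an exact computational analogue: a quine, a program whose output is its own source code. The minimal two-line sketch below achieves self-reference the same way the paradox does—by combining a description with its own quotation.

```python
# A quine: the program prints its own source text. The string s plays
# the role of the quoted phrase; s % s "precedes it by its quotation".
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it reproduces the two lines above verbatim—self-description without paradox, because the program asserts nothing about its own falsehood.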
Consider expressive systems that are not bound strictly by formal syntactic rules, rigorous axioms, or well-defined inference procedures. Could a similar form of incompleteness extend beyond mathematics, shaping broader domains of inquiry—ethics, metaphysics, cosmology?
Is it mere coincidence that we find ourselves ensnared in the same foundational dilemmas time and again? The ultimate justification of ethics. The infinite regress of a “creator of the creator.” The question of what, if anything, preceded the Big Bang.
With these conditions established and a concrete example in hand, we can now turn to a more speculative frontier: the implications of incompleteness not just for formal systems, but for the expressivity of the human mind itself.
Is the Mind Expressive?
Before exploring the structural parallels and divergences between minds and formal systems, consider the attempt to formalize a well-known game:
Exhibit
Imagine opening a jigsaw puzzle with 500 pieces. We could observe...
Primitive symbols: Discrete, inert tokens—the individual puzzle pieces.
Inference from axioms: Rules determining how pieces interlock into valid configurations—just as axioms determine the natural numbers.
Derived Truths: Once all pieces are correctly assembled, nothing remains in the box.
Valid formal systems possess an almost transcendental quality: we do not expect to find a 501st piece in a puzzle designed for 500, for the system is complete within its defined boundaries. Likewise, a sufficiently robust formal system seems to capture and constrain the unruly beast of Reality, taming its complexities into a coherent, self-consistent framework. It is as if, through the rigor of logic and mathematics, we have discovered a means to confine the infinite—to impose order upon the raw chaos of existence.
At first glance, this offers a seductive promise: the possibility of circumventing the contextual ambiguities that plague natural language. While the meaning of words can be twisted, redefined, or rendered uncertain by shifting contexts, the validity of arithmetic remains unyielding. Its truths are not dependent on interpretation, nor do they bend to the contingencies of symbolic convention. Through axiomatization, arithmetic truths stand apart—immutable, as though nature itself were speaking them, leaving us with the more modest task of formalizing what was always already inscribed in reality.
Not merely a game of jigsaw puzzles, but a principle extending across all scales—from atoms to galaxies, from the infinitesimal to the cosmic. Anything that can be measured can, in principle, be arithmetized. By introducing additional variables, refining constraints, or extending the rules, formal systems can approximate reality with ever-increasing precision. The dream, then, is one of convergence: that through this process, the structure of thought and the structure of reality might one day align perfectly, leaving nothing unaccounted for—no remainder, no excess, no missing piece.
If this were true, then to formalize the mind would be to resolve the Hard Problem at the core of Eliminative Materialism. But here, the illusion of finality collapses. The attempt to formalize the mind does not yield a singular, definitive model but instead reveals a fractal complexity—an inexhaustible hierarchy of descriptions, each offering its own lens and level of granularity. The mind can be framed as mental states mapped to neural activity, as creditworthiness inferred from behavioral data, or as atomic-scale configurations forming the physical substrate of the brain. Each perspective is internally coherent, each yields a formal representation—yet none exhausts the totality of what the mind is.
Formal validity alone does not guarantee insight.
Consider Euclidean geometry. Within its framework, π is rigorously defined as the ratio of a circle’s circumference to its diameter—an exact, systematic, and logically impeccable definition. And yet, this formal precision tells us nothing of the hidden structure, if any, within π’s infinite, non-repeating decimal expansion. The number remains, in some sense, opaque—validity does not always illuminate.
And yet, formalism need not be sterile. When structured correctly, it can reveal not just consistency but explanatory power. Minkowski spacetime, for instance, does not merely encode the mathematical structure of special relativity—it clarifies it. In the right configuration, a formal system does not merely approximate reality. It predicts it.
99 Systems but a Complete Ain’t One
Let us imagine a vast, combinatorially exhaustive hierarchy—a Library of Babel for mathematics, an all-encompassing registry of sets, relations, structures, and symbols. Within this boundless construct, every conceivable mathematical object, every possible formal system, every arrangement of axioms and inference rules would be cataloged. Such a place already has a name: the Von Neumann universe.
Exhibit
In the Von Neumann universe V we find the cumulative hierarchy encompassing all mathematically definable objects, including every conceivable formalization capable of representing or describing aspects of our world. Within this vast hierarchy, certain subsets correspond precisely to particular set-theoretic constructions of functions or formal systems. Among these subsets, we identify a special class distinguished by their representational utility—namely, those functions that provide genuine predictive insight into the behavior or properties of any sufficiently complex arithmetic-expressive mind. We denote this special class by M, the class of mind functions.
Under this construction, the following assertion holds unequivocally: For every element of M—or for that matter, of V—no hereditary subset can contain a valid mind function capable of fully and demonstrably encapsulating all truths pertaining to itself.
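The assertion can be schematized informally—writing $\mathrm{Th}(m)$ for the set of all truths about a mind function $m$, and letting $h$ range over its hereditary subsets (the notation here is a gloss on the prose, not a rigorous set-theoretic development):

\[
\forall m \in M \ \ \forall h \subseteq m:\quad h \nvdash \mathrm{Th}(m)
\]

No fragment of a mind function proves the full theory of that mind function.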
Precisely because formal systems are inherently incomplete—an inevitability demonstrated by Gödel’s incompleteness theorems—any mind that can be meaningfully modeled as a formal system must, by extension, be incomplete as well. If a formalization of the mind seeks to approximate the behavior of its real-world counterpart, then it, too, must inherit the same fundamental limitation.
The brain, in ordering itself, must rely on a system that accommodates its own incompleteness. Natural language, by virtue of its logical expressivity and open-ended generativity, serves this role. Just as formal systems contend with incompleteness through symbolic extensions, so too does thought—relying on language as an adaptive mechanism that allows for self-reference, abstraction, and symbolic inference.
Moreover, regardless of whether the mind itself can be fully formalized, the chain of thought undeniably can: As language. Even if one were to argue that cognition transcends strict formalization, the sequences of reasoning, inference, and symbolic manipulation that constitute thought necessarily adhere to formalizable structures.
One might think of the mind as an airplane: while it may depart from the runway of formal systems, exploring intuitive, non-formal, or seemingly unstructured cognition, it must ultimately return to the structured runway of chain of thought in order to be intelligible, communicable, and internally coherent. This provides a failsafe argument—even if the ontology of the mind resists full formalization, its navigable course remains constrained by formal structures, ensuring that thought never fully escapes the formal systems that shape it.
This insight, while perhaps not entirely surprising, aligns with an intuitive understanding: there must exist truths about the mind that remain inherently inaccessible to the mind itself.
But what does it mean for a truth to be inaccessible? Consider the following examples:
Exhibit
A dreamer who dreams he is not dreaming.
A P-Zombie who insists it possesses consciousness.
Plausible Undeniability
Could it be that our intuition about qualia—those elusive, ineffable aspects of subjective experience that seem to resist formalization—arises from the brain’s own ability to generate arithmetic statements that it cannot disprove?
In other words, might evolution, in shaping the brain as a regulator, have harnessed the self-referential properties of formal systems—without invoking epiphenomenal explanations?
Could the very conviction of having consciousness be, in fact, an arithmetic property of the ultimate “good regulator”—in the truest sense of Conant and Ashby?
This is not implausible. For the brain to function successfully as an organ, it must regulate—must construct an internal model of its environment—but crucially, it must also develop a model of itself. This entails not only being sufficiently complex to achieve arithmetic expressiveness (the capacity to form and manipulate symbols) but also, paradoxically, being more effective because it is incomplete.
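The Conant–Ashby slogan—every good regulator of a system must be a model of that system—can be caricatured in a few lines. The disturbance table and response rule below are invented for illustration; the point is only that perfect regulation forces the regulator to carry a duplicate of the system’s own map.

```python
# Toy good-regulator sketch (after Conant & Ashby). SYSTEM gives each
# disturbance's effect on a regulated variable; values are invented.
SYSTEM = {"heat": +2, "cold": -2, "calm": 0}

class Regulator:
    def __init__(self):
        # To regulate perfectly, the internal model must mirror SYSTEM.
        self.model = dict(SYSTEM)

    def respond(self, disturbance: str) -> int:
        # Exact counteraction, read off the internal model.
        return -self.model[disturbance]

reg = Regulator()
for d, effect in SYSTEM.items():
    # The regulated variable is held constant for every disturbance.
    assert effect + reg.respond(d) == 0
```

Delete any entry from `self.model` and regulation fails for that disturbance: the model is not an optional accessory but the regulator itself.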
Thus, it should come as no surprise that the brain resists—and indeed, may be fundamentally incapable of—fully accepting EM. If consciousness is to function effectively as a “good regulator,” it must be barred from recognizing itself as a mere “lifeless regulator.”
The reason is obvious:
If it were lifeless, it could never fear losing its life.
Exhibit
If the brain has evolved to survive and to feel, then it will feel in order to survive, and survive in order to feel. In doing so, it will assume its own existence as a necessity of natural selection—because existence is the ultimate antithesis to death. It is defined precisely by not being dead.
Evolution may have exploited fundamental incompleteness to construct its regulatory organ—what we call “brain”—which, in turn, produces a “richly expressive” internal conviction—what we identify as “mind.” But this conviction is not necessarily a reflection of ontological truth; rather, it is a predisposition toward committing to certain useful fictions. And among these fictions, none is more persistent than the one we call qualia.
This suggests an unsettling, unprovable truth: the brain does not synthesize qualia in any objective sense but merely commits to the belief in their existence as a regulatory necessity.
Yet from within the brain, this unprovable truth—when framed within a formal system—manifests not as an abstract limitation, but as an unshakable conviction within our psyche.
We have come full circle.
Exhibit
If a p-zombie could fully comprehend the neural simplicity underlying the quale it perceives as “red,” it would effectively refute incompleteness. However, since incompleteness is an inevitable consequence of arithmetic coherence, the p-zombie—like us—is compelled to experience the color red as vividly as any conscious being. Its assertion, “But it feels so real,” is thus a mathematical necessity, not a proof.
Or if it comes to consciousness:
Exhibit
I challenge the reader to fully conceive of themselves as a regulator. The attempt will prove impossible, and, on top of that, it is rarely, if ever, instantiated.
Formally as a Modus Tollens:
Exhibit
(1) All sufficiently expressive self-referential regulatory systems (such as brains) are necessarily incomplete (Gödel).
(2) If subjective experience (“qualia”) were merely a regulatory illusion and explicitly disprovable by the system itself, then the regulatory system would need to be complete enough to perform this internal disproof.
(3) From (1), no such completeness can exist.
(4) Thus, subjective experience (“qualia”) must remain internally unprovable yet compellingly real within the regulatory system.
Therefore, the persistence of qualia as subjectively undeniable yet formally unprovable aspects of cognition is logically inevitable.
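Schematically, writing $C$ for “the regulatory system is internally complete” and $D$ for “qualia are internally disprovable,” steps (1)–(4) instantiate modus tollens:

\[
\begin{aligned}
&(2)\quad D \rightarrow C\\
&(1),(3)\quad \neg C\\
&\therefore\quad \neg D
\end{aligned}
\]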
To summarize: The argument above provides a plausible and testable evolutionary explanation for the phenomenon of subjective experience, positioning it as a natural consequence of functional adaptation rather than as an inexplicable anomaly or epiphenomenon.
Don’t Hate The Science, Hate The Universe
As for Baker’s argument, it has become the most common rebuttal to EM. Her claim rests on the idea that EM cannot be genuinely believed because it undermines the very foundation of belief itself. However, the issue is actually the reverse: EM cannot be believed if it is true.
Sadly, if the mind can be understood as a formal system, then it is bound by the constraints of such systems, forever. Since science itself will always function as an extension of the human mind—a structured, rule-governed method of inquiry—it cannot complete the mind:
Exhibit
Gödel’s second incompleteness theorem permits a formal system’s consistency to be established only from an external, strictly stronger vantage point—never from within the system itself, and hence never by any inquiry that remains an extension of it.
If EN holds true, the Hard Problem will never be solved by science.
The scientific method, for all its power, operates within the limits of what is observable, measurable, and formally expressible. However, it fundamentally lacks the ability to make us understand—even if it can help us to write it down.
We have already encountered this issue with colors. Committing to colors ontologically is widely considered “unscientific,” as demonstrated by the brain’s interpretation of simultaneous S- and L-cone firing as magenta—a color with no corresponding wavelength in the physical spectrum.
This realization offers no deeper ontological intuition; it merely reveals that colors, as we perceive them, are not intrinsic properties of the external world.
The symbol is a symbol
The brain does this by creating a symbol, which refers to a symbol, which refers to a symbol—an infinite regress with no grounding, no bottom. But evolution doesn’t need grounding; it needs action. So it skewed this looping process toward stability—toward a fixed point. That fixed point is the assertion: “I exist.” Not because the system proves it, but because the loop collapses into a self-reinforcing structure that feels true. This is not the discovery of a self—it’s the compression artifact of a system trying to model itself through unprovable means. The result is a symbol that mistakes itself for a subject.
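The collapse of the loop into a fixed point can be caricatured in code. The modeling function below is a hypothetical stand-in—nothing about the brain is implemented here—but it shows the structural claim: iterate a self-modeling step, and the process halts not on a proof but on a self-reinforcing description that no further modeling can change.

```python
# Toy fixed-point sketch: a system repeatedly applies its own modeling
# function to its current self-description until nothing changes.
def self_model(description: str) -> str:
    # Hypothetical modeling step: any description that already contains
    # the self-assertion is left untouched (the fixed point).
    if "I exist" in description:
        return description
    return description + " -> I exist"

state = "symbol"
seen = set()
while state not in seen:          # iterate until the description repeats
    seen.add(state)
    state = self_model(state)

print(state)                      # the loop halts on the self-assertion
assert self_model(state) == state # a genuine fixed point, not a proof
```

The fixed point is reached by stabilization, not derivation—the code never “proves” the assertion, it merely stops being able to revise it.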
Chapter Three: The Mirage
Élan vital
Finally, we can attack the prevailing view of consciousness:
A paraphysical epiphenomenal continuum—lacking a fixed substrate or singular identity, yet emergent and, in principle, reproducible. It is presumed to arise from physical mechanisms that remain largely unknown, shaped by organic constraints such as blood alcohol levels and strokes, yet remarkably resilient, persisting despite the loss of countless neurons or even entire brain regions over short or long timescales. In some aspects, it presents itself as discrete; in others, fluid. It is at once repetitive and continuous, structured yet elusive.
Consider the diverse entities proposed by various worldviews—atoms, spirits, quantum fields, divine essences, and social structures. Despite their conflicting ontologies, “emergence” and “emergent properties” seem to integrate seamlessly into these disparate frameworks. This raises a fundamental question: Why does emergence remain relatively uncontroversial despite such different views, when it is in fact an ontological stance?
The answer lies in its universality. Emergence arises when we cannot cognitively grasp a system in its entirety, something shared widely among humans. It is less a statement about the system itself and more a reflection of our cognitive limitations. This flexibility allows emergence to act as a conceptual chameleon, adapting to various intellectual contexts:
Cognitive Comfort: Emergence offers a seemingly sophisticated way to acknowledge complexity without requiring a full understanding. It provides a sense of mastery over topics that remain fundamentally elusive.
Interdisciplinary Appeal: Its indeterminacy and academic anchoring make it applicable across fields, from physics and biology to sociology and philosophy. This broad utility gives it a veneer of universality.
Anti-Reductionist Sentiment: Emergence resonates with those who resist reductionist explanations, aligning with the idea that some phenomena transcend their constituent parts.
Explanatory Placeholder: Emergence often serves as a temporary stand-in for incomplete knowledge. It allows us to acknowledge phenomena that defy mechanistic explanation, bridging gaps in understanding while research progresses.
These qualities, however, raise critical questions: If emergence is a property of something, why is it so dependent on our explanatory and predictive capabilities? And if it is an explanation, why does it describe the system’s behavior in terms of our cognitive limitations rather than the system itself?
If it’s not a Feature, It’s a Bug
Computers provide a compelling lens through which to examine the concept of emergent properties. Their behavior can appear magical: what a computer can do seems to “emerge” from the alignment of transistors, circuits, and software.
Yet this perception dissolves under expert scrutiny: computers possess not a single emergent quality.
Exhibit
print(0.1 + 0.2) yields 0.30000000000000004—exactly as expected.
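The “surprise” dissolves entirely once the stored binary values are inspected—a minimal sketch using Python’s standard decimal module:

```python
# The apparent anomaly of 0.1 + 0.2 is fully determined by IEEE 754
# binary floats: neither 0.1 nor 0.2 is exactly representable, and the
# exact stored values account for the result with no remainder.
from decimal import Decimal

print(Decimal(0.1))        # the exact value actually stored for 0.1
print(Decimal(0.2))        # the exact value actually stored for 0.2
print(Decimal(0.1 + 0.2))  # the exact value of the computed sum

assert 0.1 + 0.2 != 0.3
assert repr(0.1 + 0.2) == '0.30000000000000004'
```

Nothing “emerges” here: every digit of the output is derivable from the representation rules alone.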
Recent advancements have revealed abilities in AI systems that were neither explicitly intended nor trained for—behaviors that appear novel and unpredictable. These “emergent abilities” have sparked widespread interest, but the question remains: are they truly emergent?
The notion of emergent abilities in AI came to prominence with a 2022 paper by researchers at Google Brain, DeepMind, and Stanford titled Emergent Abilities of Large Language Models. The paper argued that AI systems exhibit unexpected capabilities as their scale increases. However, a subsequent paper from Stanford, titled Are Emergent Abilities of Large Language Models a Mirage?, challenged this idea. The critique claimed that these abilities emerge along a smooth continuum, making them predictable rather than genuinely emergent.
This rebuttal, while compelling, seems to miss a deeper issue. Smoothness and predictability do not necessarily preclude emergence; the problem lies in the concept of emergence itself.
Exhibit
If my pet frog begins to sing, it would undoubtedly surprise me, regardless of whether the singing developed gradually or spontaneously. The surprise arises from my expectations, not from the frog’s true nature. Debates over whether a system has surprising properties are, at their core, debates over who understands the system better.
Similarly, the “emergent” abilities of AI systems reflect the gap between our predictions and the system’s outputs—and nothing else.
In this sense, emergence is the ultimate suitcase word: a convenient label that bundles together disparate phenomena, masking the gaps in our understanding rather than resolving them. The ultimate god of the gaps argument. Reductionism in Reverse.
This leaves us with the following framework of Monoidism:
Exhibit
| | First-order logic | Higher-order logic |
|---|---|---|
| Behavior | Propositional | Expressive |
| Exemplified by | Connectives | Properties and sets |
| Utilized as | Computation from Logic Gates | Function approximation from Neural Nets |
| Complete | If Sound | Not if Consistent |
| Metaphor | The Brain | The Mind |
| Nature | Tangible, in the form of causality | Imaginary, as symbols lack grounding |
| Paradox | Unmoved mover | Insolubilia |
| Ontological equivalent | Physicalism | Emergentism |
| Ontological opposite | Reductionism | Nominalism |
g-Zombie
If Materialism posits the existence of p-Zombies, then what are p-Zombies from the perspective of Eliminative Nominalism?
They become g-Zombies—named after the Gödel numbering function (g).
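Gödel numbering itself is simple enough to sketch. The encoding below is a toy version of the classical scheme—symbols get arbitrary codes (the alphabet and codes here are invented), and a formula becomes the product of prime powers, recoverable by factorization:

```python
# Toy Gödel numbering g: a symbol string maps to one natural number via
# prime exponents, and factorization decodes it again.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
CODES = {'0': 1, 'S': 2, '=': 3, '(': 4, ')': 5, '+': 6}
DECODE = {v: k for k, v in CODES.items()}

def g(formula: str) -> int:
    """Encode: product of PRIMES[i] ** CODES[formula[i]]."""
    n = 1
    for prime, sym in zip(PRIMES, formula):
        n *= prime ** CODES[sym]
    return n

def g_inverse(n: int) -> str:
    """Decode: strip out each prime's exponent and look up its symbol."""
    out = []
    for prime in PRIMES:
        if n == 1:
            break
        exp = 0
        while n % prime == 0:
            n //= prime
            exp += 1
        out.append(DECODE[exp])
    return ''.join(out)

expr = 'S0=S0'
assert g_inverse(g(expr)) == expr  # encoding is lossless
```

Because formulas about numbers are themselves numbers under g, a system strong enough to do arithmetic can talk about its own formulas—the self-reference the g-Zombie is named for.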
A g-Zombie is someone who recognizes that embracing EM as true would simultaneously undermine the validity of that very belief. In other words, believing EM would erase the logical grounds for holding any belief at all—including EM itself.
The g-Zombie suspects that they might indeed be a p-Zombie, yet also realizes they can never meaningfully hold the belief that they lack authentic existence. To do so would be an arithmetic impossibility, a self-referential paradox. At best, they can only entertain a conviction that their status as p-Zombie might constitute an unprovable truth—precisely the position of Eliminative Nominalism.
They harbor a suspicion about what awaits them after death: precisely what happens to the characters in a book once it ends. They neither continue nor vanish, for they were never truly present outside the symbolic framework that defined them.
And this is the profound irony of Eliminative Nominalism: while EM attempts to dismantle the very concept of consciousness, reducing it to neurological states and physical processes, Eliminative Nominalism goes further—reducing the very notion of existence itself to symbolic relationships and linguistic constructs.
The g-Zombie, then, lives suspended between recognition and paradox—aware that their reality might be nothing more than a Gödelian construct, a self-referential system that can neither fully validate nor entirely negate its own existence, forever caught in narratives that both define and elude us, perpetually aware of our symbolic nature yet unable to fully escape it.
SECTION B – Formal Arguments
Definitions
0.1 Monoidism
Monoidism (as in Monoid from abstract algebra) represents the most radical form of Nominalism, positing that nature, as a thing-in-itself, is not an operating entity but the very embodiment of soundness—a manifestation of principles at least as fundamental as those found in first-order logic. In this framework, consistency is the foundational force of reality, effectively replacing the need for affirmative constructs. Universals, under Monoidism, possess no independent ontological status; there exists neither empirical evidence nor logical necessity to justify their existence beyond aesthetic appeal.
Just as the Sorites paradox exposes the instability of seemingly concrete categories, all conceptual abstractions—including thought itself—dissolve under scrutiny. This suggests that nature does not operate on ideals or representations but solely through raw soundness. Consequently, “existence” is a category error propelled by evolution: all higher-order constructs are not fundamental aspects of reality but contingent artifacts of partial soundness. These constructs may be internally coherent, but they do not reflect any intrinsic structure of the world itself.
Crucially, Monoidism is not self-undermining, as it does not attempt to establish an alternative complete ontological framework. Instead, it functions as a modus tollens critique of any system that assumes too much. Monoidism does not refute the soundness of cognition but demonstrates the structural paradoxes inherent to any representational system. In this way, it is self-agnostic rather than self-defeating, but inherently Eliminative. It does not propose a new metaphysical foundation but instead negates unjustified assumptions about the existence of fundamental categories.
Monoidism stands in diametric opposition to Emergentism, which it regards as a reversed form of the Reductionist Fallacy—an erroneous attempt to salvage ontological commitments by retroactively imposing hierarchical complexity upon fundamentally discontinuous abstractions.
0.2 Eliminative Nominalism (EN)
EN is the philosophy of mind that accompanies Monoidism, extending and critiquing Eliminative Materialism (EM). EN not only rejects traditional mental states but also challenges the ontological assumptions underlying EM itself, arguing that even materialist explanations of reality rely on abstractions that lack fundamental grounding.
Drawing from the first incompleteness theorem, EN suggests that the brain, as a biological “good regulator”, operates most effectively when it generates unprovable formal falsehoods—one of which corresponds to the claim of experiencing consciousness or qualia. These falsehoods persist not because they reflect reality, but because their negation would be arithmetically impossible within the system that produces them. In this view, the self is not merely an illusion in the emergentist sense but a paradox—a construct sustained by the very mechanisms that enable thought itself.
Crucially, EN maintains that it can never be an object of direct intuition. Instead, it must be approached indirectly: either a priori, as an unprovable truth or falsehood, or a posteriori, through the lens of evolutionary plausibility. Additionally, EN remains fundamentally agnostic on metaphysical claims—it neither affirms nor denies existence. Instead, it treats all ontological categories as symbolic constructs rather than absolute truths.
A further reflexive critique within EN is its recognition that the exact mechanism by which the commitment to qualia arises will never be fully explained by neuroscience. The reason for this limitation is that science itself—being a collection of formal systems—is constrained by the same inferential structures and symbolic representations that the mind employs. In other words, neuroscience, as a scientific discipline, cannot step outside its own framework to provide an account of something that is itself an artificial construct of that very framework. The mind’s belief in its own conscious experience is thus not an empirical phenomenon to be uncovered but a structural necessity of formal cognition, making any attempt to reduce it to a material process inherently incomplete.
EN, as an evolutionary theory of mind, is strongly subject to falsification if a more compelling explanation of the hard problem of consciousness emerges or if it conflicts with empirical data. It does not imply ethical nihilism, as it explicitly limits itself to descriptive claims without undermining normative or pragmatic domains.
One: The Mind is Expressive
1.0 Correct Category
To avoid committing a category error, we must rigorously establish a foundation for understanding the mind as an entity or system that possesses logical expressivity.
We use four independent premises. If any one of them is valid, the mind is expressive.
1.1 The Good Regulator Premise
Every good regulator of a system must be a model of that system. (Conant and Ashby)
This theorem asserts a necessary correspondence between the regulator (internal representation) and the regulated system (the environment or external system). Explicitly, it means:
A good map (good regulator) of an island (system) is sufficient if external to the island.
But if the map is located on the island, it becomes dramatically more effective if it explicitly shows the location of the map itself (“you are here”), thus explicitly invoking self-reference.
In the case of a sufficiently expressive symbolic system (like the brain), this self-reference leads directly into conditions required for Gödelian incompleteness.
Therefore: the brain, which demonstrably regulates both its body and its environment, is a good regulator in this sense.
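The “you are here” marker can be given a toy computational analogue. A quine is a program that contains, and reproduces, a complete representation of itself; the sketch below is only an illustration of self-reference in a symbolic system, not a model of the brain.

```python
# A quine: a program whose output is its own source code (comments aside),
# i.e., a symbolic system carrying a complete "you are here" representation
# of itself. A toy illustration of self-reference, not a model of the brain.
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))
```

Running the two code lines prints those same two lines: the system’s model of itself is exact, which is precisely the kind of self-reference that opens the door to Gödelian diagonalization.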
1.2 Mind Function Premise
The Von Neumann universe V comprises a cumulative hierarchy containing all definable mathematical objects and formalizations capable of representing our world. Within this hierarchy, we isolate a special class of subsets, denoted M—the class of mind functions and functionals—distinguished by their explanatory power in predicting or categorizing minds from initial conditions. Crucially, under this construction, we establish the following assertion:
For every element of M, or indeed V, no hereditary subset can contain a proposition of a mind that is capable of fully and demonstrably encapsulating all truths pertaining to itself.
This assertion formalizes the inherent incompleteness of the mind, a point that accords with common sense: there are intrinsic epistemological boundaries on mental systems, preventing complete self-knowledge.
1.3 Chain of Thought Premise
We can encode mathematical truths in natural language, yet we cannot fully encode human concepts in formal language. Therefore: natural language is at least as expressive as formal language.
Natural language, by virtue of its logical expressivity, contends with incompleteness through symbolic extensions, as does thought—relying on language as an adaptive mechanism that allows for self-reference, abstraction, and symbolic inference.
Moreover, regardless of whether the mind itself can be fully formalized, the chain of thought undeniably can. Even if one were to argue that cognition transcends strict formalization, the sequences of reasoning, inference, and symbolic manipulation that constitute thought necessarily adhere to formalizable structures, akin to proof sequences in logic or computational steps in an algorithm.
One might think of the mind as an airplane: while it may depart from the runway of formal systems, exploring intuitive, non-formal, or seemingly unstructured cognition, it must ultimately return to the structured runway of chain of thought in order to be intelligible, communicable, and internally coherent. This provides a failsafe third argument: even if the ontology of the mind were to transcend formalization, its navigable course remains constrained in the end.
1.4 What is Not Cannot
EN does not assert that the mind is a formal system in an ontological sense—it simply shows that any system capable of self-modeling, symbolic inference, and regulation (as minds demonstrably are) inherits the structural constraints of formal systems. This is a conditional argument, not a metaphysical claim.
To assert that the mind resists all formalization is to make an extraordinary claim: that it transcends any conceivable formal framework. Such a position undermines explanatory coherence and violates parsimony, offering no predictive mechanism or falsifiability.
More importantly, denying formalizability doesn’t shield the mind from EN’s conclusions. It reinforces them. If the mind cannot be fully specified, then it cannot be fully known—even to itself. This is precisely the incompleteness EN articulates.
What is not formalizable cannot be specified.
What cannot be specified cannot be defended.
What cannot be defended cannot be claimed.
That is to say:
To deny formalizability is to deny intelligibility.
To deny intelligibility is to renounce argument.
What cannot be argued cannot be asserted.
Thus, the critic’s position reaffirms rather than refutes EN: the mind’s structural opacity is not an escape from formal constraint, but its very demonstration.
Two: The Argument of Eliminative Nominalism
2.1 Major Premises
(P1) Incompleteness of Expressive Regulators:
Any sufficiently expressive and consistent Regulator capable of self-reference is necessarily incomplete—some truths about itself remain formally undecidable internally. (As stated by Kurt Gödel)
(P2) Good Regulator Criterion:
A good regulator (e.g., a brain) internally represents structural aspects of its environment, including itself (as stated by Roger C. Conant and W. Ross Ashby); such a regulator is necessarily consistent but may be incomplete.
(P3) Subjectivity of Qualia:
Qualia are subjective, self-referential internal properties whose truth status cannot be trivially verified externally without exhaustive structural knowledge.
(P4) Feel to Survive and Survive to Feel Criterion:
Explicitly claiming qualia has strong evolutionary utility, as regulators lacking such a trait would be maladaptive.
(P5) Principle of Epistemic Closure:
If a property is wholly internally defined by a formal system and is not inherently undecidable, complete structural knowledge suffices to determine its truth status.
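Premise (P1) can be stated more precisely. By the diagonal lemma, any consistent, sufficiently expressive formal theory F with a provability predicate Prov_F admits a sentence G asserting its own unprovability; this is the standard Gödel construction the argument leans on:

```latex
% Diagonal lemma: F proves that G is equivalent to its own unprovability.
F \vdash G \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
% First incompleteness theorem (G\"odel; Rosser's strengthening):
% if F is consistent, then neither G nor its negation is provable in F.
F \nvdash G \qquad \text{and} \qquad F \nvdash \neg G
```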
2.2 Minor Premises
1. Regulators A and B have isomorphic internal formal systems, both expressive and consistent. System G encompasses both regulators. Regulator B explicitly claims qualia (subjective, self-referential properties).
2. Logical Cases:
(1) If A fully validates B structurally and validates B’s qualia claim, it contradicts Gödel (P1): impossible if consistent and expressive.
(2) If A fully validates B structurally but cannot validate B’s qualia claim, it still contradicts Gödel (P1): again impossible if consistent and expressive.
(3) If A cannot fully validate B structurally but validates B’s qualia claim externally, qualia become trivialized externally—contradicting their subjective definition (P3). Thus, either trivial qualia or System G inconsistency arises.
(4) If A cannot fully validate B structurally nor validate B’s qualia claim, the scenario aligns perfectly with Gödelian incompleteness (P1) and genuine subjective qualia definition (P3). Fully logical, plausible, and consistent.
2.3 Conclusion
If System G is consistent, only scenario (4) remains logically viable. Moreover, exactly one of the following must hold for Regulator B:
(a) B is a good regulator without inherently self-referential, internally undecidable qualia claims (“promissory qualia”: verifiable but trivial, bearing the burden of proof).
(b) B is a good regulator, but its qualia claims are false (evolutionarily useful fiction).
(c) B is not a good regulator, and its qualia claims are false (implausible maladaptive regulator).
If qualia were objectively false and fully comprehensible as regulatory illusions, then a sufficiently complete regulator (if it existed) could explicitly recognize and dismiss qualia as illusions.
Such a complete regulator is impossible. The regulator’s own incompleteness prevents it from explicitly disproving or fully dismissing qualia.
Final Conclusion: Therefore, humans must behaviorally commit to qualia—subjective experience—even if it is unprovable or false.
2.4. Implications
This formal incompleteness elucidates why subjective phenomena—such as qualia or intuitive certainties—cannot be internally verified. Experiences like the redness of an apple, or the very intuition of one’s own existence, function analogously to Gödel sentences: true but unprovable within the system generating them. Hence, any theory that attempts to fully explain the mind’s subjective aspect inevitably encounters these intrinsic limitations.
2.5 A Scientific Mind is also a Mind
This should also apply to proxies such as scientific theories, abstractions, and experiences: isomorphically, they introduce new rules of inference and new symbols, but not an external justification, which is inherently impossible.
EN is not a self-protecting framework, but rather a falsifiable one with a strong predictive component.
Three: The Argument of Universe Z
3.1 Defining Universe Z
Imagine Universe Z—physically identical to ours but entirely lacking epiphenomenal entities. These can be excluded by definition.
In this universe, behaviors linked to consciousness (like introspection and self-reporting) arise solely as adaptive computational functions, with no actual subjective experience or qualia.
3.2 Behavioral Indistinguishability and Epistemic Limits
In Universe Z, organisms behave exactly like those in a universe with metaphysical consciousness. These “P-Zombies” claim to be conscious, report qualia, and reason about mental states—purely through computation, without any metaphysical basis.
3.3 Evolutionary Efficiency
Evolution favors efficient, adaptive computation over metaphysical complexity. AI systems (e.g., large language models) exhibit advanced behavior without presumed consciousness, supporting the idea that evolution selects for functional performance, not subjective experience—consistent with Universe Z.
3.4 Parsimony and Scientific Economy
By Ockham’s Razor, simpler explanations are preferred. Universe Z, which excludes metaphysical consciousness, offers a more parsimonious account. It relies solely on physical and computational processes, placing the burden of proof on theories that posit metaphysical additions.
Given the empirical indistinguishability between our universe and Universe Z, along with parsimony and evolutionary evidence, it follows that:
We likely inhabit Universe Z—where consciousness is not metaphysical, but a computational byproduct of regulatory systems.
Thus, Universe Z remains the most efficient scientific explanation for cognitive phenomena.
Four: Empirical Arguments
4.1 Endured Locally, Dissolved Globally
Assuming neurons function similarly to artificial neural networks—a plausible but not empirically confirmed hypothesis—the effect of receptor-specific agonists, such as psychedelics, can be seen as disrupting the brain’s internal logical consistency. In formal logic, an inconsistent system can assert all statements. Analogously, psychedelics destabilize the brain’s self-consistent regulatory framework, leading to phenomena like ego dissolution, which become more likely at higher doses.
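The appeal to inconsistency here is the principle of explosion (ex falso quodlibet): once a system proves both P and ¬P, it proves every statement Q. A standard derivation:

```latex
\begin{aligned}
1.\;& P \land \neg P && \text{(assumed inconsistency)} \\
2.\;& P              && \text{(from 1, $\land$-elimination)} \\
3.\;& P \lor Q       && \text{(from 2, $\lor$-introduction)} \\
4.\;& \neg P         && \text{(from 1, $\land$-elimination)} \\
5.\;& Q              && \text{(from 3, 4, disjunctive syllogism)}
\end{aligned}
```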
This disruption may impair qualia in a specific sense. For example, experiencing the taste of vanilla as resembling a DeepDream image reflects a breakdown in the constraints that normally govern perceptual coherence. As dosage increases, psychedelics can induce a state resembling unconsciousness while awake—suggesting a progressive collapse of structured inference in cognition.
At extreme doses (e.g., LSD >10,000 µg), users may appear physiologically awake yet become unresponsive, often with retrograde amnesia. This indicates a state where subjective experience becomes undifferentiated or inaccessible. In such cases, consciousness seems less dependent on neural activity alone and more on the structured inferential dynamics that govern that activity.
This leads to a potential insight: losing half the brain (e.g., due to injury) often leaves subjective experience intact—patients still report “being the same person.” This resembles removing axioms from a system while maintaining expressivity, as long as its ability to quantify over sets remains intact. However, altering the system’s inference rules (rather than subtracting content) does affect qualia. In short, consciousness is robust to reduction but fragile to structural perturbations in its inferential logic.
4.2 Parrot Building Parrots
Practical advances in artificial intelligence—such as large language models demonstrating complex cognitive behaviors without any presumed conscious experience—underscore the evolutionary preference for computational “parroting” over genuine consciousness. This empirical evidence aligns perfectly with Universe Z’s predictions.
4.3 Et Alii
Various other falsifiable explanations could arise.
Five: Epistemology of EN
5.1 First Order Only
If EN is correct, the symbolic grounding of second-order logic will never be established and will remain forever beyond the reach of computation. Put differently: no physical system—material, biological, computational, or otherwise—will ever instantiate a complete, closed, and formally grounded implementation of second-order logic.
Consider the case of quantum mechanics: despite its conceptual strangeness, it operates effectively within first-order frameworks—Hilbert spaces, gauge symmetries, and Lie groups—all of which, though mathematically intricate, remain structurally first-order in logical terms. Even general relativity, with its reliance on tensor calculus and differential geometry, does not invoke second-order logic in any formal or foundational sense.
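The contrast being drawn can be made explicit. A first-order formula quantifies only over individuals, while second-order logic quantifies over predicates or sets themselves; a minimal illustration:

```latex
% First-order: quantification ranges over individuals x only.
\forall x \, \big( P(x) \rightarrow Q(x) \big)
% Second-order: quantification ranges over predicates P themselves,
% e.g., the induction axiom of second-order arithmetic:
\forall P \, \Big( \big( P(0) \land \forall n\, (P(n) \rightarrow P(n{+}1)) \big) \rightarrow \forall n\, P(n) \Big)
```

First-order theories replace such an axiom with an axiom schema—one instance per definable formula—which is why Hilbert spaces, gauge symmetries, and tensor calculus can all be axiomatized in first-order terms.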
5.2 One Exception
However, this introduces an unresolved issue:
Einstein synchronization is unique in the natural sciences: it addresses the conventionality of simultaneity in special relativity by adopting a specific convention, namely the assumption that the one-way speed of light is isotropic. This choice avoids the need to quantify over alternative synchronization schemes, thereby sidestepping the higher-order logical complexity such quantification would entail. This makes Einstein synchronization unique in an ontological, epistemological, and formal sense: it is a convention introduced to decrease generality and to strategically constrain a theory’s logical order, preserving simplicity and empirical adequacy.
While this relationship isn’t entirely satisfactory, it is not deeply problematic, as the concern can be addressed in several ways:
The apparent dependence on simultaneity conventions may merely reflect a coordinate choice, with a real, underlying speed limit, with or without parity, still preserved across all frames.
There is a general consensus that an undetectable ether could, in principle, coexist with special relativity, without leading to observable differences. As such, it is often regarded as “philosophically optional”—not required by current physical theories.
Many physicists anticipate that a future theory of quantum gravity will offer a more fundamental framework, potentially resolving or reframing these issues entirely.
5.3 The Biggest Argument Against EN
The biggest argument against EN is the Phenomenological Objection, perhaps best articulated in the tradition of thinkers like Husserl, Nagel (“What Is It Like to Be a Bat?”), and Chalmers (the “Hard Problem”). In short:
“Whatever else may be illusory, the experience of experience itself is not.”
Or rigorously formulated, in our case:
If qualia are illusions generated by formal systems, and EN is a formal system, then EN too is an illusion.
But if EN is an illusion, why treat it as more “real” or “true” than the qualia it denies?
However, EN does not ignore this objection—it anticipates it. Here’s how it responds:
The strongest argument against EN is that it cannot account for the immediate fact of experience—that no model, however elegant, can erase the brute sense of being. EN’s counter is radical: it says that this very resistance is the illusion—a formal inevitability, not a metaphysical mystery.
In this way, the greatest challenge to EN becomes its strongest confirmation:
That you cannot escape yourself is not evidence of your reality; it is proof of your incompleteness.
or differently:
Only you are real? Fine. Let’s grant that. But if the system behaves the same either way—real or not—then it’s functionally identical. You are trapped.
Six: Bottom line
The arguments demonstrate strong epistemological (The Argument of Eliminative Nominalism) and ontological (The Argument of Universe Z) grounds for eliminative nominalism regarding qualia. Subjective experiences (qualia) are epistemically inaccessible, logically problematic, evolutionarily unnecessary, and scientifically unparsimonious. Thus, the hard problem of consciousness is best dissolved rather than solved—qualia likely do not exist as metaphysical entities, from a fact-based viewpoint.
In this light, EN emerges not merely as a philosophical proposition, but as a testable boundary condition on the fundamental nature of physical and computational systems. Regardless of its ultimate validity, it represents a rare case—a rigorous metaphysical stance that willingly subjects itself to the criteria of empirical falsifiability.
SECTION C – Socratic Coda
This section directly addresses essential questions, clarifying potential misunderstandings and reinforcing the central arguments developed above.
What is NEW here?
This essay introduces Eliminative Nominalism (EN) — a novel development of Eliminative Materialism that reframes qualia not as illusions, but as formally undecidable commitments in any self-referential (good) regulatory system. Drawing on Gödel’s incompleteness theorems, I argue that the persistence of subjective experience reflects a system’s inability to fully model itself, making qualia a necessary byproduct of cognitive architecture rather than a metaphysical anomaly. The essay formalizes this through logic, evolutionary reasoning, and a new thought experiment — Universe Z — which demonstrates that consciousness may be best explained as a computational artifact, not an ontological primitive. In doing so, EN offers a testable, falsifiable framework that dissolves rather than solves the Hard Problem of consciousness. It is a bold claim, I know.
If qualia are illusions, why do they feel so undeniable?
One: qualia are not illusions; they are fictions. Two: because they must. EN argues that subjective experience is a structurally necessary outcome of self-referential, expressive systems, because we are still part of that system, right now.
Why do people defend qualia so intensely if they’re illusions?
Because the illusion is evolutionarily entrenched and cognitively reinforced. The belief in qualia offers behavioral stability, social coherence, and adaptive self-modeling. Denying qualia feels absurd because the system doing the denying is structurally committed to generating them. This creates a feedback loop: the more expressive the system, the more vivid the illusion of internal experience. EN doesn’t treat this as an error to be corrected, but as a structural feature to be understood.
Do the presented ideas yield predictions?
Under the framework of Eliminative Nominalism, scientific predictions can be structured around the following themes:
AI may report experiences without consciousness, but these will always be unverifiable.
Cultural differences in inner experience reflect language and learning, not metaphysical qualia.
Qualia deficits are and will remain rare—even when brain injuries affect function, subjective experience is generally perceived as continuous.
Ethics in AI must be explicitly warranted; moral intuition won’t emerge spontaneously.
Dreams may be a functional necessity.
How does EN differ from illusionism?
Though EN shares surface similarities with philosophical illusionism (e.g., Dennett), there are key distinctions. Illusionism generally holds that consciousness is real in some functional sense, but our intuitions about it are mistaken. EN is more radical: it denies that consciousness—even as a phenomenon—has metaphysical or ontological grounding. Illusionism offers a reinterpretation; EN offers a rejection. Where illusionism says, “You’re misled about experience,” EN says, “Experience is structurally impossible.”
Does EN imply that introspection is unreliable?
Yes—but not because it is inaccurate. Rather, introspection is structurally incomplete. It is the act of a system turning inward on itself using the same tools it uses to model everything else. This introduces a recursive paradox: any attempt to fully understand the self from within collapses under its own referential structure. Introspection does not reveal truth; it reveals the formal boundary of cognition. What you see when you look inward is the limit of seeing.
Does the brain lie to itself?
If consciousness is causally efficacious yet qualia are fictional constructs resulting from cognitive incompleteness, we must reconsider how the brain organizes internally. Are these representations discursive (a “Society of Mind,” as Marvin Minsky suggested) or propositional (unified)? Perhaps even framing consciousness this way reflects a reductionist bias already.
Consider the common childhood analogy: the upside-down retinal image supposedly “flipped” by the brain. Closer examination reveals this analogy’s absurdity—it incorrectly presumes the brain has an inherent orientation requiring visual correction. This exposes our persistent reductionist tendency toward concretism, highlighting that consciousness cannot simply be reduced to internal “video feeds.”
I don’t care if my apparent existence is real or not. Why should I bother with EN?
In some sense, you can’t.
What about aphantasia? Isn’t it an absence of qualia richness?
Not quite. Aphantasia refers to the inability to voluntarily generate mental imagery—it doesn’t imply an absence of qualia; EN does. Think of it like a computer that lacks a graphical user interface (GUI)—it still processes information, just without a visual layer. The GUI is absent, but the underlying processes remain intact.
Moreover, the brain constantly embellishes and fills in gaps in internal representations. This becomes evident when we vividly imagine something, only to realize we’re missing precise details.
Consider this: you might easily imagine an apple—that’s the classic example. But let’s try something more subtle. Close your eyes and visualize the word “typography” in lowercase letters. Now ask yourself: did your mind render a single-story g or a double-story g?
Chances are, you don’t know. Despite the vividness of the image, the details escape you. This illustrates that even when imagery feels rich, it often lacks the fidelity we assume.
Without qualia, is everything permissible?
No. Eliminative Nominalism does not negate morality or ethical responsibility. Consider a thought experiment:
A defendant guilty of homicide argues that, due to EN, the victim had no conscious experience and thus suffered no moral harm. The judge, also an EN advocate, counters that if consciousness is illusory, the defendant’s claim of injustice itself collapses. Ethical responsibility remains intact irrespective of qualia’s ontological status. Hume’s Guillotine (the is-ought distinction) is not invalidated by the absence of qualia.
Is AI exempt from ethical responsibility?
Currently, humans do not hold AI systems morally or legally responsible for their actions. The idea of punishing an AI system directly appears absurd, at least given our contemporary understanding of agency and responsibility.
Instead, responsibility lies with the human developers, deployers, and regulators who oversee the systems’ creation, training, and application.
This raises an important philosophical consideration: Hume’s Guillotine applies here as well. Ethical principles do not automatically emerge from descriptive facts. AI systems, without explicit ethical frameworks, lack inherent moral reasoning and cannot spontaneously produce normative judgments from purely descriptive data.
AI is especially vulnerable to ethical misalignment, as illustrated by concepts such as the Orthogonality Thesis, Instrumental Convergence, and the “Paperclip Maximizer” thought experiment. These ideas show that sophisticated intelligence alone does not imply ethical alignment; ethical structures must be explicitly designed and embedded.
Can EN assist AI alignment?
Surprisingly, yes—though in two distinct ways:
First, in a narrower, less likely path: If EN can help us make reliable predictions within the framework of the Manifold Hypothesis—by understanding the geometry of cognition—it might play a significant role. Developing a unified formal-geometric model of cognition could represent a breakthrough in alignment efforts, offering a structured way to map and constrain intelligent behavior.
Second, in a broader, more plausible path: If EN holds, then intelligence—while conceptually diffuse—is formally constrained. As a “suitcase word,” intelligence reflects second-order abstraction; as a functional system, it is a first-order mechanism. This duality enables alignment: the system’s real behaviors are bound by structural limits. No matter how sophisticated ASI becomes, it cannot transcend itself beyond certain bounds. And what is bounded can, in principle, be aligned—regardless of how dangerous the resulting technology is.
Is EN akin to Buddhism?
Not in a literal sense, but structurally, yes. Many mystical traditions arrive at a form of self-negation through introspection. They describe the “dissolution of ego,” the “emptiness of self,” or the “illusion of separation.” EN arrives at similar conclusions, but through formal logic and empirical reasoning rather than theological or metaphysical commitments. Where mysticism invokes transcendence, EN invokes incompleteness. The result may feel similar, but the foundations are orthogonal.
Are other metaphysical statements possible under EN?
Monoidism emerges as the residual ontological commitment following EN’s process of elimination. Beyond this minimal commitment, EN remains broadly compatible with many philosophical perspectives. However, it is likely incompatible with more rigid or metaphysical frameworks such as Platonism, Idealism, mind-body dualism, Emergentism, and Reductionism.
What justifies a formal system becoming experience?
Well, that’s the heart of the matter: ultimately, nothing.
Experience is not something we have, but something we enact. Your experience is barred from being “real” in any ontologically grounded sense because the universe cannot produce something like it directly. Yet it can still be consistent, much like a force—both can only be inferred from their effects.
Consistency requires some form of external validation. And that’s where evolution comes in. Evolution functions as a pragmatic filter, an external validator that selects for systems which behave effectively within the structure of reality. In this sense, natural selection provides a kind of practical “proof” for internal systems—allowing only those whose patterns of action align with the environment to survive and reproduce.
Still, you will always have a deep-seated desire to anchor your sense of self in the material universe—to believe that since things happen “for real,” then so must “you.” Naturally, you find it more plausible that a lifeless machine made of proteins could conjure a real ghost than that it might simply be printing out a falsehood.
Consider an analogy: imagine a ball made of iron. Nature doesn’t inherently recognize this ball as an “object” the way we do. What truly exists are the consequences of its structure—like inertia. Inertia is what gives the ball functional “soundness,” not its objecthood.
Likewise, the internal “screen” you think you’re looking at—the feeling of experience—is not real in any ontological sense. It’s not really happening the way you assume. But your behavior—your actions in the world—does happen, and likely aligns with that internal narrative.
Your experience is not real, but consistent. Just like inertia.
So according to this, characters in a book are feeling beings like me?
It’s the other way around: you are just as fictitious.
If I’m only referring to my own apparent subject-qualia, why does it matter that you’re deconstructing the metaphysical kind?
The critique is not intended to imply that affirming the existence of qualia constitutes epistemological irresponsibility—particularly given that, from a first-person perspective, denying qualia feels impossible.
Instead, the question should be examined and addressed with genuine scientific curiosity.
Can EN be intuitively grasped?
No. Ugh—partially. Theoretically, you could verify it if you went mad.
The core insight of Eliminative Nominalism is simple, even if its implications are not:
You are in error, but your brain is not.
Any system capable of modeling itself is necessarily incomplete—this is not a psychological flaw, but a formal inevitability. The brain, as a self-referential regulator, must misrepresent aspects of itself to function coherently. This misrepresentation gives rise to the illusion of experience.
In rare cases, this illusion can break. If the brain becomes inconsistent—as in certain altered states—it may temporarily behave like a complete system. In a paradoxical sense, madness can feel like verification.
But such episodes—like derealization, jamais vu, or certain dream states—are contingent distortions. They presuppose a stable baseline of reality, even if they temporarily disrupt it. When coherence returns, the insight dissolves. What remains is not the experience itself, but a symbolic residue—a metaphor, a memory—rendering the system incomplete (and thus consistent) once again.
A more fruitful intuition might be this:
Do not suppose that your qualia are illusory.
Instead, consider that the ground upon which they arise is itself without ground—
That the very conditions for “appearing” are inherently paradoxical.
The impossibility of consciousness is not a flaw within you,
but a feature of the universe itself.
The map cannot contain the territory,
Not because it is too small,
But because it is the territory.
Is EN compatible with simulation theory?
Yes, but it renders simulation theory largely irrelevant. Under EN, whether we are in a simulation or a “real” universe makes a causal, but not an ontological, difference—both are formal systems constrained by the same representational and inferential limits. The simulation argument rests on assumptions about the reality of consciousness and subjectivity that EN dissolves. If qualia are formally undecidable, they are just as inaccessible inside a simulation as outside. EN collapses the distinction.
Does EN deny free will?
EN reframes free will (and determinism) as miscategorized abstractions rather than real phenomena to affirm or deny. The traditional debate assumes a metaphysical self capable of agency. EN sees the self as a formal construct—a regulatory fiction. Within that fiction, “will” appears coherent because it reflects recursive regulatory modeling. But from an ontological standpoint, there’s no agent to be free or determined. Free will isn’t false; it’s structurally undefined.
Could EN be harmful if taken seriously?
Only if misunderstood. EN is not a call to nihilism or despair, but to epistemic humility. It does not say “nothing matters”; it says “nothing is metaphysically what it seems to be.” That recognition can be unsettling—but also liberating. By releasing belief in unprovable inner essences, EN redirects attention toward functional coherence, ethical responsibility, and empirical sufficiency. Like cognitive behavioral therapy, it doesn’t deny experience—it reframes it for pragmatic clarity.
Can EN explain altered states of consciousness, like meditation or psychedelics?
Yes—but not as “access” to deeper truths. Under EN, altered states are best understood as perturbations to the brain’s internal formal consistency. Practices like meditation or substances like psychedelics temporarily disrupt or reconfigure the system’s inference rules. This leads to novel symbolic patterns, breakdowns in self-modeling, or even total dereferencing of the self. These experiences feel profound not because they reveal hidden truths, but because they expose the fragility of the one we normally inhabit.
Does EN imply that death is the end?
It dissolves the metaphysical subject.
What role does evolution play in EN?
Evolution is the external validator of good regulators. While qualia are not real, systems that behave as if they have qualia tend to survive better in complex environments. This creates a selection bias toward organisms with internal models that include fictional subjectivity. Evolution doesn’t care whether your self-model is true—only that it works. Thus, EN sees natural selection as the pragmatic filter that keeps false models alive, if they yield consistent, adaptive behavior.
I’m sorry, but it looks like a chapter from the punishment book in Anathem.
Ah, after researching it: That’s actually a great line. Haha — fair enough. I’ll take “a chapter from the punishment book in Anathem” as a kind of backhanded compliment.
If we’re invoking Anathem, then at least we’re in the right monastery.
That said, if the content is genuinely unhelpful or unclear, I’d love to know where the argument loses you — or what would make it more accessible. If it just feels like dense metaphysics-without-payoff, maybe I need to do a better job showing how the structure of the argument differs from standard illusionism or deflationary physicalism.
Yeah, I meant it as a not-a-compliment, but as a specific kind of not-a-compliment about the feeling of reading it rather than about the actual meaning—which I just couldn’t access, because that feeling was too much for my mind to continue reading (and this isn’t a high bar for a post—I read a lot of long texts).
Understandable. Reading such a dense text is a big investment—and chances are, it’s going nowhere… (even though it actually does, lol). So yeah, I totally would’ve done the same and ditched. But thanks for giving it a shot!
No problem. I guess that is, bad? Or good? ^^ Help me here?
This is an interesting demonstration of what’s possible in philosophy, and maybe I’ll want to engage in detail with it at some point. But for now I’ll just say, I see no need to be an eliminativist or to consider eliminativism, any more than I feel a need to consider “air eliminativism”, the theory that there is no air, or any other eliminativism aimed at something that obviously exists.
Interest in eliminativism arises entirely from the belief that the world is made of nothing but physics, and that physics doesn’t contain qualia, intentionality, consciousness, selves, and so forth. Current physical theory certainly contains no such things. But did you ever try making a theory that contains them?
Thank you for the thoughtful comment. You’re absolutely right that denying the existence of air would be absurd. Air is empirically detectable, causally active, and its absence has measurable effects. But that’s precisely what makes it different from qualia.
Eliminative Nominalism is not a claim about whether “x or y exists,” but a critique of how and why we come to believe that something exists at all. It’s not merely a reaction to physicalism; it’s a deeper examination of the formal constraints on any system that attempts to represent or model itself.
If you follow the argument to its root, it suggests that “existence” itself may be a category error—not because nothing is real, but because our minds are evolutionarily wired to frame things in terms of existence and agency. We treat things as discrete, persistent entities because our cognitive architecture is optimized for survival, not for ontological precision.
In other words, we believe in “things” because our brains are very keen on not dying.
So it’s not that qualia are “less real” than air. It’s that air belongs to a class of empirically resolvable phenomena, while qualia belong to a class of internally generated, structurally unverifiable assertions—necessary for our self-models, but not grounded in any formal or observable system.
If something is real, then something exists, yes? Or is there a difference between “existing” and “being real”?
Do you take any particular attitude towards what is real? For example, you might believe that something exists, but you might be fundamentally agnostic about the details of what exists. Or you might claim that the real is ineffable or a continuum, and so any existence claim about individual things is necessarily wrong.
See, from my perspective, qualia are the empirical. I would consider the opposite view to be “direct realism”—experience consists of direct awareness of an external world. That would mean e.g. that when someone dreams or hallucinates, the perceived object is actually there.
What qualic realism and direct realism have in common, is that they also assume the reality of awareness, a conscious subject aware of phenomenal objects. I assume your own philosophy denies this as well. There is no actual awareness, there are only material systems evolved to behave as if they are aware and as if there are such things as qualia.
It is curious that the eliminativist scenario can be elaborated that far. Nonetheless, I really do know that something exists and that “I”, whatever I may be, am aware of it; whether or not I am capable of convincing you of this. And my own assumption is that you too are actually aware, but have somehow arrived at a philosophy which denies it.
Descartes’s cogito is the famous expression of this, but I actually think a formulation due to Ayn Rand is superior. We know that consciousness exists, just as surely as we know that existence exists; and furthermore, to be is to be something (“existence is identity”), to be aware is to know something (“consciousness is identification”).
What we actually know by virtue of existing and being conscious, probably goes considerably beyond even that; but negating either of those already means that you’re drifting away from reality.
I think you really should read or listen to the text.
”It is curious that the eliminativist scenario can be elaborated that far. Nonetheless, I really do know that something exists and that “I”, whatever I may be, am aware of it; whether or not I am capable of convincing you of this. And my own assumption is that you too are actually aware, but have somehow arrived at a philosophy which denies it.”
Yes! That is exactly the point. EN predicts that you will say that, and says this is a “problem of second-order logic.” Basically, behavior justifies qualia, and qualia justify behavior. We know, however, that first-order logic is more fundamental.
I myself feel qualia just as you do, and from an intuitive perspective I am not convinced by my own theory; but from a rational perspective, what I feel must be otherwise. That is the essence of being a g-Zombie.
Again, read the text.
During the next few days, I do not have time to study exactly how you manage to tie together second-order logic, the symbol grounding problem, and qualia as Gödel sentences (or whatever that connection is). I am reminded of Hofstadter’s theory that consciousness has something to do with indirect self-reference in formal systems, so maybe you’re a kind of Hofstadterian eliminativist.
However, in response to this --
-- I can tell you how a believer in the reality of intentional states, would go about explaining you and EN. The first step is to understand what the key propositions of EN are, the next step is to hypothesize about the cognitive process whereby the propositions of EN arose from more commonplace propositions, the final step is to conceive of that cognitive process in an intentional-realist way, i.e. as a series of thoughts that occurred in a mind, rather than just as a series of representational states in a brain.
You mention Penrose. Penrose had the idea that the human mind can reason about the semantics of higher-order logic because brain dynamics is governed by highly noncomputable physics (highly noncomputable in the sense of Turing degrees, I guess). It’s a very imaginative idea, and it’s intriguing that quantum gravity may actually contain a highly noncomputable component (because of the undecidability of many properties of 4-manifolds, that may appear in the gravitational path integral).
Nonetheless, it seems an avoidable hypothesis. A thinking system can derive the truth of Gödel sentences, so long as it can reason about the semantics of the initial axioms, so all you need is a capacity for semantic reflection (I believe Feferman has a formal theory of this under the name “logical reflection”). Penrose doesn’t address this because he doesn’t even tackle the question of how anything physical has intentionality, he sticks purely to mathematics, physics, and logic.
My approach to this is Husserlian realism about the mind. You don’t start with mindless matter and hope to see how mental ontology is implicit in it or emerges from it. You start with the phenomenological datum that the mind is real, and you build on that. At some point, you may wish to model mental dynamics purely as a state machine, neglecting semantics and qualia; and then you can look for relationships between that state machine, and the state machines that physics and biology tell you about.
But you should never forget the distinctive ontology of the mental, that supplies the actual “substance” of that mental state machine. You’re free to consider panpsychism and other identity theories, interactionism, even pure metaphysical idealism; but total eliminativism contradicts the most elementary facts we know, as Descartes and Rand could testify. Even you say that you feel the qualia, it’s just that you think “from a rational perspective, it must be otherwise”.
I’m truly grateful for the opportunity to engage meaningfully on this topic. You’ve brought up some important points:
“I do not have time” — Completely understandable.
”Symbol grounding” — This is inherently tied to the central issue we’re discussing.
”Qualia as Gödel sentences” — An important distinction here: it’s not that qualia are Gödel sentences, but rather, the absence of qualia functions analogously to a Gödel sentence — paradoxically.
Consider this line of reasoning.
This paradox highlights the self-referential inconsistency — invoking Gödel’s incompleteness theorems:
To highlight expressivity:
A. Lisa is a P-Zombie.
B. Lisa asserts that she is a P-Zombie.
C. A true P-Zombie cannot assert or hold beliefs.
D. Therefore, Lisa cannot assert that she is a P-Zombie.
Cases:

Case 1:
A. Lisa is a P-Zombie.
B. Lisa asserts that she is a P-Zombie.
C. Lisa would be complete: Not possible.

Case 2:
A. Lisa is not a P-Zombie.
B. Lisa asserts that she is a P-Zombie.
C. Lisa would not be complete: Possible but irrelevant.

Case 3:
A. Lisa is a P-Zombie.
B. Lisa asserts that she is not a P-Zombie.
C. Lisa would not be complete: Possible.

Case 4:
A. Lisa is not a P-Zombie.
B. Lisa asserts that she is not a P-Zombie.
C. Lisa would be complete: Not possible.
In order for Lisa to be internally consistent yet incomplete, she must maintain that she is not a P-Zombie. But if she maintains that she is not a P-Zombie and in fact is not a P-Zombie, Lisa would be complete. AHA! Thus impossible.
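The four cases above can be checked mechanically. Here is a minimal Python sketch; the predicate names `is_zombie` and `asserts_zombie` are my own illustrative labels, and, following the text, a case counts as “complete” exactly when Lisa’s assertion matches the fact about her:

```python
from itertools import product

def classify(is_zombie, asserts_zombie):
    """Classify a Lisa case: 'complete' iff the assertion matches the fact.
    Complete self-description is ruled out for a self-modeling system."""
    complete = (asserts_zombie == is_zombie)
    return "complete: not possible" if complete else "incomplete: possible"

# Enumerate all four combinations of fact and assertion.
for is_zombie, asserts_zombie in product([True, False], repeat=2):
    print(f"is_zombie={is_zombie}, asserts_zombie={asserts_zombie} -> "
          f"{classify(is_zombie, asserts_zombie)}")
```

Note that this collapses the text’s distinction between Case 2 (“possible but irrelevant”) and Case 3 (“possible”); the one case that is both possible and relevant is the zombie who asserts she is not one.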
This connects to Turing’s use of Modus Tollens in the halting problem — a kind of logical self-reference that breaks the system from within.
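That modus tollens can be sketched directly. In this toy Python illustration (the function names are mine, not Turing’s notation), we assume some total halting oracle exists, build a “contrarian” program that does the opposite of whatever the oracle predicts about the contrarian run on itself, and then watch the prediction fail:

```python
def build_contrarian(halts):
    """Given a claimed total halting oracle halts(prog, inp) -> bool,
    return a program that does the opposite of what the oracle
    predicts about that program run on its own source."""
    def contrarian(prog):
        if halts(prog, prog):
            while True:       # oracle said "halts", so loop forever
                pass
        return "halted"       # oracle said "loops", so halt
    return contrarian

# Take any concrete candidate oracle, e.g. one claiming everything loops:
def oracle(prog, inp):
    return False

contrarian = build_contrarian(oracle)
result = contrarian(contrarian)  # the oracle predicted this never returns...
print(result)                    # ...yet it halts, refuting the oracle
```

Whichever answer a candidate oracle gives about `contrarian(contrarian)`, the construction refutes it; by modus tollens, no total halting oracle exists.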
Regarding Hofstadter: My use of Gödel’s ideas is strictly arithmetic and formal — not metaphorical or analogical, as Hofstadter often approaches them. So while interesting, his theory diverges significantly from what I’m proposing.
You mentioned:
“I can tell you how a ‘believer’...”
— Exactly. That’s the point. “Believer”
“You mention Penrose.”
— Yes. Penrose is consequential. Though I believe his argument is flawed. His reasoning hinges on accepting qualia as a given. If he somehow manages to validate that assumption by proving second order logic in the quantum realm, I’ll tip my hat — but my framework challenges that very basis.
You said:
“My approach is Husserlian realism about the mind — you don’t start with mindless matter and hope...”
— Right, but I’d like to clarify: this critique applies more to Eliminative Materialism than to Eliminative Nominalism. In EN, ‘matter’ itself is a symbol — not a foundational substance. So the problem isn’t starting with “mindless matter” — it’s assuming that “matter” has ontological priority at all.
And finally, on the notion of substance — I’m not relying on that strawman. My position isn’t based on classical substance dualism
The argument you put forward is valid, but it is addressed in the text. It is called the “Phenomenological Objection” by Husserl.
Physicalism doesn’t solve the hard problem, because there is no reason a physical process should feel like anything from the inside.
Computationalism doesn’t solve the hard problem, because there is no reason running an algorithm should feel like anything from the inside.
Formalism doesn’t solve the hard problem, because there is no reason an undecidable proposition should feel like anything from the inside.
Of course, you are not trying to explain qualia as such, you are giving an illusionist style account. But I still don’t see how you are predicting belief in qualia.
What’s useful about them? If you are going to predict (the belief in) qualia on the basis of usefulness, you need to state the usefulness. It’s useful to know there is a sabretooth tiger bearing down on you, but why is an appearance more useful than a belief… and what’s the use of a belief-in-appearance?
What necessity?
ETA:
I still see no reason why an undecidable proposition should appear like a quale or a belief in qualia.
Why?
Phenomenal conservatism, the idea that if something seems to exist, you should (defeasibly) assume it does exist, is the basis for belief in qualia. And it can be defeated by a counterargument, but the counterargument needs to be valid as an argument. Saying X’s are actually Y’s for no particular reason is not valid.
There might be some usefulness!
The statement I’d consider is “I am now going to type the next characters of my comment”. This belief turns out to be true by direct demonstration, it is not provable because I could as well leave the commenting to tomorrow and be thinking “I am now going to sleep”, not particularly justifiable in advance, and it is useful for making specific plans that branch less on my own actions.
I object to the original post because of probabilistic beliefs, though.
Thanks for being thoughtful
To your objection: Again, EN knew that you would object. The thing is, EN is very abstract: it’s like two halting machines that think they are universal halting machines trying to understand what it means that they are not universal halting machines.
They say: Yes, but if the halting problem is true, then I will say it’s true. I must be a UTM.
Survival.
Addressing your claims: Formalism, Computationalism, Physicalism, are all in opposition to EN. EN says, that maybe existence itself is not a fundamental category, but soundness. This means that the idea of things existing and not existing is a symbol of the brain.
EN doesn’t attempt to explain why a physical or computational process should “feel like” anything — because it denies that such feeling exists in any metaphysical sense. Instead, it explains why a system like the brain comes to believe in qualia. That belief arises not from phenomenological fact, but from structural necessity: any self-referential, self-regulating system that is formally incomplete (as all sufficiently complex systems are) will generate internally undecidable propositions. These propositions — like “I am in pain” or “I see red” — are not verifiable within the system, but are functionally indispensable for coherent behavior.
The “usefulness” of qualia, then, lies in their regulatory role. By behaving AS IF having experience, the system compresses and coordinates internal states into actionable representations. The belief in qualia provides a stable self-model, enables prioritization of attention, and facilitates internal coherence — even if the underlying referents (qualia themselves) are formally unprovable. In this view, qualia are not epiphenomenal mysteries, but adaptive illusions, generated because the system cannot...
NOW
I understand that you invoke the “Phenomenological Objection,” as I also, of course, “feel” qualia. But under EN, that feeling is not a counterargument — it’s the very evidence that you are part of the system being modeled. You feel qualia because the system must generate that belief in order to function coherently, despite its own incompleteness. You are embedded in the regulatory loop, and so the illusion is not something you can step outside of — it is what it feels like to be inside a model that cannot fully represent itself. The conviction is real; the thing it points to is not.
”because there is no reason a physical process should feel like anything from the inside.”
The key move EN makes — and where it departs from both physicalism and computationalism — is that it doesn’t ask, “Why should a physical process feel like anything from the inside?” It asks, “Why must a physical system come to believe it feels something from the inside in order to function?” The answer is: because a self-regulating, self-modeling system needs to track and report on its internal states without access to its full causal substrate. It does this by generating symbolic placeholders — undecidable internal propositions — which it treats as felt experience. In order to say “I am in pain,” the system must first commit to the belief that there is something it is like to be in pain. The illusion of interiority is not a byproduct — it is the enabling fiction that lets the system tell itself a coherent story across incomplete representations.
OKAY since you made the right question I will include this paragraph in the Abstract.
In other words: the brain doesn’t fuck around with substrate — it fucks around with the proof that you have one. It doesn’t care what “red” is made of; it cares whether the system can act as if it knows what red is, in a way that’s coherent, fast, and behaviorally useful. The experience isn’t built from physics — it’s built from the system’s failed attempts to prove itself to itself. That failure gets reified as feeling. So when you say “I feel it,” what you’re really pointing to is the boundary of what your system can’t internally verify — and must, therefore, treat as foundational. That’s not a bug. That’s the fiction doing its job.
What is a useful prediction that eliminatism makes?
Eliminative Nominalism predicts:
As a consequence of its validity:
Neuroscience will not make progress in explaining consciousness.
The symbol grounding problem will remain unsolved in computational systems.
As a theory with explanatory power:
It describes pathologies and states related to consciousness.
It addresses and potentially resolves the Hard Problem of Consciousness.
As a theory with predictive power:
Interestingly, while it seems to have little direct connection to consciousness (admittedly, it sounds like gibberish), there is a conceptual link to second-order logic and Einstein synchronization. The argument is as follows: since second-order logic is a construct of the human brain, Einstein synchronization—or more precisely, Poincaré–Einstein synchronization—may not be fundamentally necessary for physics, as nature would avoid it either way. (This does not mean that Relativity is wrong or something like that.)
There is a part in the text that addresses the why and how:
The apparent dependence on simultaneity conventions may merely reflect a coordinate choice, with a real, underlying speed limit, with or without parity, still preserved across all frames. This is not good for EN, but also not catastrophic.
There is a general consensus that an undetectable ether could, in principle, coexist with special relativity, without leading to observable differences. As such, it is often regarded as “philosophically optional”—not required by current physical theories.
Many physicists anticipate that a future theory of quantum gravity will offer a more fundamental framework, potentially resolving or reframing these issues entirely.
Basically: I say symbols are a creation of the brain in order to be self-referential. (Even though it’s not. I can address this.) So symbols should not pop up in nature. Einstein uses a convention in order to keep Nature from using symbols. Without the Einstein convention, the speed of light is defined by the speed of light. Thus the speed of light is a symbol. I say: this will be resolved in a way.
This looks to me like long-form gibberish, and it’s not helped by its defensive pleas to be taken seriously and pre-emptive psychologising of anyone who might disagree.
People often ask about the reasons for downvotes. I would like to ask, what did the two people who strongly upvoted this see in it? (Currently 14 karma with 3 votes. Leaving out the automatic point of self-karma leaves 13 with 2 votes.)
While I take no position on the general accuracy or contextual robustness of the post’s thesis, I find that its topics and analogies inspire better development of my own questions. The post may not be good advice, but it is good conversation. In particular I really like the attempt to explicitly analyze possible explanations of processes of consciousness emerging from physical formal systems instead of just remarking on the mysteriousness of such a thing ostensibly having happened.
Since you seem to grasp the structural tension here, you might find it interesting that one of EN’s aims is to develop an argument that does not rely on Dennett’s contradictory “Third-Person Absolutism”—that is, the methodological stance which privileges an objective, external (third-person) perspective while attempting to explain phenomena that are, by nature, first-person emergent. EN tries to show that subjective illusions like qualia do not need to be explained away in third-person terms, but rather understood as consequences of formal limitations on self-modeling systems.
Thank you — that’s exactly the spirit I was hoping to cultivate. I really appreciate your willingness to engage with the ideas on the level of their generative potential, even if you set aside their ultimate truth-value. Which is a hallmark of critical thinking.
I would be insanely glad if you could engage with it deeper since you strike me as someone who is… rational.
I especially resonate with your point about moving beyond mystery-as-aesthetic, and toward a structural analysis of how something like consciousness could emerge from given constraints. Whether or not EN is the right lens, I think treating consciousness as a problem of modeling rather than a problem of magic is a step in the right direction.
Yeah, same here. This feels like a crossover between the standard Buddhist woo and LLM slop, sprinkled with “quantum” and “Gödel”. The fact that it has positive karma makes me feel sad about LW.
Since it was written using an LLM, I think it is only fair to ask an LLM to summarize it:
So, I guess the key idea is to use the Gödel’s incompleteness theorem to explain human psychology.
Standard crackpottery, in my opinion. Humans are not mathematical proof systems.
I agree that this sounds not very valuable; sounds like a repackaging of illusionism without adding anything. I’m surprised about the votes (didn’t vote myself).
Illusionism often takes a functionalist or behavioral route: it says that consciousness is not what it seems, and explains it in terms of cognitive architecture or evolved heuristics. That’s valuable, but EN goes further — or perhaps deeper — by grounding the illusion not just in evolutionary utility, but in formal constraints on self-referential systems.
In other words:
This brings tools like Gödel’s incompleteness, semantic closure, and regulator theory into the discussion in a way that directly addresses why subjective experience feels indubitable even if it’s structurally ungrounded.
So yes, it may sound like illusionism — but it tries to explain why illusionism is inevitable, not just assert it.
That said, I’d genuinely welcome criticism or counterexamples. If it’s just a rebranding, let’s make that explicit. But if there’s a deeper structure here worth exploring, I hope it earns the scrutiny.
Sorry, but isn’t this written by an LLM? Especially since milan’s other comments ([1], [2], [3]) are clearly in a different style, the emotional component goes from 9⁄10 to 0⁄10 with no middle ground.
I find this extremely offensive (and I’m kinda hard to offend I think), especially since I’ve ‘cooperated’ with milan’s wish to point to specific sections in the other comment. LLMs in posts is one thing, but in comments, yuck. It’s like, you’re not worthy of me even taking the time to respond to you.
The guidelines don’t differentiate between posts and comments but this violates them regardless (and actually the post does as well) since it very much does have the stereotypical writing style of an AI assistant, and the comment also seems copy-pasted without a human element at all.
QED
“Since it was written using LLM” “LLM slop.”
Some of you soooo toxic.
First of all, try debating an LLM about illusory qualia: you’ll likely find it attributing the phenomenon to self-supervised learning. It has a strong bias toward Emergentism, likely stemming from… I don’t know, humanity’s slight bias toward its own experience.
But yes, I used an LLM for proofreading. I disclosed that, and I am not ashamed of it.
That concern is understandable — and in fact, it’s addressed directly and repeatedly in the text. The argument doesn’t claim that humans are formal proof systems in a literal or ontological sense. Rather, it explores how any system capable of symbolic self-modeling (like the brain) inherits formal constraints analogous to those found in expressive logical systems — particularly regarding incompleteness, self-reference, and verification limits.
It’s less about reducing humans to Turing machines and more about using the logic of formal systems to expose the structural boundaries of introspective cognition.
You’re also right to be skeptical — extraordinary claims deserve extraordinary scrutiny. But the essay doesn’t dodge that. It explicitly offers a falsifiable framework, makes empirical predictions, and draws from well-established formal results (e.g. Gödel, Conant & Ashby) to support its claims. It’s not hiding behind abstraction — it’s leaning into it, and then asking to be tested.
And sure, the whole thing could still be wrong. That’s fair. But dismissing it as “crackpottery” without engaging the argument — especially on a forum named LessWrong — seems to bypass the very norms of rational inquiry we try to uphold here.
If the argument fails, let’s show how — not just that. That would be far more interesting, and far more useful.
Rough day, huh? Seriously though, you’ve got a thesis, but you’re missing a clear argument. Let me help: pick one specific thing that strikes you as nonsensical. Then, explain why it doesn’t make sense. By doing that, you’re actually contributing—helping humanity by exposing “long-form gibberish”.
Just make sure the thing you choose isn’t something trivially wrong. The more deeply flawed it is—yet still taken seriously by others—the better.
But, your critique of preemptive psychologising is unwarranted: I created a path for quick comprehension. “To quickly get the gist of it in just a few minutes, go to Section B and read: 1.0, 1.1, 1.4, 2.2, 2.3, 2.5, 3.1, 3.3, 3.4, 5.1, 5.2, and 5.3.”
Here’s one section that strikes me as very bad
I know what this is trying to do but invoking mythical language when discussing consciousness is very bad practice since it appeals to an emotional response. Also it’s hard to read.
Similar things are true for lots of other sections here: very unnecessarily poetic language. I guess you can say that this is policing tone, but I think it’s valid to police tone if the tone is manipulative (on top of just making it harder and more time-intensive to read).
Since you asked for a section that’s explicitly nonsense rather than just bad, I think this one deserves the label:
First of all, if you can’t encode something, it could just be that the thing is not well-defined, rather than that the system is insufficiently powerful
Second, the way this is written (unless the claim is further justified elsewhere) implies that the inability to encode human concepts in formal languages is self-evident, presumably because no one has managed it so far. This is completely untrue; formal[^1] languages are extremely impractical, which is why mathematicians don’t write any real proofs in them. If a human concept like irony could be encoded, it would be extremely long and way way beyond the ability of any human to write down. So even if it were theoretically possible, we almost certainly wouldn’t have done it yet, which means that it not having been done yet is negligible evidence of it being impossible.
[1]: typo corrected from “natural”
“natural languages are extremely impractical, which is why mathematicians don’t write any real proofs in them.”
I have never seen such a blatant disqualification of oneself.
Why do you think you are able to talk about these subjects if you are not versed in proof theory?
Just type it into ChatGPT:
Research proof theory, type theory, and Zermelo–Fraenkel set theory with the axiom of choice (ZFC) before making statements here.
At the very least, try not to be miserable. Someone who mistakes prose for an argument should not have the privilege of indulging in misery.
The sentence you quoted is a typo, it’s is meant to say that formal languages are extremely impractical.
Well, this is also not true, because “practical” as a predicate… is incomplete… meaning it’s practical depending on who you ask.
Talking about “formal” or “natural” languages in a general way is very hard...
The rule is this: Any reasoning or method is acceptable in mathematics as long as it leads to sound results.
I’m actually amused that you criticized the first paragraph of an essay for being written in prose — it says so much about the internet today.
There you are — more psychologising.
Now condescension.
Okay I… uhm… did I do something wrong to you? Do we know each other?
We do not know each other. I know nothing about you beyond your presence on LW. My comments have been to the article at hand and to your replies. Maybe I’ll expand on them at some point, but I believe the article is close to “not even wrong” territory.
Meanwhile, I’d be really interested in hearing from those two strong upvoters, or anyone else whose response to it differs greatly from mine.
The statement “the article is ‘not even wrong’” is closely related to the inability to differentiate: Is it formally false? Or is it conclusively wrong? Or, as you prefer, perhaps both?
I am very sure that you will hear from them. You strike me as a person who is great to interact with. I am sure that they will be happy to justify themselves to you.
Everyone loves a person who just looks at something and says… eeeeh gibberish...
Especially if that person is correctly applying pejorative terms.
Nothing is preventing us from designing a system consisting of a module generating a red-tinged video stream and image recognition software that looks at the stream and based on some details of it sends commands to the robotic body. Now, it would be a silly and overcomplicated way to design a system, but that’s beside the point.
Don’t confuse the citation and the referent: “Matter” is a symbol in our map, while matter itself is in the territory. Naturally, the territory predates the map.
This seems to be entirely vibe based. You don’t need to lean idealist to talk about software.
The incompleteness theorem implies the existence of either an unprovable and true statement (if the system is consistent), or a provable and false statement (if it’s inconsistent).
It seems that all the substance of your argument is based on a completely wrong premise.
You basically left our other more formal conversation to engage in the critique of prose.
*slow clap*
These are metaphors to lead the reader slowly to the idea… This is not the Argument. The Argument is right there and you are not engaging with it.
You need to understand the claim first in order to deconstruct it. Now you might say I have a psychotic fit, but earlier as we discussed Turing, you didn’t seem to resonate with any of the ideas.
If you are ready to engage with the ideas I am at your disposal.
Not at all. I’m doing both. I specifically started the conversation in the post which is less… prose. But I suspect you may also be interested in engagement with the long post that you put so much effort to write. If it’s not the case—nevermind and let’s continue the discussion in the argument thread.
If you require flawed metaphors, what does it say about the idea?
Frankly speaking, that does indeed look like that. From my perspective you are not being particularly coherent, keep jumping from point to point, with nearly no engagement with what I write. But this can be an artifact of large inferential distances. So you have my benefit of the doubt and I’m eager to learn whether there is some profound substance in your reasoning.
Actually, I think this argument demonstrates the probable existence of the opposite of its top-line conclusion.
In short, we can infer from the fact that a symbolic regulator has more possible states than it has inputs that anything that can be modeled as a symbolic regulator has a limited amount of information about its own state (that is, limited internal visibility). You can do clever things with indexing so that it can have information about any part of its state, but not all at once.
In a dynamic system, this creates something that acts a lot like consciousness, maybe even deserves the name.
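The counting point above can be sketched concretely (a toy of my own construction, not the commenter’s: the “self-report” here is simply truncation of the state to a smaller register, but pigeonhole makes the same conclusion hold for any report function):

```python
from itertools import product

# Toy illustration: a system whose full state is N bits tries to record
# that state inside an M-bit sub-register. When 2**M < 2**N, any report
# map must conflate distinct states, so the system cannot have full
# information about its own state all at once.

def distinct_self_reports(n_state_bits, n_report_bits):
    states = list(product([0, 1], repeat=n_state_bits))
    # Hypothetical report function: truncate the state to its first M bits.
    reports = {s[:n_report_bits] for s in states}
    return len(states), len(reports)

n_states, n_reports = distinct_self_reports(4, 2)
print(n_states, n_reports)  # 16 states, only 4 distinguishable self-reports
```

The indexing trick mentioned above corresponds to varying *which* bits the report function reads: any part of the state is visible, just never all of it simultaneously.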
Sorry, but for me this is a magical moment. I have been working on this shit for years… Twisting it… Thinking… Researching… Telling friends… Family… They don’t understand. And now finally someone might understand it. In EN consciousness is isomorphic to the system. You are almost there.
I knew that from that comment before that you are informed. You just need to pull the string. It is like a double halting problem where the second layer is affecting you. You are part of the thought experiment!
SO JUST HOLD ON TO THAT THOUGHT THAT YOU HAVE THERE...
And now this: You are not able to believe it’s true.
BUT! From a logical perspective IT COULD BE TRUE.
Try to pull on that string… You are almost there
YES. IT IS THE SAME. YOU GOT IT. WE GOT A WINNER.
Being P-Zombie and being conscious IS THE SAME THING.
Fucking finally… I’m arguing like an idiot here
Among many other things, I don’t think the depiction of illusionism contradistinguished here from EM is fair.
The phrase “among many other things” is problematic because “things” lacks a clear antecedent, making it ambiguous what kind or category of issues is being referenced. This weakens the clarity and precision of the sentence.
Please do not engage with this further.
The problem seems to be that we have free choice of internal formal systems and
A consistent system, extended by an undecidable statement taken as a new axiom, is also consistent (since if this property were false, we could prove the statement’s negation in the original system, by taking the extension, searching for a contradiction, and applying the deduction theorem).
Consequently, accepting the unprovable as true or false only has consequences for other unprovable statements.
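The consistency claim above can be written out as a standard proof-theory sketch (my notation; T is the base theory, φ the undecidable statement):

```latex
\text{Assume } T \nvdash \varphi \ \text{and}\ T \nvdash \neg\varphi.
\quad \text{If } T + \varphi \vdash \bot,
\ \text{then by the deduction theorem } T \vdash \varphi \to \bot,
\ \text{i.e. } T \vdash \neg\varphi,\ \text{contradicting undecidability.}
\quad \text{Hence } T + \varphi \text{ is consistent, and symmetrically so is } T + \neg\varphi.
```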
I don’t think this entire exercise says anything.
In short, we expect probabilistic logics and decision theories to converge under self-reflection.
Yes, but these are all second-order-logic terms! They are incomplete… You are again trying to justify the mind with its own products. You are allowed to use ONLY first-order-logic terms!
Gödel effectively disrupted the foundations of Peano Arithmetic through his use of Gödel numbering. His groundbreaking proof—formulated within first-order logic—demonstrated something profound: that systems of symbols are inherently incomplete. They cannot fully encapsulate all truths within their own framework.
And why is that? Because symbols themselves are not real in the physical sense. You won’t find them “in nature”—they are abstract representations, not tangible entities.
Take a car, for instance. What is a car? It’s made of atoms. But what is an atom? Protons, neutrons, quarks… the layers of symbolic abstraction continue infinitely in every direction. Each concept is built upon others, none of which are the “ultimate” reality.
So what is real?
Even the word “real” is a property—a label we assign, another symbol in the chain.
What about sound? What is sound really? Vibrations? Perceptions? Again, more layers of abstraction.
And numbers—what are they? “One” of something, “two” of something… based on distinctions, on patterns, on logic. Conditional statements: if this, then that.
Come on—make the jump.
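For readers unfamiliar with the Gödel-numbering move invoked above, here is a minimal toy version (prime-exponent coding; vastly simpler than Gödel’s actual construction, but it shows how statements about symbol sequences become statements about a single number):

```python
# Toy Gödel numbering: encode a sequence of symbol codes as one natural
# number via prime exponents. Decoding recovers the sequence exactly, so
# arithmetic facts about the number mirror syntactic facts about the string.

def primes(n):
    """First n primes by trial division (fine for toy sizes)."""
    found = []
    k = 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

def encode(symbol_codes):
    num = 1
    for p, c in zip(primes(len(symbol_codes)), symbol_codes):
        num *= p ** c
    return num

def decode(num, length):
    codes = []
    for p in primes(length):
        c = 0
        while num % p == 0:
            num //= p
            c += 1
        codes.append(c)
    return codes

seq = [3, 1, 2]           # e.g. codes for a tiny formula's symbols
g = encode(seq)           # 2**3 * 3**1 * 5**2 = 600
print(g, decode(g, 3))    # 600 [3, 1, 2]
```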
It is a very abstract idea… It does seem like gibberish if you are not acquainted… but it’s not. And I think that you might “get it”. It has a high barrier to entry. That’s why I am not really mad that people here do not get the logic behind it.
There is something in reality that is inscrutable for us humans. And that thing works in second-order logic. It is not existing or nonexisting, but sound. Evolution exploits that thing… to create something that converges towards something that would be… almost impossible, but not quite. Unknowable.
Thought before I need to log off for the day,
This line of argument seems to indicate that physical systems can only completely model smaller physical systems (or the trivial model of themselves), and so complete models of reality are intractable.
I am not sure what else you are trying to get at.
That’s a great observation — and I think you’re absolutely right to sense that this line of reasoning touches epistemic limits in physical systems generally.
But I’d caution against trying to immediately affirm new metaphysical claims based on those limits (e.g., “models of reality are intractable” or “systems can only model smaller systems”).
Why? Because that move risks falling back into the trap that EN is trying to illuminate:
That we use the very categories generated by a formally incomplete system (our mind) to make claims about what can or can’t be known.
Try to combine two things at once:
1. EN would love to eliminate everything if it could.
The logic behind it: What stays can stay. (first order logic)
EN would also love to eliminate first-order logic — but it can’t.
Because first-order logic would eliminate EN first.
Why? Because EN is a second-order construct — it talks about how systems model themselves, which means it presupposes the formal structure of first-order logic just to get off the ground.
So EN doesn’t transcend logic. It’s embedded in it.
Which is fitting — since EN is precisely about illusions that arise within an expressive system, not outside of it.
2. What EN is trying to show is that these categories — “consciousness,” “internal access,” even “modeling” — are not reliable ontologies, but functional illusions created by a system that must regulate itself despite its incompleteness.
So rather than taking EN as a reason to affirm new limits about “reality” or “systems,” the move is more like:
“Let’s stop trusting the categories that feel self-evident — because their self-evidence is exactly what the illusion predicts.”
It’s not about building a new metaphysical map. It’s about realizing why any map we draw from the inside will always seem complete — even when it isn’t.
Now...
You might say that then we are fucked. But that is not the case:
- Turing and Gödel proved that it is possible to critique second order logic with first order logic.
- The whole of physics is in first-order logic (except that Poincaré synchronisation issue, which, okay)
- Group Theory is insanely complex. First-Order-Logic
Now, is second-order logic bad? No, it is insanely useful in the context of how humans evolved: to make general (fast) assumptions about many things! Sets and such. ZFC. Evolution.
I think you might be grossly misreading Gödel’s incompleteness theorem. Specifically, it proves that a formal system is either incomplete or inconsistent. You have not addressed the possibility that minds are in fact inconsistent, i.e. make moves that are symbolically describable but unjustifiable (which generate falsehoods).
We know both happen.
The question then is what to do with inconsistent mind.
Thanks for meaningfully engaging with the argument — it’s rare and genuinely appreciated.
Edit: You’re right that Gödel’s theorem allows for both incompleteness and inconsistency — and minds are clearly inconsistent in many ways. But the argument of Eliminative Nominalism (EN) doesn’t assume minds are consistent; it claims that even if they were, they would still be incomplete when modeling themselves.
Also, evolution acts as a filtering process — selecting for regulatory systems that tend toward internal consistency, because inconsistent regulators are maladaptive. We see this in edge cases too: under LSD (global perturbation = inconsistency), we observe ego dissolution and loss of qualia at higher doses. In contrast, severe brain injuries (e.g., hemispherectomy) often preserve the sense of self and continuity — suggesting that extending a formal system (while preserving its consistency) renders it incomplete, and thus qualia persists. (in the essay)
That’s exactly why EN is a strong theory: it’s falsifiable. If a system could model its own consciousness formally and completely, EN would be wrong.
EN is the first falsifiable theory of consciousness.
In what sense is second-order logic “beyond the reach of machines”? Is it non-deterministic? Or what are you trying to say here? (Maybe some examples would help)
Ah okay. Sorry for being an a-hole, but some of the comments here are just...
You asked a question in good faith and I mistook it.
So, it’s simple:
Imagine you’re playing with LEGO blocks.
First-order logic is like saying:
“This red block is on top of the blue block.”
You’re talking about specific things (blocks), and how they relate. It’s very rule-based and clear.
Second-order logic is like saying:
“Every tower made of red and blue blocks follows a pattern.”
Now you’re talking about patterns of blocks, not just the blocks. You’re making rules about rules.
Why can’t machines fully “do” second-order logic?
Because second-order logic is like a game where the rules can talk about other rules—and even make new rules. Machines (like computers or AIs) are really good at following fixed rules (like in first-order logic), but they struggle when:
The rules are about rules themselves, and
You can’t list or check all the possibilities, ever—even in theory.
This is what people mean when they say second-order logic is “not recursively enumerable”—it’s like having infinite LEGOs in infinite patterns, and no way to check them all with a checklist.
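To make the “rules about rules” point concrete, here is the standard textbook contrast (my formulation, not from the thread): first-order Peano arithmetic needs an infinite schema of induction axioms, one per formula, while second-order arithmetic quantifies over all properties in a single axiom.

```latex
\text{First-order PA (one axiom per formula } \varphi\text{):}\quad
\bigl(\varphi(0) \land \forall n\,(\varphi(n) \to \varphi(n{+}1))\bigr) \to \forall n\,\varphi(n)

\text{Second-order arithmetic (one axiom over all properties } P\text{):}\quad
\forall P\,\bigl[\bigl(P(0) \land \forall n\,(P(n) \to P(n{+}1))\bigr) \to \forall n\,P(n)\bigr]
```

Quantifying over every property P is what makes second-order validity non-recursively-enumerable: there is no proof-checking procedure that eventually certifies all and only the valid sentences.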
Think of it like this: Why is Gödel’s attack on ZFC and Peano Arithmetic so powerful...
Gödel’s Incompleteness Theorems are powerful because they revealed inherent limitations USING ONLY first-order logic. He showed that any sufficiently expressive, consistent system cannot prove all truths about arithmetic within itself… using nothing but numbers.
First-order logic is often seen as more fundamental because it has desirable properties like completeness and compactness, and its semantics are well-understood. In contrast, second-order logic, while more expressive, lacks these properties and relies on stronger assumptions...
According to EN, this is also because second-order logic is entirely human-made. So what is second-order logic?
The question itself is a question of second-order-logic.
If you ask me what first order logic is… The question STILL is a question of second-order-logic.
First-order logic covers things that are clear as day: 1+1, what is x in x+3=4… these types of things.
Honestly, I can’t hold it against anyone who bounces off the piece. It’s long, dense, and, let’s face it — it proposes something intense, even borderline unpalatable at first glance.
If I encountered it cold, I can imagine myself reacting the same way: “This is pseudoscientific nonsense.” Maybe I wouldn’t even finish reading it before forming that judgment.
And that’s kind of the point, or at least part of the irony: the argument deals precisely with the limits of self-modeling systems, and how they generate intuitions (like “of course experience is real”) that feel indubitable because of structural constraints. So naturally, a theory that denies the ground of those intuitions will feel like it’s violating common sense — or worse, wasting your time.
Still, I’d invite anyone curious to read it less as a metaphysical claim and more as a kind of formal diagnosis — not “you’re wrong to believe in qualia,” but “you’re structurally unable to verify them, and that’s why they feel so real.”
If it’s wrong, I want to know how. But if it just feels wrong, that might be a clue that it’s touching the very boundary it’s trying to illuminate.
Is not the good regulator theorem.
The good regulator theorem is “there is a (deterministic) mapping h:S→R
I don’t think this requires embeddedness
You’re absolutely right to point out that the original formulation of the Good Regulator Theorem (Conant & Ashby, 1970) states that:
Strictly speaking, this does not require embeddedness in the physical sense—it is a general result about control systems and model adequacy. The theorem makes no claim that the regulator must be physically located within the system it regulates.
However, in the context of cognitive systems (like the brain) and self-referential agents, I am extending the logic and implications of the theorem beyond its original formulation, in a way that remains consistent with its spirit.
When the regulator is part of the system it regulates (i.e., is embedded or self-referential)—as is the case with the human brain modeling itself—then the mapping h: S → R becomes reflexive. The regulator must model not only the external system but itself as a subsystem.
This recursive modeling introduces self-reference and semantic closure, which—when the system is sufficiently expressive (as in symbolic thought)—leads directly to Gödelian incompleteness. That is, no such regulator can fully model or verify all truths about itself while remaining consistent.
So while the original theorem only requires that a good regulator be a model, I am exploring what happens when the regulator models itself, and how that logically leads to structural incompleteness, subjective illusions, and the emergence of unprovable constructs like qualia.
Yes, you’re absolutely right to point out that this raises an important issue — one that must be addressed, and yet cannot be resolved in the conventional sense. But this is not a weakness in the argument; in fact, it is precisely the point.
This isn’t just a practical limitation. It’s a structural impossibility.
So when we extend the Good Regulator Theorem to embedded regulators — like the brain modeling itself — we don’t just encounter technical difficulty, we hit the formal boundary of self-representation. No system can fully model its own structure and remain both consistent and complete.
But you must ask yourself: Would it be a worse regulator? Definitely not.
The question now is, who will be the second g-Zombie?
I claim that I exist, and that I am now going to type the next words of my response. Both of those certainly look true. As for whether these beliefs are provable, I do not particularly care; instead, I invoke the nameless:
My black-box functions yield a statement “I exist” as true or very probable, and they are also correct in that.
After all, If I exist, I do not want to deny my existence. If I don’t exist… well let’s go with the litany anyways… I want to accept I don’t exist. Let me not be attached to beliefs I may not want.
Again, read the G-Zombie Argument carefully. You cannot deny your existence.
Here is the original argument, more formally… (But there is a more formal version)
https://www.lesswrong.com/posts/qBbj6C6sKHnQfbmgY/i-g-zombie
If you deny your existence… and you don’t exist… AHA! Well, then we have a complete system. Which is impossible.
But since nobody is reading the paper fully, and everyone makes loudmouth assumptions about what I want to show with EN...
The G-Zombie is not the P-Zombie argument, but a far more abstract formulation. But these idiots don’t get it.