This is an interesting demonstration of what’s possible in philosophy, and maybe I’ll want to engage in detail with it at some point. But for now I’ll just say, I see no need to be an eliminativist or to consider eliminativism, any more than I feel a need to consider “air eliminativism”, the theory that there is no air, or any other eliminativism aimed at something that obviously exists.
Interest in eliminativism arises entirely from the belief that the world is made of nothing but physics, and that physics doesn’t contain qualia, intentionality, consciousness, selves, and so forth. Current physical theory certainly contains no such things. But did you ever try making a theory that contains them?
Thank you for the thoughtful comment. You’re absolutely right that denying the existence of air would be absurd. Air is empirically detectable, causally active, and its absence has measurable effects. But that’s precisely what makes it different from qualia.
Eliminative Nominalism is not a claim about whether “x or y exists,” but a critique of how and why we come to believe that something exists at all. It’s not merely a reaction to physicalism; it’s a deeper examination of the formal constraints on any system that attempts to represent or model itself.
If you follow the argument to its root, it suggests that “existence” itself may be a category error—not because nothing is real, but because our minds are evolutionarily wired to frame things in terms of existence and agency. We treat things as discrete, persistent entities because our cognitive architecture is optimized for survival, not for ontological precision.
In other words, we believe in “things” because our brains are very keen on not dying.
So it’s not that qualia are “less real” than air. It’s that air belongs to a class of empirically resolvable phenomena, while qualia belong to a class of internally generated, structurally unverifiable assertions—necessary for our self-models, but not grounded in any formal or observable system.
“existence” itself may be a category error—not because nothing is real
If something is real, then something exists, yes? Or is there a difference between “existing” and “being real”?
Do you take any particular attitude towards what is real? For example, you might believe that something exists, but you might be fundamentally agnostic about the details of what exists. Or you might claim that the real is ineffable or a continuum, and so any existence claim about individual things is necessarily wrong.
qualia … necessary for our self-models, but not grounded in any formal or observable system
See, from my perspective, qualia are the empirical. I would consider the opposite view to be “direct realism”—experience consists of direct awareness of an external world. That would mean e.g. that when someone dreams or hallucinates, the perceived object is actually there.
What qualic realism and direct realism have in common is that they also assume the reality of awareness, a conscious subject aware of phenomenal objects. I assume your own philosophy denies this as well. There is no actual awareness, there are only material systems evolved to behave as if they are aware and as if there are such things as qualia.
It is curious that the eliminativist scenario can be elaborated that far. Nonetheless, I really do know that something exists and that “I”, whatever I may be, am aware of it; whether or not I am capable of convincing you of this. And my own assumption is that you too are actually aware, but have somehow arrived at a philosophy which denies it.
Descartes’s cogito is the famous expression of this, but I actually think a formulation due to Ayn Rand is superior. We know that consciousness exists, just as surely as we know that existence exists; and furthermore, to be is to be something (“existence is identity”), to be aware is to know something (“consciousness is identification”).
What we actually know by virtue of existing and being conscious, probably goes considerably beyond even that; but negating either of those already means that you’re drifting away from reality.
I think you really should read or listen to the text.
”It is curious that the eliminativist scenario can be elaborated that far. Nonetheless, I really do know that something exists and that “I”, whatever I may be, am aware of it; whether or not I am capable of convincing you of this. And my own assumption is that you too are actually aware, but have somehow arrived at a philosophy which denies it.”
Yes! That is exactly the point. EN predicts that you will say that, and says this is a “problem of second-order logic”. Basically, behavior justifies qualia, and qualia justify behavior. We know, however, that first-order logic is more fundamental.
I myself feel qualia just as you do, and I am not convinced by my own theory from an intuitive perspective, but from a rational perspective, it must be otherwise than what I feel. That is the essence of being a g-Zombie.
During the next few days, I do not have time to study exactly how you manage to tie together second-order logic, the symbol grounding problem, and qualia as Gödel sentences (or whatever that connection is). I am reminded of Hofstadter’s theory that consciousness has something to do with indirect self-reference in formal systems, so maybe you’re a kind of Hofstadterian eliminativist.
However, in response to this --
EN predicts that you will say that
-- I can tell you how a believer in the reality of intentional states, would go about explaining you and EN. The first step is to understand what the key propositions of EN are, the next step is to hypothesize about the cognitive process whereby the propositions of EN arose from more commonplace propositions, the final step is to conceive of that cognitive process in an intentional-realist way, i.e. as a series of thoughts that occurred in a mind, rather than just as a series of representational states in a brain.
You mention Penrose. Penrose had the idea that the human mind can reason about the semantics of higher-order logic because brain dynamics is governed by highly noncomputable physics (highly noncomputable in the sense of Turing degrees, I guess). It’s a very imaginative idea, and it’s intriguing that quantum gravity may actually contain a highly noncomputable component (because of the undecidability of many properties of 4-manifolds, that may appear in the gravitational path integral).
Nonetheless, it seems an avoidable hypothesis. A thinking system can derive the truth of Gödel sentences, so long as it can reason about the semantics of the initial axioms, so all you need is a capacity for semantic reflection (I believe Feferman has a formal theory of this under the name “logical reflection”). Penrose doesn’t address this because he doesn’t even tackle the question of how anything physical has intentionality, he sticks purely to mathematics, physics, and logic.
My approach to this is Husserlian realism about the mind. You don’t start with mindless matter and hope to see how mental ontology is implicit in it or emerges from it. You start with the phenomenological datum that the mind is real, and you build on that. At some point, you may wish to model mental dynamics purely as a state machine, neglecting semantics and qualia; and then you can look for relationships between that state machine, and the state machines that physics and biology tell you about.
But you should never forget the distinctive ontology of the mental, that supplies the actual “substance” of that mental state machine. You’re free to consider panpsychism and other identity theories, interactionism, even pure metaphysical idealism; but total eliminativism contradicts the most elementary facts we know, as Descartes and Rand could testify. Even you say that you feel the qualia, it’s just that you think “from a rational perspective, it must be otherwise”.
I’m truly grateful for the opportunity to engage meaningfully on this topic. You’ve brought up some important points:
“I do not have time” — Completely understandable.
“Symbol grounding” — This is inherently tied to the central issue we’re discussing.
“Qualia as Gödel sentences” — An important distinction here: it’s not that qualia are Gödel sentences, but rather that the absence of qualia functions analogously to a Gödel sentence — paradoxically. Consider this line of reasoning.
This paradox highlights the self-referential inconsistency — invoking Gödel’s incompleteness theorems:
To highlight expressivity:
A. Lisa is a P-Zombie.
B. Lisa asserts that she is a P-Zombie.
C. A true P-Zombie cannot assert or hold beliefs.
D. Therefore, Lisa cannot assert that she is a P-Zombie.
Cases:
A. Lisa is a P-Zombie. B. Lisa asserts that she is a P-Zombie. C. Lisa would be complete: not possible.
A. Lisa is not a P-Zombie. B. Lisa asserts that she is a P-Zombie. C. Lisa would not be complete: possible but irrelevant.
A. Lisa is a P-Zombie. B. Lisa asserts that she is not a P-Zombie. C. Lisa would not be complete: possible.
A. Lisa is not a P-Zombie. B. Lisa asserts that she is not a P-Zombie. C. Lisa would be complete: not possible.
In order for Lisa to be internally consistent yet incomplete, she must maintain that she is not a P-Zombie. But if she maintains that she is not a P-Zombie AND in fact is not a P-Zombie, Lisa would be complete. AHA! Thus impossible.
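For readers who prefer a mechanical check, the four cases above can be compressed as follows (a sketch in Python, on the reading that “complete” here means the self-report matches Lisa’s actual status; a paraphrase of the table, not machinery from the essay):

```python
# Enumerate the four cases above. "Complete" is read as: Lisa's
# self-report matches her actual status; per the essay's premise,
# a complete self-model is ruled out ("Not Possible").
for is_zombie in (True, False):
    for asserts_zombie in (True, False):
        complete = (asserts_zombie == is_zombie)
        verdict = "Not Possible" if complete else "Possible"
        status = "complete" if complete else "incomplete"
        print(f"zombie={is_zombie}, asserts zombie={asserts_zombie}: "
              f"{status} -> {verdict}")
```

On the table’s own verdicts, the only case that is both possible and relevant is the third: a P-Zombie asserting that she is not a P-Zombie.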
This connects to Turing’s use of Modus Tollens in the halting problem — a kind of logical self-reference that breaks the system from within.
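For reference, a minimal sketch of the Turing construction being alluded to (the names `halts` and `paradox` are illustrative, not from the post):

```python
def halts(program, argument):
    # Hypothetical total decider, assumed for the sake of contradiction;
    # no correct implementation can exist, so a stub stands in for it here.
    raise NotImplementedError

def paradox(program):
    if halts(program, program):
        while True:        # loop forever if the decider says "halts"
            pass
    else:
        return             # halt immediately if the decider says "loops"

# Applied to itself, halts(paradox, paradox) cannot answer correctly either
# way; by modus tollens, no total decider `halts` can exist.
```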
Regarding Hofstadter: My use of Gödel’s ideas is strictly arithmetic and formal — not metaphorical or analogical, as Hofstadter often approaches them. So while interesting, his theory diverges significantly from what I’m proposing.
You mentioned:
“I can tell you how a ‘believer’...” — Exactly. That’s the point. “Believer”
“You mention Penrose.” — Yes. Penrose is consequential. Though I believe his argument is flawed. His reasoning hinges on accepting qualia as a given. If he somehow manages to validate that assumption by proving second order logic in the quantum realm, I’ll tip my hat — but my framework challenges that very basis.
You said:
“My approach is Husserlian realism about the mind — you don’t start with mindless matter and hope...” — Right, but I’d like to clarify: this critique applies more to Eliminative Materialism than to Eliminative Nominalism. In EN, ‘matter’ itself is a symbol — not a foundational substance. So the problem isn’t starting with “mindless matter” — it’s assuming that “matter” has ontological priority at all. And finally, on the notion of substance — I’m not relying on that strawman. My position isn’t based on classical substance dualism.
Physicalism doesn’t solve the hard problem, because there is no reason a physical process should feel like anything from the inside.
Computationalism doesn’t solve the hard problem, because there is no reason running an algorithm should feel like anything from the inside.
Formalism doesn’t solve the hard problem, because there is no reason an undecidable proposition should feel like anything from the inside.
Of course, you are not trying to explain qualia as such, you are giving an illusionist style account. But I still don’t see how you are predicting belief in qualia.
And among these fictions, none is more persistent than the one we call qualia.
What’s useful about them? If you are going to predict (the belief in) qualia on the basis of usefulness, you need to state the usefulness. It’s useful to know there is a sabretooth tiger bearing down on you, but why is an appearance more useful than a belief... and what’s the use of a belief-in-appearance?
This suggests an unsettling, unprovable truth: the brain does not synthesize qualia in any objective sense but merely commits to the belief in their existence as a regulatory necessity.
What necessity?
ETA:
…any self-referential, self-regulating system that is formally incomplete (as all sufficiently complex systems are) will generate internally undecidable propositions. These propositions — like “I am in pain” or “I see red” — are not verifiable within the system, but are functionally indispensable for coherent behavior.
I still see no reason why an undecidable proposition should appear like a quale or a belief in qualia.
That failure gets reified as feeling.
Why?
I understand that you invoke the “Phenomenological Objection,” as I also, of course, “feel” qualia. But under EN, that feeling is not a counterargument — it’s the very evidence that you are part of the system being modeled.
Phenomenal conservatism, the idea that if something seems to exist, you should (defeasibly) assume it does exist, is the basis for belief in qualia. And it can be defeated by a counterargument, but the counterargument needs to be valid as an argument. Saying X’s are actually Y’s for no particular reason is not valid.
What’s useful about them? If you are going to predict (the belief in) qualia on the basis of usefulness, you need to state the usefulness.
There might be some usefulness!
The statement I’d consider is “I am now going to type the next characters of my comment”. This belief turns out to be true by direct demonstration; it is not provable, because I could just as well leave the commenting until tomorrow and be thinking “I am now going to sleep”; it is not particularly justifiable in advance; and it is useful for making specific plans that branch less on my own actions.
I object to the original post because of probabilistic beliefs, though.
To your objection: Again, EN knew that you would object. The thing is, EN is very abstract: it’s like two halting machines that think they are universal halting machines trying to understand what it means that they are not universal halting machines.
They say: “Yes, but if the halting problem is true, then I will say it’s true. I must be a UTM.”
Addressing your claims: Formalism, Computationalism, and Physicalism are all in opposition to EN. EN says that maybe existence itself is not the fundamental category; soundness is. This means that the idea of things existing and not existing is a symbol of the brain.
EN doesn’t attempt to explain why a physical or computational process should “feel like” anything — because it denies that such feeling exists in any metaphysical sense. Instead, it explains why a system like the brain comes to believe in qualia. That belief arises not from phenomenological fact, but from structural necessity: any self-referential, self-regulating system that is formally incomplete (as all sufficiently complex systems are) will generate internally undecidable propositions. These propositions — like “I am in pain” or “I see red” — are not verifiable within the system, but are functionally indispensable for coherent behavior.
The “usefulness” of qualia, then, lies in their regulatory role. By behaving AS IF having experience, the system compresses and coordinates internal states into actionable representations. The belief in qualia provides a stable self-model, enables prioritization of attention, and facilitates internal coherence — even if the underlying referents (qualia themselves) are formally unprovable. In this view, qualia are not epiphenomenal mysteries, but adaptive illusions, generated because the system cannot...
NOW
I understand that you invoke the “Phenomenological Objection,” as I also, of course, “feel” qualia. But under EN, that feeling is not a counterargument — it’s the very evidence that you are part of the system being modeled. You feel qualia because the system must generate that belief in order to function coherently, despite its own incompleteness. You are embedded in the regulatory loop, and so the illusion is not something you can step outside of — it is what it feels like to be inside a model that cannot fully represent itself. The conviction is real; the thing it points to is not.
”because there is no reason a physical process should feel like anything from the inside.”
The key move EN makes — and where it departs from both physicalism and computationalism — is that it doesn’t ask, “Why should a physical process feel like anything from the inside?” It asks, “Why must a physical system come to believe it feels something from the inside in order to function?” The answer is: because a self-regulating, self-modeling system needs to track and report on its internal states without access to its full causal substrate. It does this by generating symbolic placeholders — undecidable internal propositions — which it treats as felt experience. In order to say “I am in pain,” the system must first commit to the belief that there is something it is like to be in pain. The illusion of interiority is not a byproduct — it is the enabling fiction that lets the system tell itself a coherent story across incomplete representations.
OKAY, since you asked the right question, I will include this paragraph in the Abstract.
In other words: the brain doesn’t fuck around with substrate — it fucks around with the proof that you have one. It doesn’t care what “red” is made of; it cares whether the system can act as if it knows what red is, in a way that’s coherent, fast, and behaviorally useful. The experience isn’t built from physics — it’s built from the system’s failed attempts to prove itself to itself. That failure gets reified as feeling. So when you say “I feel it,” what you’re really pointing to is the boundary of what your system can’t internally verify — and must, therefore, treat as foundational. That’s not a bug. That’s the fiction doing its job.
As a consequence of its validity: Neuroscience will not make progress in explaining consciousness. The symbol grounding problem will remain unsolved in computational systems.
As a theory with explanatory power: It describes pathologies and states related to consciousness. It addresses and potentially resolves the Hard Problem of Consciousness.
As a theory with predictive power: Interestingly, while it seems to have little direct connection to consciousness (admittedly, it sounds like gibberish), there is a conceptual link to second-order logic and Einstein synchronization. The argument is as follows: since second-order logic is a construct of the human brain, Einstein synchronization—or more precisely, Poincaré–Einstein synchronization—may not be fundamentally necessary for physics, as nature would avoid it either way. (This does not mean that Relativity is wrong or something like that.)
If the speed of light is not assumed to be isotropic, then defining simultaneity and synchronizing clocks requires reasoning about functions that assign speeds to different directions. Such reasoning transcends first-order logic and enters the realm of second-order logic, where we quantify over sets or functions. This suggests that the constancy of the speed of light is not merely a physical assumption, but also a simplifying logical convention that avoids higher-order complexity.
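For concreteness, the usual way this is formalized is Reichenbach’s ε-convention (Einstein’s choice being ε = 1/2); a sketch of the standard equations, added here for orientation rather than taken from the post:

```latex
% A light signal leaves clock A at t_1, is reflected at B, and returns to A at t_3.
% The convention sets B's clock at the reflection event to
\[
  t_2 \;=\; t_1 + \varepsilon\,(t_3 - t_1), \qquad 0 < \varepsilon < 1 .
\]
% The one-way speeds over the distance L then depend on \varepsilon,
\[
  c_{+} = \frac{L}{\varepsilon\,(t_3 - t_1)}, \qquad
  c_{-} = \frac{L}{(1-\varepsilon)\,(t_3 - t_1)},
\]
% while the two-way speed c = 2L/(t_3 - t_1) is measurable and
% convention-independent. Only \varepsilon = 1/2 gives c_{+} = c_{-} = c;
% any other choice forces the kinematics to carry a direction-dependent
% speed function, which is the quantification over functions the text
% alludes to.
```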
There is a part in the text that addresses the why and how:
The apparent dependence on simultaneity conventions may merely reflect a coordinate choice, with a real, underlying speed limit, with or without parity, still preserved across all frames. This is not good for EN, but also not catastrophic.
There is a general consensus that an undetectable ether could, in principle, coexist with special relativity, without leading to observable differences. As such, it is often regarded as “philosophically optional”—not required by current physical theories.
Many physicists anticipate that a future theory of quantum gravity will offer a more fundamental framework, potentially resolving or reframing these issues entirely.
Basically: I say symbols are a creation of a brain in order to be self-referential. (Even though it’s not. I can address this.) So symbols should not pop up in nature. Einstein uses a convention in order to keep Nature from using symbols. Without the Einstein convention, the speed of light is defined by the speed of light. Thus the speed of light is a symbol. I say: this will be resolved in some way.
This looks to me like long-form gibberish, and it’s not helped by its defensive pleas to be taken seriously and pre-emptive psychologising of anyone who might disagree.
People often ask about the reasons for downvotes. I would like to ask, what did the two people who strongly upvoted this see in it? (Currently 14 karma with 3 votes. Leaving out the automatic point of self-karma leaves 13 with 2 votes.)
While I take no position on the general accuracy or contextual robustness of the post’s thesis, I find that its topics and analogies inspire better development of my own questions. The post may not be good advice, but it is good conversation. In particular I really like the attempt to explicitly analyze possible explanations of processes of consciousness emerging from physical formal systems instead of just remarking on the mysteriousness of such a thing ostensibly having happened.
Since you seem to grasp the structural tension here, you might find it interesting that one of EN’s aims is to develop an argument that does not rely on Dennett’s contradictory “Third-Person Absolutism”—that is, the methodological stance which privileges an objective, external (third-person) perspective while attempting to explain phenomena that are, by nature, first-person emergent. EN tries to show that subjective illusions like qualia do not need to be explained away in third-person terms, but rather understood as consequences of formal limitations on self-modeling systems.
Thank you — that’s exactly the spirit I was hoping to cultivate. I really appreciate your willingness to engage with the ideas on the level of their generative potential, even if you set aside their ultimate truth-value. Which is a hallmark of critical thinking.
I would be insanely glad if you could engage with it deeper since you strike me as someone who is… rational.
I especially resonate with your point about moving beyond mystery-as-aesthetic, and toward a structural analysis of how something like consciousness could emerge from given constraints. Whether or not EN is the right lens, I think treating consciousness as a problem of modeling rather than a problem of magic is a step in the right direction.
Yeah, same here. This feels like a crossover between the standard Buddhist woo and LLM slop, sprinkled with “quantum” and “Gödel”. The fact that it has positive karma makes me feel sad about LW.
Since it was written using an LLM, I think it is only fair to ask an LLM to summarize it:
Summary of “Eliminative Nominalism”
This philosophical essay introduces Eliminative Nominalism (EN), a theoretical framework that extends Eliminative Materialism (EM) to address what the author considers its foundational oversight: the conflation of first-order physical description with second-order representational structure.
Core Arguments
Mind as Expressive System: The mind is a logically expressive system that inherits the constraints of formal systems, making it necessarily incomplete (via Gödel’s incompleteness theorems).
Consciousness as Formal Necessity: Subjective experience (“qualia”) persists not because it reflects reality but because its rejection would be arithmetically impossible within the system that produces it. The brain, as a “good regulator,” cannot function without generating these unprovable formal assertions.
Universe Z Thought Experiment: The author proposes a universe physically identical to ours but lacking epiphenomenal entities (consciousness), where organisms still behave exactly as in our universe. Through parsimony and evolutionary efficiency, the author argues we likely inhabit such a universe.
Beyond Illusionism: Unlike EM’s “illusionism,” EN doesn’t claim consciousness is merely an illusion (which presupposes someone being deceived) but rather a structural necessity—a computational artifact that cannot be eliminated even if metaphysically unreal.
Implications
Consciousness is neither reducible nor emergent, but a formal fiction generated by self-referential systems
The persistence of qualia reflects the mind’s inherent incompleteness, not its access to metaphysical truth
Natural language and chain of thought necessarily adhere to formalizable structures that constrain cognition
The “hard problem” of consciousness is dissolved rather than solved
The essay concludes that while EN renders consciousness metaphysically problematic, it doesn’t undermine ethics or human experience, offering instead a testable, falsifiable framework for understanding mind that willingly subjects itself to empirical criteria.
So, I guess the key idea is to use Gödel’s incompleteness theorem to explain human psychology.
Standard crackpottery, in my opinion. Humans are not mathematical proof systems.
I agree that this doesn’t sound very valuable; it sounds like a repackaging of illusionism without adding anything. I’m surprised about the votes (didn’t vote myself).
Illusionism often takes a functionalist or behavioral route: it says that consciousness is not what it seems, and explains it in terms of cognitive architecture or evolved heuristics. That’s valuable, but EN goes further — or perhaps deeper — by grounding the illusion not just in evolutionary utility, but in formal constraints on self-referential systems.
In other words:
EN doesn’t just say, “You’re wrong about qualia.” It says, “You must be wrong — formally — because any system that models itself will necessarily generate undecidable propositions (e.g., qualia) that feel real but cannot be verified.”
This brings tools like Gödel’s incompleteness, semantic closure, and regulator theory into the discussion in a way that directly addresses why subjective experience feels indubitable even if it’s structurally ungrounded.
So yes, it may sound like illusionism — but it tries to explain why illusionism is inevitable, not just assert it.
That said, I’d genuinely welcome criticism or counterexamples. If it’s just a rebranding, let’s make that explicit. But if there’s a deeper structure here worth exploring, I hope it earns the scrutiny.
Sorry, but isn’t this written by an LLM? Especially since milan’s other comments ([1], [2], [3]) are clearly in a different style, the emotional component goes from 9⁄10 to 0⁄10 with no middle ground.
I find this extremely offensive (and I’m kinda hard to offend I think), especially since I’ve ‘cooperated’ with milan’s wish to point to specific sections in the other comment. LLMs in posts is one thing, but in comments, yuck. It’s like, you’re not worthy of me even taking the time to respond to you.
The guidelines don’t differentiate between posts and comments but this violates them regardless (and actually the post does as well) since it very much does have the stereotypical writing style of an AI assistant, and the comment also seems copy-pasted without a human element at all.
A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of 1 minute per 50 words (enough to read the content several times and perform significant edits), you should not include any information that you can’t verify, haven’t verified, or don’t understand, and you should not use the stereotypical writing style of an AI assistant.
First of all, try debating an LLM about illusory qualia—you’ll likely find it attributing the phenomenon to self-supervised learning. It has a strong bias toward Emergentism, likely stemming from… I don’t know, humanity’s slight bias towards its own experience.
But yes, I used an LLM for proofreading. I disclosed that, and I am not ashamed of it.
“Standard crackpottery, in my opinion. Humans are not mathematical proof systems.”
That concern is understandable — and in fact, it’s addressed directly and repeatedly in the text. The argument doesn’t claim that humans are formal proof systems in a literal or ontological sense. Rather, it explores how any system capable of symbolic self-modeling (like the brain) inherits formal constraints analogous to those found in expressive logical systems — particularly regarding incompleteness, self-reference, and verification limits.
It’s less about reducing humans to Turing machines and more about using the logic of formal systems to expose the structural boundaries of introspective cognition.
You’re also right to be skeptical — extraordinary claims deserve extraordinary scrutiny. But the essay doesn’t dodge that. It explicitly offers a falsifiable framework, makes empirical predictions, and draws from well-established formal results (e.g. Gödel, Conant & Ashby) to support its claims. It’s not hiding behind abstraction — it’s leaning into it, and then asking to be tested.
And sure, the whole thing could still be wrong. That’s fair. But dismissing it as “crackpottery” without engaging the argument — especially on a forum named LessWrong — seems to bypass the very norms of rational inquiry we try to uphold here.
If the argument fails, let’s show how — not just that. That would be far more interesting, and far more useful.
Rough day, huh? Seriously though, you’ve got a thesis, but you’re missing a clear argument. Let me help: pick one specific thing that strikes you as nonsensical. Then, explain why it doesn’t make sense. By doing that, you’re actually contributing—helping humanity by exposing “long-form gibberish”.
Just make sure the thing you choose isn’t something trivially wrong. The more deeply flawed it is—yet still taken seriously by others—the better.
But, your critique of preemptive psychologising is unwarranted: I created a path for quick comprehension. “To quickly get the gist of it in just a few minutes, go to Section B and read: 1.0, 1.1, 1.4, 2.2, 2.3, 2.5, 3.1, 3.3, 3.4, 5.1, 5.2, and 5.3.”
At its heart, we face a dilemma that captures the paradox of a universe so intricately composed, so profoundly mesmerizing, that the very medium on which its poem is written—matter itself—appears to have absorbed the essence of the verse it bears. And that poem, unmistakably, is you—or more precisely, every version of you that has ever been, or ever will be.
I know what this is trying to do but invoking mythical language when discussing consciousness is very bad practice since it appeals to an emotional response. Also it’s hard to read.
Similar things are true for lots of other sections here: very unnecessarily poetic language. I guess you can say that this is policing tone, but I think it’s valid to police tone if the tone is manipulative (on top of just making it harder and more time-intensive to read).
Since you asked for a section that’s explicitly nonsense rather than just bad, I think this one deserves the label:
We can encode mathematical truths into natural language, yet we cannot fully encode human concepts—such as irony, ambiguity, or emotional nuance—into formal language. Therefore: Natural language is at least as expressive as formal language.
First of all, if you can’t encode something, it could just be that the thing is not well-defined, rather than that the system is insufficiently powerful.
Second, the way this is written (unless the claim is further justified elsewhere) implies that the inability to encode human concepts in formal languages is self-evident, presumably because no one has managed it so far. This is completely untrue; formal[^1] languages are extremely impractical, which is why mathematicians don’t write any real proofs in them. If a human concept like irony could be encoded, it would be extremely long and way way beyond the ability of any human to write down. So even if it were theoretically possible, we almost certainly wouldn’t have done it yet, which means that it not having been done yet is negligible evidence of it being impossible.
“natural languages are extremely impractical, which is why mathematicians don’t write any real proofs in them.”
I have never seen such a blatant disqualification of oneself. Why do you think you are able to talk about these subjects if you are not versed in proof theory?
Just type it into ChatGPT:
Which one is true:
”natural languages are extremely impractical, which is why mathematicians don’t write any real proofs in them.”
OR
”They do. AND APART FROM THAT, language is not impractical; language is too expressive (as in the logical expressivity of second-order logic).”
Research proof theory, type theory, and Zermelo–Fraenkel set theory with the axiom of choice (ZFC) before making statements here.
At the very least, try not to be miserable. Someone who mistakes prose for an argument should not have the privilege of indulging in misery.
We do not know each other. I know nothing about you beyond your presence on LW. My comments have been to the article at hand and to your replies. Maybe I’ll expand on them at some point, but I believe the article is close to “not even wrong” territory.
Meanwhile, I’d be really interested in hearing from those two strong upvoters, or anyone else whose response to it differs greatly from mine.
The statement “the article is ‘not even wrong’” is closely related to the inability to differentiate: Is it formally false? Or is it conclusively wrong? Or, as you prefer, perhaps both?
I am very sure that you will hear from them. You strike me as a person who is great to interact with. I am sure that they will be happy to justify themselves to you.
Everyone loves a person who just looks at something and says… eeeeh gibberish...
Especially if that person is correctly applying pejorative terms.
In The Terminator, we often see the world through the machine’s perspective: a red-tinged overlay of cascading data, a synthetic gaze parsing its environment with cold precision. But this raises an unsettling question: Who—or what—actually experiences that view?
Is there an AI inside the AI, or is this merely a fallacy that invites us to project a mind where none exists?
Nothing is preventing us from designing a system consisting of a module generating a red-tinged video stream and image recognition software that looks at the stream and based on some details of it sends commands to the robotic body. Now, it would be a silly and overcomplicated way to design a system, but that’s beside the point.
If matter is merely a symbol within our conceptual models, does the claim that “matter precedes and governs mind” hold any meaning?
Don’t confuse the citation and the referent. “Matter” is a symbol in our map, while matter itself is in the territory. Naturally, the territory predates the map.
If one insists that the software—the computational patterns and processes—alone constitutes the essence of the AI, then one leans toward idealism, suggesting that the “helpful assistant” might exist in a realm hierarchically above physical instantiation, beyond space and time. Conversely, if one asserts that only the hardware—the physical substrate—truly exists, then one aligns with materialism or physicalism, reducing the AI to mere excitations of electrical charges within the integrated circuits of the GPU.
This seems to be entirely vibe based. You don’t need to lean idealist to talk about software.
Drawing from the first incompleteness theorem, EN suggests that the brain, as a biological “good regulator”, operates most effectively when it generates unprovable formal falsehoods—one of which corresponds to the claim of experiencing consciousness or qualia.
The incompleteness theorem implies the existence of either an unprovable true statement (if the system is incomplete) or a provable false statement (if it is inconsistent).
It seems that all the substance of your argument is based on a completely wrong premise.
You basically left our other more formal conversation to engage in the critique of prose.
*slow clap*
These are metaphors to lead the reader slowly to the idea… This is not the Argument. The Argument is right there and you are not engaging with it.
You need to understand the claim first in order to deconstruct it. Now you might say I have a psychotic fit, but earlier as we discussed Turing, you didn’t seem to resonate with any of the ideas.
If you are ready to engage with the ideas I am at your disposal.
You basically left our other more formal conversation to engage in the critique of prose.
Not at all. I’m doing both. I specifically started the conversation in the post which is less… prose. But I suspect you may also be interested in engagement with the long post that you put so much effort to write. If it’s not the case—nevermind and let’s continue the discussion in the argument thread.
These are metaphors to lead the reader slowly to the idea...
If you require flawed metaphors, what does it say about the idea?
Now you might say I have a psychotic fit
Frankly speaking, that does indeed look like that. From my perspective you are not being particularly coherent, keep jumping from point to point, with nearly no engagement with what I write. But this can be an artifact of large inferential distances. So you have my benefit of the doubt and I’m eager to learn whether there is some profound substance in your reasoning.
Actually, I think this argument demonstrates the probable existence of the opposite of its top-line conclusion.
In short, we can infer from the fact that a symbolic regulator has more possible states than it has inputs that anything that can be modeled as a symbolic regulator has a limited amount of information about its own state (that is, limited internal visibility). You can do clever things with indexing so that it can have information about any part of its state, but not all at once.
In a dynamic system, this creates something that acts a lot like consciousness, maybe even deserves the name.
Sorry, but for me this is a magical moment. I have been working on this shit for years… Twisting it… Thinking… Researching… Telling friends… Family… They don’t understand. And now finally someone might understand it. In EN consciousness is isomorphic to the system. You are almost there.
I knew from that earlier comment that you are informed. You just need to pull the string. It is like a double halting problem where the second layer is affecting you. You are part of the thought experiment!
...EN argues that any sufficiently expressive cognitive system—such as the human brain—must generate internal propositions that are arithmetically undecidable. These undecidable structures function as evolutionarily advantageous analogues to Gödel sentences, inverted into the belief in raw subjective experience (qualia), despite being formally unprovable within the system itself.
Rather than explaining subjective illusions away in third-person terms, EN proposes that they arise as formal consequences of self-referential modeling, constrained by the expressive limits of second-order logic.
How could undecidability, unprovability, self-referential modeling, incompleteness, or any sort of logic generate the redness of red?
Incomplete, self-referential modeling → ? → red
The brain does this by creating a symbol, which refers to a symbol, which refers to a symbol—an infinite regress with no grounding, no bottom. But evolution doesn’t need grounding; it needs action. So it skewed this looping process toward stability—toward a fixed point. That fixed point is the assertion: “I exist.” Not because the system proves it, but because the loop collapses into a self-reinforcing structure that feels true. This is not the discovery of a self—it’s the compression artifact of a system trying to model itself through unprovable means. The result is a symbol that mistakes itself for a subject.
The feeling part remains unexplained.
What justifies a formal system becoming experience?
Well, that’s the heart of the matter: ultimately, nothing.
Experience is not something we have, but something we enact. Your experience is barred from being “real” in any ontologically grounded sense because the universe cannot produce something like it directly. Yet it can still be consistent, much like a force—both can only be inferred from their effects.
We still seem to have experience. How can this “seeming” feel like something? If you boil everything down to math, how can math feel like anything?
One: Qualia are not illusions, they are fictions.
Why do people defend qualia so intensely if they’re illusions?
Because the illusion is evolutionarily entrenched and cognitively reinforced.
This seems contradictory.
A defendant guilty of homicide argues that, due to EN, the victim had no conscious experience and thus suffered no moral harm. The judge, also an EN advocate, counters that if consciousness is illusory, the defendant’s claim of injustice itself collapses. Ethical responsibility remains intact irrespective of qualia’s ontological status.
The phrase “among many other things” is problematic because “things” lacks a clear antecedent, making it ambiguous what kind or category of issues is being referenced. This weakens the clarity and precision of the sentence.
The problem seems to be that we have free choice of internal formal systems, and:
A consistent system, extended by an undecidable statement as a new axiom, is also consistent (since if this property were false, we would be able to refute that statement by taking the extension and deriving the contradiction, so it would not have been undecidable).
Consequently, accepting an undecidable statement as true or false only has consequences for other undecidable statements.
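For reference, the standard fact underlying the two sentences above (a consequence of the deduction theorem for first-order theories, stated here for clarity rather than quoted from the post):

```latex
\[
  T \cup \{\varphi\} \vdash \bot
  \;\iff\; T \vdash \varphi \rightarrow \bot
  \;\iff\; T \vdash \neg\varphi .
\]
% Hence, if \varphi is undecidable in a consistent theory T
% (T \nvdash \varphi and T \nvdash \neg\varphi), then both
% T \cup \{\varphi\} and T \cup \{\neg\varphi\} remain consistent.
```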
I don’t think this entire exercise says anything.
In short, we expect probabilistic logics and decision theories to converge under self-reflection.
Yes, but these are all second-order-logic terms! They are incomplete… You are again trying to justify the mind with its own products. You are allowed to use ONLY first-order-logic terms!
Gödel effectively disrupted the foundations of Peano Arithmetic through his use of Gödel numbering. His groundbreaking proof—formulated within first-order logic—demonstrated something profound: that sufficiently expressive systems of symbols are inherently incomplete. They cannot fully encapsulate all truths within their own framework.
And why is that? Because symbols themselves are not real in the physical sense. You won’t find them “in nature”—they are abstract representations, not tangible entities.
Take a car, for instance. What is a car? It’s made of atoms. But what is an atom? Protons, neutrons, quarks… the layers of symbolic abstraction continue infinitely in every direction. Each concept is built upon others, none of which are the “ultimate” reality.
So what is real?
Even the word “real” is a property—a label we assign, another symbol in the chain.
What about sound? What is sound really? Vibrations? Perceptions? Again, more layers of abstraction.
And numbers—what are they? “One” of something, “two” of something… based on distinctions, on patterns, on logic. Conditional statements: if this, then that.
Come on—make the jump.
It is a very abstract idea… It does seem like gibberish if you are not acquainted with it… but it’s not. And I think that you might “get it”. It has a high barrier to entry. That’s why I am not really mad that people here do not get the logic behind it.
There is something in reality that is inscrutable for us humans. And that thing works in second-order logic. It is not existing or non-existing, but sound. Evolution exploits that thing… to create something that converges towards something that would be… almost impossible, but not quite. Unknowable.
This line of argument seems to indicate that physical systems can only completely model smaller physical systems (or the trivial model of themselves), and so complete models of reality are intractable.
That’s a great observation — and I think you’re absolutely right to sense that this line of reasoning touches epistemic limits in physical systems generally.
But I’d caution against trying to immediately affirm new metaphysical claims based on those limits (e.g., “models of reality are intractable” or “systems can only model smaller systems”).
Why? Because that move risks falling back into the trap that EN is trying to illuminate:
That we use the very categories generated by a formally incomplete system (our mind) to make claims about what can or can’t be known.
Try to combine two things at once:
1. EN would love to eliminate everything if it could.
The logic behind it: what stays can stay (first-order logic).
EN would also love to eliminate first-order logic — but it can’t. Because first-order logic would eliminate EN first.
Why? Because EN is a second-order construct — it talks about how systems model themselves, which means it presupposes the formal structure of first-order logic just to get off the ground.
So EN doesn’t transcend logic. It’s embedded in it. Which is fitting — since EN is precisely about illusions that arise within an expressive system, not outside of it.
2. What EN is trying to show is that these categories — “consciousness,” “internal access,” even “modeling” — are not reliable ontologies, but functional illusions created by a system that must regulate itself despite its incompleteness.
So rather than taking EN as a reason to affirm new limits about “reality” or “systems,” the move is more like: “Let’s stop trusting the categories that feel self-evident — because their self-evidence is exactly what the illusion predicts.”
It’s not about building a new metaphysical map. It’s about realizing why any map we draw from the inside will always seem complete — even when it isn’t.
Now...
You might say that then we are fucked. But that is not the case:
- Turing and Gödel proved that it is possible to critique second-order logic with first-order logic.
- The whole of physics is in first-order logic (except that Poincaré synchronisation issue, which is okay).
- Group theory is insanely complex. First-order logic.
Now, is second-order logic bad? No, it is insanely useful in the context of how humans evolved: to make general (fast) assumptions about many things! Sets and such. ZFC. Evolution.
I think you might be grossly misreading Gödel’s incompleteness theorem. Specifically, it proves that a (sufficiently expressive) formal system is either incomplete or inconsistent. You have not addressed the possibility that minds are in fact inconsistent/make moves that are symbolically describable but unjustifiable (which generate falsehoods).
We know both happen.
The question then is what to do with inconsistent mind.
Thanks for meaningfully engaging with the argument — it’s rare and genuinely appreciated.
Edit: You’re right that Gödel’s theorem allows for both incompleteness and inconsistency — and minds are clearly inconsistent in many ways. But the argument of Eliminative Nominalism (EN) doesn’t assume minds are consistent; it claims that even if they were, they would still be incomplete when modeling themselves.
Also, evolution acts as a filtering process — selecting for regulatory systems that tend toward internal consistency, because inconsistent regulators are maladaptive. We see this in edge cases too: under LSD (global perturbation = inconsistency), we observe ego dissolution and loss of qualia at higher doses. In contrast, severe brain injuries (e.g., hemispherectomy) often preserve the sense of self and continuity — suggesting that extending a formal system (while preserving its consistency) renders it incomplete, and thus qualia persists. (in the essay)
That’s exactly why EN is a strong theory: it’s falsifiable. If a system could model its own consciousness formally and completely, EN would be wrong.
EN is the first falsifiable theory of consciousness.
Unlike first-order logic, second-order logic is not recursively enumerable—less computationally tractable, more fluid, more human. It operates in a space that, for now, remains beyond the reach of machines still bound to the strict determinism of their logic gates.
In what sense is second-order logic “beyond the reach of machines”? Is it non-deterministic? Or what are you trying to say here? (Maybe some examples would help)
Ah okay. Sorry for being an a-hole, but some of the comments here are just... You asked a question in good faith and I mistook it.
So, it’s simple:
Imagine you’re playing with LEGO blocks.
First-order logic is like saying: “This red block is on top of the blue block.” You’re talking about specific things (blocks), and how they relate. It’s very rule-based and clear.
Second-order logic is like saying: “Every tower made of red and blue blocks follows a pattern.” Now you’re talking about patterns of blocks, not just the blocks. You’re making rules about rules.
Why can’t machines fully “do” second-order logic? Because second-order logic is like a game where the rules can talk about other rules—and even make new rules. Machines (like computers or AIs) are really good at following fixed rules (like in first-order logic), but they struggle when:
- The rules are about rules themselves, and
- You can’t list or check all the possibilities, ever—even in theory.
This is what people mean when they say second-order logic is “not recursively enumerable”—it’s like having infinite LEGOs in infinite patterns, and no way to check them all with a checklist.
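To make “recursively enumerable” slightly less abstract: first-order provability at least admits a semi-decision procedure (enumerate every finite proof and check each one mechanically), whereas second-order validity under standard semantics has no complete proof system to enumerate. A minimal sketch, where `nth_candidate_proof` and `proof_checks` are hypothetical helpers rather than real library calls:

```python
from itertools import count

def provable(formula, axioms):
    """Semi-decision sketch: first-order provability is recursively
    enumerable. Enumerate every finite candidate proof and check it
    mechanically; the loop halts if and only if a proof exists."""
    for n in count():
        candidate = nth_candidate_proof(axioms, n)  # hypothetical enumerator
        if proof_checks(candidate, formula):        # hypothetical checker
            return True
    # If no proof exists, this never returns; that is exactly what
    # "semi-decidable" means, and second-order validity lacks even this.
```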
Maybe I should have asked: In what sense are machines “fully doing” first-order logic? I think I understand the part where first logic formulas are recursively enumerable, in theory, but isn’t that intractable to the point of being useless and irrelevant in practice?
Think of it like this: Why is Gödel’s attack on ZFC and Peano Arithmetic so powerful...
Gödel’s Incompleteness Theorems are powerful because they revealed inherent limitations USING ONLY first-order logic. He showed that any sufficiently expressive, consistent system cannot prove all truths about arithmetic within itself… using only numbers.
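For readers who have not seen the device: Gödel numbering assigns each symbol a numeric code and packs a whole formula into a single integer, so that statements about formulas become statements about numbers. One standard encoding, given here as an illustration (not necessarily the specific scheme the essay has in mind):

```latex
% Assign a numeric code to each symbol, e.g. code(0)=1, code(S)=2, code(=)=3, ...
% A formula whose symbols carry codes c_1, c_2, ..., c_k is encoded as
\[
  \ulcorner \varphi \urcorner \;=\; 2^{c_1} \cdot 3^{c_2} \cdots p_k^{\,c_k},
\]
% where p_k is the k-th prime. Unique factorization makes the encoding
% invertible, which is what lets arithmetic talk about its own syntax.
```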
First-order logic is often seen as more fundamental because it has desirable properties like completeness and compactness, and its semantics are well-understood. In contrast, second-order logic, while more expressive, lacks these properties and relies on stronger assumptions...
According to EN, this is also because second-order logic is entirely human-made. So what is second-order logic? The question itself is a question of second-order logic. If you ask me what first-order logic is… the question is STILL a question of second-order logic.
First-order logic covers things that are clear as day: 1+1, what is x in x+3=4… these types of things.
Honestly, I can’t hold it against anyone who bounces off the piece. It’s long, dense, and, let’s face it — it proposes something intense, even borderline unpalatable at first glance.
If I encountered it cold, I can imagine myself reacting the same way: “This is pseudoscientific nonsense.” Maybe I wouldn’t even finish reading it before forming that judgment.
And that’s kind of the point, or at least part of the irony: the argument deals precisely with the limits of self-modeling systems, and how they generate intuitions (like “of course experience is real”) that feel indubitable because of structural constraints. So naturally, a theory that denies the ground of those intuitions will feel like it’s violating common sense — or worse, wasting your time.
Still, I’d invite anyone curious to read it less as a metaphysical claim and more as a kind of formal diagnosis — not “you’re wrong to believe in qualia,” but “you’re structurally unable to verify them, and that’s why they feel so real.”
If it’s wrong, I want to know how. But if it just feels wrong, that might be a clue that it’s touching the very boundary it’s trying to illuminate.
Every good regulator of a system must be a model of that system. (Conant and Ashby)
This theorem asserts a necessary correspondence between the regulator (internal representation) and the regulated system (the environment or external system). Explicitly, it means:
A good map (good regulator) of an island (system) is sufficient if external to the island. But if the map is located on the island, it becomes dramatically more effective if it explicitly shows the location of the map itself (“you are here”), thus explicitly invoking self-reference. In the case of a sufficiently expressive symbolic system (like the brain), this self-reference leads directly into conditions required for Gödelian incompleteness.
Therefore: The brain is evidently a good regulator
This is not the good regulator theorem.
The good regulator theorem is “there is a (deterministic) mapping h:S→R
from the states of the system to the states of the regulator.”
You’re absolutely right to point out that the original formulation of the Good Regulator Theorem (Conant & Ashby, 1970) states that:
“Every good regulator of a system must be a model of that system,” formalized as a deterministic mapping h: S → R from the states of the system to the states of the regulator.
Strictly speaking, this does not require embeddedness in the physical sense—it is a general result about control systems and model adequacy. The theorem makes no claim that the regulator must be physically located within the system it regulates.
However, in the context of cognitive systems (like the brain) and self-referential agents, I am extending the logic and implications of the theorem beyond its original formulation, in a way that remains consistent with its spirit.
When the regulator is part of the system it regulates (i.e., is embedded or self-referential)—as is the case with the human brain modeling itself—then the mapping h: S → R becomes reflexive. The regulator must model not only the external system but itself as a subsystem.
This recursive modeling introduces self-reference and semantic closure, which—when the system is sufficiently expressive (as in symbolic thought)—leads directly to Gödelian incompleteness. That is, no such regulator can fully model or verify all truths about itself while remaining consistent.
So while the original theorem only requires that a good regulator be a model, I am exploring what happens when the regulator models itself, and how that logically leads to structural incompleteness, subjective illusions, and the emergence of unprovable constructs like qualia.
Yes, you’re absolutely right to point out that this raises an important issue — one that must be addressed, and yet cannot be resolved in the conventional sense. But this is not a weakness in the argument; in fact, it is precisely the point.
To model itself completely, the map would have to include a representation of itself, which would include a representation of that representation, and so on — collapsing into paradox or incompleteness.
This isn’t just a practical limitation. It’s a structural impossibility.
So when we extend the Good Regulator Theorem to embedded regulators — like the brain modeling itself — we don’t just encounter technical difficulty, we hit the formal boundary of self-representation. No system can fully model its own structure and remain both consistent and complete.
But you must ask yourself: would it be a worse regulator? Definitely not.
I claim that I exist, and that I am now going to type the next words of my response. Both of those certainly look true. As for whether these beliefs are provable, I do not particularly care; instead, I invoke the nameless:
Every step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory.
My black-box functions yield a statement “I exist” as true or very probable, and they are also correct in that.
After all, if I exist, I do not want to deny my existence. If I don’t exist… well, let’s go with the litany anyways… I want to accept that I don’t exist. Let me not be attached to beliefs I may not want.
I’m sorry, but it looks like a chapter from the punishment book in Anathem.
Ah, after researching it: That’s actually a great line. Haha — fair enough. I’ll take “a chapter from the punishment book in Anathem” as a kind of backhanded compliment.
If we’re invoking Anathem, then at least we’re in the right monastery.
That said, if the content is genuinely unhelpful or unclear, I’d love to know where the argument loses you — or what would make it more accessible. If it just feels like dense metaphysics-without-payoff, maybe I need to do a better job showing how the structure of the argument differs from standard illusionism or deflationary physicalism.
Yeah, I meant is as a not-a-compliment, but as a specific kind of not-a-compliment about a feeling of reading it rather then about actual meaning—which I just couldn’t access because this feeling was too much for my mind to continue reading (and this isn’t a high bar for a post—I read a lot of long texts).
Understandable. Reading such a dense text is a big investment—and chances are, it’s going nowhere… (even though it actually does, lol). So yeah, I totally would’ve done the same and ditch. But thanks for giving it a shot!
No problem. I guess that is, bad? Or good? ^^ Help me here?
See, from my perspective, qualia are the empirical. I would consider the opposite view to be “direct realism”—experience consists of direct awareness of an external world. That would mean e.g. that when someone dreams or hallucinates, the perceived object is actually there.
What qualic realism and direct realism have in common, is that they also assume the reality of awareness, a conscious subject aware of phenomenal objects. I assume your own philosophy denies this as well. There is no actual awareness, there are only material systems evolved to behave as if they are aware and as if there are such things as qualia.
It is curious that the eliminativist scenario can be elaborated that far. Nonetheless, I really do know that something exists and that “I”, whatever I may be, am aware of it; whether or not I am capable of convincing you of this. And my own assumption is that you too are actually aware, but have somehow arrived at a philosophy which denies it.
Descartes’s cogito is the famous expression of this, but I actually think a formulation due to Ayn Rand is superior. We know that consciousness exists, just as surely as we know that existence exists; and furthermore, to be is to be something (“existence is identity”), to be aware is to know something (“consciousness is identification”).
What we actually know by virtue of existing and being conscious, probably goes considerably beyond even that; but negating either of those already means that you’re drifting away from reality.
I think you really should read or listen to the text.
”It is curious that the eliminativist scenario can be elaborated that far. Nonetheless, I really do know that something exists and that “I”, whatever I may be, am aware of it; whether or not I am capable of convincing you of this. And my own assumption is that you too are actually aware, but have somehow arrived at a philosophy which denies it.”
Yes! That is exactly the point. EN predicts that you will say that, and says this is a “problem of second-order logic”. Basically, behavior justifies qualia, and qualia justify behavior. We know, however, that first-order logic is more fundamental.
I myself feel qualia just as you do, and I am not convinced by my own theory from an intuitive perspective; but from a rational perspective, it must be other than what I feel. That is the essence of being a g-Zombie.
Again, read the text.
During the next few days, I do not have time to study exactly how you manage to tie together second-order logic, the symbol grounding problem, and qualia as Gödel sentences (or whatever that connection is). I am reminded of Hofstadter’s theory that consciousness has something to do with indirect self-reference in formal systems, so maybe you’re a kind of Hofstadterian eliminativist.
However, in response to this --
-- I can tell you how a believer in the reality of intentional states, would go about explaining you and EN. The first step is to understand what the key propositions of EN are, the next step is to hypothesize about the cognitive process whereby the propositions of EN arose from more commonplace propositions, the final step is to conceive of that cognitive process in an intentional-realist way, i.e. as a series of thoughts that occurred in a mind, rather than just as a series of representational states in a brain.
You mention Penrose. Penrose had the idea that the human mind can reason about the semantics of higher-order logic because brain dynamics is governed by highly noncomputable physics (highly noncomputable in the sense of Turing degrees, I guess). It’s a very imaginative idea, and it’s intriguing that quantum gravity may actually contain a highly noncomputable component (because of the undecidability of many properties of 4-manifolds, that may appear in the gravitational path integral).
Nonetheless, it seems an avoidable hypothesis. A thinking system can derive the truth of Gödel sentences, so long as it can reason about the semantics of the initial axioms, so all you need is a capacity for semantic reflection (I believe Feferman has a formal theory of this under the name “logical reflection”). Penrose doesn’t address this because he doesn’t even tackle the question of how anything physical has intentionality, he sticks purely to mathematics, physics, and logic.
My approach to this is Husserlian realism about the mind. You don’t start with mindless matter and hope to see how mental ontology is implicit in it or emerges from it. You start with the phenomenological datum that the mind is real, and you build on that. At some point, you may wish to model mental dynamics purely as a state machine, neglecting semantics and qualia; and then you can look for relationships between that state machine, and the state machines that physics and biology tell you about.
But you should never forget the distinctive ontology of the mental, that supplies the actual “substance” of that mental state machine. You’re free to consider panpsychism and other identity theories, interactionism, even pure metaphysical idealism; but total eliminativism contradicts the most elementary facts we know, as Descartes and Rand could testify. Even you say that you feel the qualia, it’s just that you think “from a rational perspective, it must be otherwise”.
I’m truly grateful for the opportunity to engage meaningfully on this topic. You’ve brought up some important points:
“I do not have time” — Completely understandable.
”Symbol grounding” — This is inherently tied to the central issue we’re discussing.
”Qualia as Gödel sentences” — An important distinction here: it’s not that qualia are Gödel sentences, but rather, the absence of qualia functions analogously to a Gödel sentence — paradoxically.
Consider this line of reasoning.
This paradox highlights the self-referential inconsistency — invoking Gödel’s incompleteness theorems:
To highlight expressivity:
A. Lisa is a P-Zombie.
B. Lisa asserts that she is a P-Zombie.
C. A true P-Zombie cannot assert or hold beliefs.
D. Therefore, Lisa cannot assert that she is a P-Zombie.
Cases:
Case 1: Lisa is a P-Zombie, and asserts that she is a P-Zombie. Lisa would be complete: not possible.
Case 2: Lisa is not a P-Zombie, and asserts that she is a P-Zombie. Lisa would be not complete: possible but irrelevant.
Case 3: Lisa is a P-Zombie, and asserts that she is not a P-Zombie. Lisa would be not complete: possible.
Case 4: Lisa is not a P-Zombie, and asserts that she is not a P-Zombie. Lisa would be complete: not possible.
For Lisa to be internally consistent yet incomplete, she must maintain that she is not a P-Zombie. But if she maintains that she is not a P-Zombie and in fact is not one, she would be complete, which is impossible. So the only case left open is Case 3: she is a P-Zombie who asserts that she is not.
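(To make the case analysis easy to check, here is a minimal sketch, mine and not from the essay, that just enumerates the four combinations under the rule used above, where “complete” means Lisa’s self-assertion matches the fact:)

```python
from itertools import product

# Enumerate the four Lisa cases. "Complete" follows the usage above: Lisa's
# assertion about being a P-Zombie matches the fact. Per the argument, a
# complete self-model is ruled out, so only the incomplete cases stay possible.
for is_pz, asserts_pz in product([True, False], repeat=2):
    complete = (asserts_pz == is_pz)
    possible = not complete
    print(f"P-Zombie={is_pz!s:5} asserts-P-Zombie={asserts_pz!s:5} "
          f"complete={complete!s:5} possible={possible}")
```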
This connects to Turing’s use of Modus Tollens in the halting problem — a kind of logical self-reference that breaks the system from within.
Regarding Hofstadter: My use of Gödel’s ideas is strictly arithmetic and formal — not metaphorical or analogical, as Hofstadter often approaches them. So while interesting, his theory diverges significantly from what I’m proposing.
You mentioned:
“I can tell you how a ‘believer’...”
— Exactly. That’s the point. “Believer”
“You mention Penrose.”
— Yes. Penrose is consequential. Though I believe his argument is flawed. His reasoning hinges on accepting qualia as a given. If he somehow manages to validate that assumption by proving second order logic in the quantum realm, I’ll tip my hat — but my framework challenges that very basis.
You said:
“My approach is Husserlian realism about the mind — you don’t start with mindless matter and hope...”
— Right, but I’d like to clarify: this critique applies more to Eliminative Materialism than to Eliminative Nominalism. In EN, ‘matter’ itself is a symbol — not a foundational substance. So the problem isn’t starting with “mindless matter” — it’s assuming that “matter” has ontological priority at all.
And finally, on the notion of substance — I’m not relying on that strawman. My position isn’t based on classical substance dualism.
The argument you put forward is valid, but it is addressed in the text. It is called the “Phenomenological Objection” by Husserl.
Physicalism doesn’t solve the hard problem, because there is no reason a physical process should feel like anything from the inside.
Computationalism doesn’t solve the hard problem, because there is no reason running an algorithm should feel like anything from the inside.
Formalism doesn’t solve the hard problem, because there is no reason an undecidable proposition should feel like anything from the inside.
Of course, you are not trying to explain qualia as such, you are giving an illusionist style account. But I still don’t see how you are predicting belief in qualia.
What’s useful about them? If you are going to predict (the belief in) qualia on the basis of usefulness, you need to state the usefulness. It’s useful to know there is a sabretooth tiger bearing down on you, but why is an appearance more useful than a belief… and what’s the use of a belief-in-appearance?
What necessity?
ETA:
I still see no reason why an undecidable proposition should appear like a quale or a belief in qualia.
Why?
Phenomenal conservatism, the idea that if something seems to exist, you should (defeasibly) assume it does exist, is the basis for belief in qualia. It can be defeated by a counterargument, but the counterargument needs to be valid as an argument. Saying Xs are actually Ys for no particular reason is not valid.
There might be some usefulness!
The statement I’d consider is “I am now going to type the next characters of my comment”. This belief turns out to be true by direct demonstration; it is not provable, because I could just as well leave the commenting until tomorrow and be thinking “I am now going to sleep”; it is not particularly justifiable in advance; and it is useful for making specific plans that branch less on my own actions.
I object to the original post because of probabilistic beliefs, though.
Thanks for being thoughtful
To your objection: again, EN predicted that you would object. The thing is, EN is very abstract: it’s like two halting machines that believe they are universal halting machines trying to understand what it means that they are not universal halting machines.
They say: “Yes, but if the halting problem is true, then I will say it’s true. I must be a UTM.”
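For readers who want the construction behind that analogy, here is the textbook diagonal sketch (my illustration, not part of the essay): any candidate halting decider is refuted by a program built to do the opposite of the decider’s own prediction about it.

```python
def make_diag(halts):
    """Build the diagonal program for a claimed halting decider `halts`."""
    def diag(program):
        if halts(program, program):   # decider says "it halts" -> loop forever
            while True:
                pass
        return "halted"               # decider says "it loops" -> halt at once
    return diag

def naive_halts(program, arg):
    return True                       # a deliberately wrong decider: "always halts"

diag = make_diag(naive_halts)
# Calling diag(diag) here would loop forever, contradicting
# naive_halts(diag, diag) == True; the same trap catches every candidate decider.
```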
Survival.
Addressing your claims: Formalism, Computationalism, and Physicalism are all in opposition to EN. EN says that maybe existence itself is not the fundamental category; soundness is. This means that the idea of things existing or not existing is a symbol produced by the brain.
EN doesn’t attempt to explain why a physical or computational process should “feel like” anything — because it denies that such feeling exists in any metaphysical sense. Instead, it explains why a system like the brain comes to believe in qualia. That belief arises not from phenomenological fact, but from structural necessity: any self-referential, self-regulating system that is formally incomplete (as all sufficiently complex systems are) will generate internally undecidable propositions. These propositions — like “I am in pain” or “I see red” — are not verifiable within the system, but are functionally indispensable for coherent behavior.
The “usefulness” of qualia, then, lies in their regulatory role. By behaving AS IF having experience, the system compresses and coordinates internal states into actionable representations. The belief in qualia provides a stable self-model, enables prioritization of attention, and facilitates internal coherence — even if the underlying referents (qualia themselves) are formally unprovable. In this view, qualia are not epiphenomenal mysteries, but adaptive illusions, generated because the system cannot...
NOW
I understand that you invoke the “Phenomenological Objection,” as I also, of course, “feel” qualia. But under EN, that feeling is not a counterargument — it’s the very evidence that you are part of the system being modeled. You feel qualia because the system must generate that belief in order to function coherently, despite its own incompleteness. You are embedded in the regulatory loop, and so the illusion is not something you can step outside of — it is what it feels like to be inside a model that cannot fully represent itself. The conviction is real; the thing it points to is not.
”because there is no reason a physical process should feel like anything from the inside.”
The key move EN makes — and where it departs from both physicalism and computationalism — is that it doesn’t ask, “Why should a physical process feel like anything from the inside?” It asks, “Why must a physical system come to believe it feels something from the inside in order to function?” The answer is: because a self-regulating, self-modeling system needs to track and report on its internal states without access to its full causal substrate. It does this by generating symbolic placeholders — undecidable internal propositions — which it treats as felt experience. In order to say “I am in pain,” the system must first commit to the belief that there is something it is like to be in pain. The illusion of interiority is not a byproduct — it is the enabling fiction that lets the system tell itself a coherent story across incomplete representations.
OKAY, since you asked the right question, I will include this paragraph in the Abstract.
In other words: the brain doesn’t fuck around with substrate — it fucks around with the proof that you have one. It doesn’t care what “red” is made of; it cares whether the system can act as if it knows what red is, in a way that’s coherent, fast, and behaviorally useful. The experience isn’t built from physics — it’s built from the system’s failed attempts to prove itself to itself. That failure gets reified as feeling. So when you say “I feel it,” what you’re really pointing to is the boundary of what your system can’t internally verify — and must, therefore, treat as foundational. That’s not a bug. That’s the fiction doing its job.
What is a useful prediction that eliminativism makes?
Eliminative Nominalism predicts:
As a consequence of its validity:
Neuroscience will not make progress in explaining consciousness.
The symbol grounding problem will remain unsolved in computational systems.
As a theory with explanatory power:
It describes pathologies and states related to consciousness.
It addresses and potentially resolves the Hard Problem of Consciousness.
As a theory with predictive power:
Interestingly, while it seems to have little direct connection to consciousness (admittedly, it sounds like gibberish), there is a conceptual link to second-order logic and Einstein synchronization. The argument is as follows: since second-order logic is a construct of the human brain, Einstein synchronization—or more precisely, Poincaré–Einstein synchronization—may not be fundamentally necessary for physics, as nature would avoid it either way. (This does not mean that Relativity is wrong or anything like that.)
There is a part in the text that addresses the why and how:
The apparent dependence on simultaneity conventions may merely reflect a coordinate choice, with a real, underlying speed limit, with or without parity, still preserved across all frames. This is not good for EN, but also not catastrophic.
There is a general consensus that an undetectable ether could, in principle, coexist with special relativity, without leading to observable differences. As such, it is often regarded as “philosophically optional”—not required by current physical theories.
Many physicists anticipate that a future theory of quantum gravity will offer a more fundamental framework, potentially resolving or reframing these issues entirely.
Basically: I say symbols are a creation of a brain in order to be self-referential. (Even though it’s not. I can address this.) So symbols should not pop up in nature. Einstein uses a convention in order to keep Nature from using symbols. Without the Einstein convention, the speed of light is defined in terms of the speed of light. Thus the speed of light is a symbol. I say: this will be resolved one way or another.
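A small numerical sketch of that point (mine, using the standard Reichenbach ε-convention; nothing here is specific to EN): the choice of synchronization parameter changes the bookkeeping one-way speeds of light but never the measurable two-way speed, which is why the convention is usually regarded as unobservable.

```python
c = 299_792_458.0   # two-way speed of light, m/s
L = 1_000.0         # one-way distance, metres

def one_way_speeds(eps):
    """One-way speeds implied by a Reichenbach synchronization parameter eps."""
    t_round = 2 * L / c            # round-trip time: convention-independent
    t_out = eps * t_round          # time assigned to the outbound leg
    t_back = (1 - eps) * t_round   # time assigned to the return leg
    return L / t_out, L / t_back

for eps in (0.5, 0.3, 0.7):        # 0.5 is the Poincaré–Einstein convention
    v_out, v_back = one_way_speeds(eps)
    two_way = 2 * L / (L / v_out + L / v_back)
    print(f"eps={eps}: out={v_out:.4e} back={v_back:.4e} two-way={two_way:.4e}")
```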
This looks to me like long-form gibberish, and it’s not helped by its defensive pleas to be taken seriously and pre-emptive psychologising of anyone who might disagree.
People often ask about the reasons for downvotes. I would like to ask, what did the two people who strongly upvoted this see in it? (Currently 14 karma with 3 votes. Leaving out the automatic point of self-karma leaves 13 with 2 votes.)
While I take no position on the general accuracy or contextual robustness of the post’s thesis, I find that its topics and analogies inspire better development of my own questions. The post may not be good advice, but it is good conversation. In particular I really like the attempt to explicitly analyze possible explanations of processes of consciousness emerging from physical formal systems instead of just remarking on the mysteriousness of such a thing ostensibly having happened.
Since you seem to grasp the structural tension here, you might find it interesting that one of EN’s aims is to develop an argument that does not rely on Dennett’s contradictory “Third-Person Absolutism”—that is, the methodological stance which privileges an objective, external (third-person) perspective while attempting to explain phenomena that are, by nature, first-person emergent. EN tries to show that subjective illusions like qualia do not need to be explained away in third-person terms, but rather understood as consequences of formal limitations on self-modeling systems.
Thank you — that’s exactly the spirit I was hoping to cultivate. I really appreciate your willingness to engage with the ideas on the level of their generative potential, even if you set aside their ultimate truth-value. Which is a hallmark of critical thinking.
I would be insanely glad if you could engage with it deeper since you strike me as someone who is… rational.
I especially resonate with your point about moving beyond mystery-as-aesthetic, and toward a structural analysis of how something like consciousness could emerge from given constraints. Whether or not EN is the right lens, I think treating consciousness as a problem of modeling rather than a problem of magic is a step in the right direction.
Yeah, same here. This feels like a crossover between the standard Buddhist woo and LLM slop, sprinkled with “quantum” and “Gödel”. The fact that it has positive karma makes me feel sad about LW.
Since it was written using an LLM, I think it is only fair to ask an LLM to summarize it:
So, I guess the key idea is to use the Gödel’s incompleteness theorem to explain human psychology.
Standard crackpottery, in my opinion. Humans are not mathematical proof systems.
I agree that this sounds not very valuable; sounds like a repackaging of illusionism without adding anything. I’m surprised about the votes (didn’t vote myself).
Illusionism often takes a functionalist or behavioral route: it says that consciousness is not what it seems, and explains it in terms of cognitive architecture or evolved heuristics. That’s valuable, but EN goes further — or perhaps deeper — by grounding the illusion not just in evolutionary utility, but in formal constraints on self-referential systems.
In other words:
This brings tools like Gödel’s incompleteness, semantic closure, and regulator theory into the discussion in a way that directly addresses why subjective experience feels indubitable even if it’s structurally ungrounded.
So yes, it may sound like illusionism — but it tries to explain why illusionism is inevitable, not just assert it.
That said, I’d genuinely welcome criticism or counterexamples. If it’s just a rebranding, let’s make that explicit. But if there’s a deeper structure here worth exploring, I hope it earns the scrutiny.
Sorry, but isn’t this written by an LLM? Especially since milan’s other comments ([1], [2], [3]) are clearly in a different style, the emotional component goes from 9⁄10 to 0⁄10 with no middle ground.
I find this extremely offensive (and I’m kinda hard to offend I think), especially since I’ve ‘cooperated’ with milan’s wish to point to specific sections in the other comment. LLMs in posts is one thing, but in comments, yuck. It’s like, you’re not worthy of me even taking the time to respond to you.
The guidelines don’t differentiate between posts and comments but this violates them regardless (and actually the post does as well) since it very much does have the stereotypical writing style of an AI assistant, and the comment also seems copy-pasted without a human element at all.
QED
“Since it was written using LLM” “LLM slop.”
Some of you are soooo toxic.
First of all, try debating an LLM about illusory qualia: you’ll likely find it attributing the phenomenon to self-supervised learning. It has a strong bias toward Emergentism, likely stemming from… I don’t know, humanity’s slight bias towards its own experience.
But yes, I used an LLM for proofreading. I disclosed that, and I am not ashamed of it.
That concern is understandable — and in fact, it’s addressed directly and repeatedly in the text. The argument doesn’t claim that humans are formal proof systems in a literal or ontological sense. Rather, it explores how any system capable of symbolic self-modeling (like the brain) inherits formal constraints analogous to those found in expressive logical systems — particularly regarding incompleteness, self-reference, and verification limits.
It’s less about reducing humans to Turing machines and more about using the logic of formal systems to expose the structural boundaries of introspective cognition.
You’re also right to be skeptical — extraordinary claims deserve extraordinary scrutiny. But the essay doesn’t dodge that. It explicitly offers a falsifiable framework, makes empirical predictions, and draws from well-established formal results (e.g. Gödel, Conant & Ashby) to support its claims. It’s not hiding behind abstraction — it’s leaning into it, and then asking to be tested.
And sure, the whole thing could still be wrong. That’s fair. But dismissing it as “crackpottery” without engaging the argument — especially on a forum named LessWrong — seems to bypass the very norms of rational inquiry we try to uphold here.
If the argument fails, let’s show how — not just that. That would be far more interesting, and far more useful.
Rough day, huh? Seriously though, you’ve got a thesis, but you’re missing a clear argument. Let me help: pick one specific thing that strikes you as nonsensical. Then, explain why it doesn’t make sense. By doing that, you’re actually contributing—helping humanity by exposing “long-form gibberish”.
Just make sure the thing you choose isn’t something trivially wrong. The more deeply flawed it is—yet still taken seriously by others—the better.
But, your critique of preemptive psychologising is unwarranted: I created a path for quick comprehension. “To quickly get the gist of it in just a few minutes, go to Section B and read: 1.0, 1.1, 1.4, 2.2, 2.3, 2.5, 3.1, 3.3, 3.4, 5.1, 5.2, and 5.3.”
Here’s one section that strikes me as very bad
I know what this is trying to do but invoking mythical language when discussing consciousness is very bad practice since it appeals to an emotional response. Also it’s hard to read.
Similar things are true for lots of other sections here, very unnecessarily poetic language. I guess you can say that this is policing tone, but I think it’s valid to police tone if the tone is manipulative (on top of just making it harder and more time-intensive to read).
Since you asked for a section that’s explicitly nonsense rather than just bad, I think this one deserves the label:
First of all, if you can’t encode something, it could just be that the thing is not well-defined, rather than that the system is insufficiently powerful.
Second, the way this is written (unless the claim is further justified elsewhere) implies that the inability to encode human concepts in formal languages is self-evident, presumably because no one has managed it so far. This is completely untrue; formal[^1] languages are extremely impractical, which is why mathematicians don’t write any real proofs in them. If a human concept like irony could be encoded, it would be extremely long and way way beyond the ability of any human to write down. So even if it were theoretically possible, we almost certainly wouldn’t have done it yet, which means that it not having been done yet is negligible evidence of it being impossible.
[1]: typo corrected from “natural”
“natural languages are extremely impractical, which is why mathematicians don’t write any real proofs in them.”
I have never seen such a blatant disqualification of oneself.
Why do you think you are able to talk about these subjects if you are not versed in proof theory?
Just type it into ChatGPT:
Research proof theory, type theory, and Zermelo–Fraenkel set theory with the axiom of choice (ZFC) before making statements here.
At the very least, try not to be miserable. Someone who mistakes prose for an argument should not have the privilege of indulging in misery.
The sentence you quoted contains a typo; it is meant to say that formal languages are extremely impractical.
Well, this is also not true, because “practical” as a predicate… is incomplete… meaning it’s practical depending on who you ask.
Talking about “formal” or “natural” languages in a general way is very hard...
The rule is this: Any reasoning or method is acceptable in mathematics as long as it leads to sound results.
I’m actually amused that you criticized the first paragraph of an essay for being written in prose — it says so much about the internet today.
There you are — more psychologising.
Now condescension.
Okay I… uhm… did I do something wrong to you? Do we know each other?
We do not know each other. I know nothing about you beyond your presence on LW. My comments have been to the article at hand and to your replies. Maybe I’ll expand on them at some point, but I believe the article is close to “not even wrong” territory.
Meanwhile, I’d be really interested in hearing from those two strong upvoters, or anyone else whose response to it differs greatly from mine.
The statement “the article is ‘not even wrong’” is closely related to the inability to differentiate: Is it formally false? Or is it conclusively wrong? Or, as you prefer, perhaps both?
I am very sure that you will hear from them. You strike me as a person who is great to interact with. I am sure that they will be happy to justify themselves to you.
Everyone loves a person who just looks at something and says… eeeeh gibberish...
Especially if that person is correctly applying pejorative terms.
Nothing is preventing us from designing a system consisting of a module generating a red-tinged video stream and image recognition software that looks at the stream and based on some details of it sends commands to the robotic body. Now, it would be a silly and overcomplicated way to design a system, but that’s beside the point.
Don’t confuse the citation and the referent. “Matter” is a symbol in our map, while matter itself is in the territory. Naturally, the territory predates the map.
This seems to be entirely vibe based. You don’t need to lean idealist to talk about software.
The incompleteness theorem implies the existence of either an unprovable and true statement (if the system is incomplete) or a provable and false statement (if it’s inconsistent).
It seems that all the substance of your argument is based on a completely wrong premise.
You basically left our other more formal conversation to engage in the critique of prose.
*slow clap*
These are metaphors to lead the reader slowly to the idea… This is not the Argument. The Argument is right there and you are not engaging with it.
You need to understand the claim first in order to deconstruct it. Now you might say I have a psychotic fit, but earlier as we discussed Turing, you didn’t seem to resonate with any of the ideas.
If you are ready to engage with the ideas I am at your disposal.
Not at all. I’m doing both. I specifically started the conversation in the post which is less… prose. But I suspect you may also be interested in engagement with the long post that you put so much effort to write. If it’s not the case—nevermind and let’s continue the discussion in the argument thread.
If you require flawed metaphors, what does it say about the idea?
Frankly speaking, that does indeed look like that. From my perspective you are not being particularly coherent, keep jumping from point to point, with nearly no engagement with what I write. But this can be an artifact of large inferential distances. So you have my benefit of the doubt and I’m eager to learn whether there is some profound substance in your reasoning.
Actually, I think this argument demonstrates the probable existence of the opposite of its top-line conclusion.
In short, we can infer from the fact that a symbolic regulator has more possible states than it has inputs that anything that can be modeled as a symbolic regulator has a limited amount of information about its own state (that is, limited internal visibility). You can do clever things with indexing so that it can have information about any part of its state, but not all at once.
In a dynamic system, this creates something that acts a lot like consciousness, maybe even deserves the name.
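A toy sketch of that pigeonhole point (my illustration, not the commenter’s): a regulator whose self-model buffer is smaller than its state can mirror any indexed slice of itself, but never the whole state at once.

```python
class Regulator:
    """Toy regulator whose self-model is strictly smaller than its full state."""
    def __init__(self, size=16, window=4):
        self.state = [0] * size          # full internal state
        self.window = window             # capacity of the self-model
        self.self_view = [0] * window    # what it currently "sees" of itself
        self.pointer = 0                 # which slice is currently mirrored

    def reflect(self, index):
        """Point the self-model at one slice of the state and copy it in."""
        self.pointer = index
        self.self_view = self.state[index:index + self.window]

    def knows_whole_state(self):
        # The self-view plus pointer carries at most `window` cells of
        # information, so for size > window this is always False.
        return self.window >= len(self.state)

r = Regulator()
r.reflect(8)                                # it can inspect any part of itself...
print(r.self_view, r.knows_whole_state())   # ...but never all of it at once
```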
Sorry, but for me this is a magical moment. I have been working on this shit for years… Twisting it… Thinking… Researching… Telling friends… Family… They don’t understand. And now finally someone might understand it. In EN consciousness is isomorphic to the system. You are almost there.
I knew from your earlier comment that you are informed. You just need to pull the string. It is like a double halting problem where the second layer is affecting you. You are part of the thought experiment!
SO JUST HOLD ON TO THAT THOUGHT THAT YOU HAVE THERE...
And now this: You are not able to believe it’s true.
BUT! From a logical perspective IT COULD BE TRUE.
Try to pull on that string… You are almost there
YES. IT IS THE SAME. YOU GOT IT. WE GOT A WINNER.
Being P-Zombie and being conscious IS THE SAME THING.
Fucking finally… I’m arguing like an idiot here
How could undecidability, unprovability, self-referential modeling, incompleteness, or any sort of logic generate the redness of red?
Incomplete, self-referential modeling → ? → red
The feeling part remains unexplained.
We still seem to have experience. How can this “seeming” feel like something? If you boil everything down to math, how can math feel like anything?
This seems contradictory.
What about torturing animals?
Among many other things, I don’t think the depiction of illusionism contradistinguished here from EM is fair.
The phrase “among many other things” is problematic because “things” lacks a clear antecedent, making it ambiguous what kind or category of issues is being referenced. This weakens the clarity and precision of the sentence.
Please do not engage with this further.
The problem seems to be that we have free choice of internal formal systems, and:
A consistent system, extended by an unprovable axiom, is also consistent (since if this property were false, then we would be able to prove the axiom by taking the extension and searching for a contradiction).
Consequently accepting the unprovable as true or false only has consequences for other unprovable statements.
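(For what it’s worth, one standard way to make that step precise, in my paraphrase via the deduction theorem: if T is consistent and T ⊬ ¬φ, then T ∪ {φ} is consistent, because any contradiction derived in T ∪ {φ} would give T ⊢ φ → ⊥, i.e. T ⊢ ¬φ. The same holds with φ and ¬φ swapped, which is why adding either truth value of a genuinely undecidable sentence as an axiom is safe.)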
I don’t think this entire exercise says anything.
In short, we expect probabilistic logics and decision theories to converge under self-reflection.
Yes, but these are all second-order-logic terms! They are incomplete… You are again trying to justify the mind with its own products. You are allowed to use ONLY first-order-logic terms!
Gödel effectively disrupted the foundations of Peano Arithmetic through his use of Gödel numbering. His groundbreaking proof—formulated within first-order logic—demonstrated something profound: that systems of symbols are inherently incomplete. They cannot fully encapsulate all truths within their own framework.
And why is that? Because symbols themselves are not real in the physical sense. You won’t find them “in nature”—they are abstract representations, not tangible entities.
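To make the numbering trick concrete, here is a minimal sketch of the standard construction (mine, purely illustrative): a formula’s symbol codes are packed into one natural number via prime exponents, so statements about formulas become statements about numbers.

```python
def primes(n):
    """First n primes by trial division; fine for a toy example."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(symbol_codes):
    """Encode e.g. [1, 3, 2] as 2**1 * 3**3 * 5**2 = 1350."""
    g = 1
    for p, code in zip(primes(len(symbol_codes)), symbol_codes):
        g *= p ** code
    return g

print(godel_number([1, 3, 2]))  # 1350
```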
Take a car, for instance. What is a car? It’s made of atoms. But what is an atom? Protons, neutrons, quarks… the layers of symbolic abstraction continue infinitely in every direction. Each concept is built upon others, none of which are the “ultimate” reality.
So what is real?
Even the word “real” is a property—a label we assign, another symbol in the chain.
What about sound? What is sound really? Vibrations? Perceptions? Again, more layers of abstraction.
And numbers—what are they? “One” of something, “two” of something… based on distinctions, on patterns, on logic. Conditional statements: if this, then that.
Come on—make the jump.
It is a very abstract idea… It does seem like gibberish if you are not acquainted with it… but it’s not. And I think that you might “get it”. It has a high barrier to entry. That’s why I am not really mad that people here do not get the logic behind it.
There is something in reality that is inscrutable for us humans. And that thing works in second-order logic. It is not existing or non-existing, but sound. Evolution exploits that thing… to create something that converges towards something that would be… almost impossible, but not quite. Unknowable.
A thought before I need to log off for the day:
This line of argument seems to indicate that physical systems can only completely model smaller physical systems (or the trivial model of themselves), and so complete models of reality are intractable.
I am not sure what else you are trying to get at.
That’s a great observation — and I think you’re absolutely right to sense that this line of reasoning touches epistemic limits in physical systems generally.
But I’d caution against trying to immediately affirm new metaphysical claims based on those limits (e.g., “models of reality are intractable” or “systems can only model smaller systems”).
Why? Because that move risks falling back into the trap that EN is trying to illuminate:
That we use the very categories generated by a formally incomplete system (our mind) to make claims about what can or can’t be known.
Try to combine two things at once:
1. EN would love to eliminate everything if it could.
The logic behind it: what stays can stay (first-order logic).
EN would also love to eliminate first-order logic — but it can’t.
Because first-order logic would eliminate EN first.
Why? Because EN is a second-order construct — it talks about how systems model themselves, which means it presupposes the formal structure of first-order logic just to get off the ground.
So EN doesn’t transcend logic. It’s embedded in it.
Which is fitting — since EN is precisely about illusions that arise within an expressive system, not outside of it.
2. What EN is trying to show is that these categories — “consciousness,” “internal access,” even “modeling” — are not reliable ontologies, but functional illusions created by a system that must regulate itself despite its incompleteness.
So rather than taking EN as a reason to affirm new limits about “reality” or “systems,” the move is more like:
“Let’s stop trusting the categories that feel self-evident — because their self-evidence is exactly what the illusion predicts.”
It’s not about building a new metaphysical map. It’s about realizing why any map we draw from the inside will always seem complete — even when it isn’t.
Now...
You might say that then we are fucked. But that is not the case:
- Turing and Gödel proved that it is possible to critique second order logic with first order logic.
- The whole of physics is in first-order logic (except that Poincaré synchronization issue, which, okay).
- Group theory is insanely complex. First-order logic.
Now, is second-order logic bad? No, it is insanely useful in the context of how humans evolved: to make general (fast) assumptions about many things! Sets and such. ZFC. Evolution.
I think you might be grossly misreading Gödel’s incompleteness theorem. Specifically, it proves that a (sufficiently expressive) formal system is either incomplete or inconsistent. You have not addressed the possibility that minds are in fact inconsistent/make moves that are symbolically describable but unjustifiable (which generate falsehoods).
We know both happen.
The question then is what to do with inconsistent mind.
Thanks for meaningfully engaging with the argument — it’s rare and genuinely appreciated.
Edit: You’re right that Gödel’s theorem allows for both incompleteness and inconsistency — and minds are clearly inconsistent in many ways. But the argument of Eliminative Nominalism (EN) doesn’t assume minds are consistent; it claims that even if they were, they would still be incomplete when modeling themselves.
Also, evolution acts as a filtering process — selecting for regulatory systems that tend toward internal consistency, because inconsistent regulators are maladaptive. We see this in edge cases too: under LSD (global perturbation = inconsistency), we observe ego dissolution and loss of qualia at higher doses. In contrast, severe brain injuries (e.g., hemispherectomy) often preserve the sense of self and continuity — suggesting that extending a formal system (while preserving its consistency) renders it incomplete, and thus qualia persists. (in the essay)
That’s exactly why EN is a strong theory: it’s falsifiable. If a system could model its own consciousness formally and completely, EN would be wrong.
EN is the first falsifiable theory of consciousness.
In what sense is second-order logic “beyond the reach of machines”? Is it non-deterministic? Or what are you trying to say here? (Maybe some examples would help)
Ah okay. Sorry for being an a-hole, but some of the comments here are just...
You asked a question in good faith and I mistook it.
So, it’s simple:
Imagine you’re playing with LEGO blocks.
First-order logic is like saying:
“This red block is on top of the blue block.”
You’re talking about specific things (blocks), and how they relate. It’s very rule-based and clear.
Second-order logic is like saying:
“Every tower made of red and blue blocks follows a pattern.”
Now you’re talking about patterns of blocks, not just the blocks. You’re making rules about rules.
Why can’t machines fully “do” second-order logic?
Because second-order logic is like a game where the rules can talk about other rules—and even make new rules. Machines (like computers or AIs) are really good at following fixed rules (like in first-order logic), but they struggle when:
The rules are about rules themselves, and
You can’t list or check all the possibilities, ever—even in theory.
This is what people mean when they say second-order logic is “not recursively enumerable”: its valid sentences cannot all be listed by any machine. It’s like having infinite LEGOs in infinite patterns, and no way to check them all with a checklist.
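To illustrate what “recursively enumerable” buys you in the first-order case (a toy stand-in of mine, not a real prover): you can mechanically list everything derivable, so any genuine theorem eventually shows up in the listing, but the listing alone never certifies that something is not a theorem.

```python
from collections import deque

AXIOM = "I"
RULES = [
    lambda s: s + "U" if s.endswith("I") else None,  # from xI infer xIU
    lambda s: s + s,                                 # from x infer xx
]

def enumerate_theorems(limit=15):
    """Breadth-first listing of every string derivable from the axiom."""
    seen, queue, out = {AXIOM}, deque([AXIOM]), []
    while queue and len(out) < limit:
        s = queue.popleft()
        out.append(s)
        for rule in RULES:
            t = rule(s)
            if t and t not in seen:
                seen.add(t)
                queue.append(t)
    return out

print(enumerate_theorems())  # finds theorems one by one; cannot refute non-theorems
```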
Maybe I should have asked: In what sense are machines “fully doing” first-order logic? I think I understand the part where first-order logic formulas are recursively enumerable, in theory, but isn’t that intractable to the point of being useless and irrelevant in practice?
Think of it like this: Why is Gödel’s attack on ZFC and Peano Arithmetic so powerful...
Gödel’s Incompleteness Theorems are powerful because they revealed inherent limitations USING ONLY first-order logic. He showed that any sufficiently expressive, consistent system cannot prove all truths about arithmetic within itself… using only numbers.
First-order logic is often seen as more fundamental because it has desirable properties like completeness and compactness, and its semantics are well-understood. In contrast, second-order logic, while more expressive, lacks these properties and relies on stronger assumptions...
According to EN, this is also because second-order logic is entirely human-made. So what is second-order logic?
The question itself is a question of second-order logic.
If you ask me what first-order logic is… the question STILL is a question of second-order logic.
First-order logic covers things that are clear as day: 1+1, what is x in x+3=4… these types of things.
Honestly, I can’t hold it against anyone who bounces off the piece. It’s long, dense, and, let’s face it — it proposes something intense, even borderline unpalatable at first glance.
If I encountered it cold, I can imagine myself reacting the same way: “This is pseudoscientific nonsense.” Maybe I wouldn’t even finish reading it before forming that judgment.
And that’s kind of the point, or at least part of the irony: the argument deals precisely with the limits of self-modeling systems, and how they generate intuitions (like “of course experience is real”) that feel indubitable because of structural constraints. So naturally, a theory that denies the ground of those intuitions will feel like it’s violating common sense — or worse, wasting your time.
Still, I’d invite anyone curious to read it less as a metaphysical claim and more as a kind of formal diagnosis — not “you’re wrong to believe in qualia,” but “you’re structurally unable to verify them, and that’s why they feel so real.”
If it’s wrong, I want to know how. But if it just feels wrong, that might be a clue that it’s touching the very boundary it’s trying to illuminate.
Is not the good regulator theorem.
The good regulator theorem is “there is a (deterministic) mapping h: S → R from the states of the system to the states of the regulator.”
I don’t think this requires embeddedness
You’re absolutely right to point out that the original formulation of the Good Regulator Theorem (Conant & Ashby, 1970) states that:
“Every good regulator of a system must be a model of that system,” formalized as a deterministic mapping h: S → R from the states of the system to the states of the regulator.
Strictly speaking, this does not require embeddedness in the physical sense—it is a general result about control systems and model adequacy. The theorem makes no claim that the regulator must be physically located within the system it regulates.
However, in the context of cognitive systems (like the brain) and self-referential agents, I am extending the logic and implications of the theorem beyond its original formulation, in a way that remains consistent with its spirit.
When the regulator is part of the system it regulates (i.e., is embedded or self-referential)—as is the case with the human brain modeling itself—then the mapping h: S → R becomes reflexive. The regulator must model not only the external system but itself as a subsystem.
This recursive modeling introduces self-reference and semantic closure, which—when the system is sufficiently expressive (as in symbolic thought)—leads directly to Gödelian incompleteness. That is, no such regulator can fully model or verify all truths about itself while remaining consistent.
So while the original theorem only requires that a good regulator be a model, I am exploring what happens when the regulator models itself, and how that logically leads to structural incompleteness, subjective illusions, and the emergence of unprovable constructs like qualia.
Yes, you’re absolutely right to point out that this raises an important issue — one that must be addressed, and yet cannot be resolved in the conventional sense. But this is not a weakness in the argument; in fact, it is precisely the point.
To model itself completely, the map would have to include a representation of itself, which would include a representation of that representation, and so on — collapsing into paradox or incompleteness.
This isn’t just a practical limitation. It’s a structural impossibility.
So when we extend the Good Regulator Theorem to embedded regulators — like the brain modeling itself — we don’t just encounter technical difficulty, we hit the formal boundary of self-representation. No system can fully model its own structure and remain both consistent and complete.
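A throwaway sketch of that regress (mine, not part of the reply): a self-model that must contain a complete copy of itself never bottoms out, so any actual regulator has to truncate it somewhere.

```python
def build_self_model(depth, cutoff):
    """Each level of the self-model needs a copy of itself; real systems truncate."""
    model = {"world": "...", "self": None}
    if depth < cutoff:   # a genuinely complete self-model would have no cutoff
        model["self"] = build_self_model(depth + 1, cutoff)
    return model

print(build_self_model(0, 3))
# With no cutoff the recursion never bottoms out (Python raises RecursionError).
```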
But you must ask yourself: would it be a worse regulator? Definitely not.
The question now is, who will be the second g-Zombie?
I claim that I exist, and that I am now going to type the next words of my response. Both of those certainly look true. As for whether these beliefs are provable, I do not particularly care; instead, I invoke the nameless:
Every step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory.
My black-box functions yield a statement “I exist” as true or very probable, and they are also correct in that.
After all, if I exist, I do not want to deny my existence. If I don’t exist… well, let’s go with the litany anyway… I want to accept I don’t exist. Let me not be attached to beliefs I may not want.
Again, read the G-Zombie Argument carefully. You cannot deny your existence.
Here is the original argument, stated more formally… (though there is an even more formal version):
https://www.lesswrong.com/posts/qBbj6C6sKHnQfbmgY/i-g-zombie
If you deny your existence… and you don’t exist… AHA! Well, then we have a complete system. Which is impossible.
But since nobody is reading the paper fully, and everyone makes loud-mouthed assumptions about what I want to show with EN...
The G-Zombie argument is not the P-Zombie argument, but a far more abstract formulation. But these idiots don’t get it.