Yeah, same here. This feels like a crossover between the standard Buddhist woo and LLM slop, sprinkled with “quantum” and “Gödel”. The fact that it has positive karma makes me feel sad about LW.
Since it was written using an LLM, I think it is only fair to ask an LLM to summarize it:
Summary of “Eliminative Nominalism”
This philosophical essay introduces Eliminative Nominalism (EN), a theoretical framework that extends Eliminative Materialism (EM) to address what the author considers its foundational oversight: the conflation of first-order physical description with second-order representational structure.
Core Arguments
Mind as Expressive System: The mind is a logically expressive system that inherits the constraints of formal systems, making it necessarily incomplete (via Gödel’s incompleteness theorems).
Consciousness as Formal Necessity: Subjective experience (“qualia”) persists not because it reflects reality but because its rejection would be arithmetically impossible within the system that produces it. The brain, as a “good regulator,” cannot function without generating these unprovable formal assertions.
Universe Z Thought Experiment: The author proposes a universe physically identical to ours but lacking epiphenomenal entities (consciousness), where organisms still behave exactly as in our universe. On grounds of parsimony and evolutionary efficiency, the author argues that we likely inhabit such a universe.
Beyond Illusionism: Unlike EM’s “illusionism,” EN doesn’t claim consciousness is merely an illusion (which presupposes someone being deceived) but rather a structural necessity—a computational artifact that cannot be eliminated even if metaphysically unreal.
Implications
Consciousness is neither reducible nor emergent, but a formal fiction generated by self-referential systems
The persistence of qualia reflects the mind’s inherent incompleteness, not its access to metaphysical truth
Natural language and chain of thought necessarily adhere to formalizable structures that constrain cognition
The “hard problem” of consciousness is dissolved rather than solved
The essay concludes that while EN renders consciousness metaphysically problematic, it doesn’t undermine ethics or human experience, offering instead a testable, falsifiable framework for understanding mind that willingly subjects itself to empirical criteria.
So, I guess the key idea is to use Gödel’s incompleteness theorem to explain human psychology.
Standard crackpottery, in my opinion. Humans are not mathematical proof systems.
I agree that this doesn’t sound very valuable; it sounds like a repackaging of illusionism without adding anything. I’m surprised about the votes (didn’t vote myself).
Illusionism often takes a functionalist or behavioral route: it says that consciousness is not what it seems, and explains it in terms of cognitive architecture or evolved heuristics. That’s valuable, but EN goes further — or perhaps deeper — by grounding the illusion not just in evolutionary utility, but in formal constraints on self-referential systems.
In other words:
EN doesn’t just say, “You’re wrong about qualia.” It says, “You must be wrong — formally — because any system that models itself will necessarily generate undecidable propositions (e.g., qualia) that feel real but cannot be verified.”
This brings tools like Gödel’s incompleteness, semantic closure, and regulator theory into the discussion in a way that directly addresses why subjective experience feels indubitable even if it’s structurally ungrounded.
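For reference, here are the two formal results doing the work in that claim, in their standard textbook form (the application to brains is the essay’s analogy, not something the theorems themselves assert). The diagonal lemma: for any formula $\varphi(x)$ in the language of a sufficiently expressive theory $T$, there is a sentence $\psi$ such that

$$T \vdash \psi \leftrightarrow \varphi(\ulcorner \psi \urcorner),$$

so self-referential sentences are always constructible. Gödel’s first incompleteness theorem: if $T$ is consistent, recursively axiomatizable, and interprets basic arithmetic, then there is a sentence $G_T$ (obtained by diagonalizing on “is not provable in $T$”) with

$$T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T$$

(the second half requires $\omega$-consistency in Gödel’s original proof, or Rosser’s variant for plain consistency). Whether a brain’s self-model actually satisfies the hypotheses of these theorems is precisely the point in dispute here.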
So yes, it may sound like illusionism — but it tries to explain why illusionism is inevitable, not just assert it.
That said, I’d genuinely welcome criticism or counterexamples. If it’s just a rebranding, let’s make that explicit. But if there’s a deeper structure here worth exploring, I hope it earns the scrutiny.
Sorry, but isn’t this written by an LLM? Especially since milan’s other comments ([1], [2], [3]) are clearly in a different style, the emotional component goes from 9⁄10 to 0⁄10 with no middle ground.
I find this extremely offensive (and I’m kinda hard to offend I think), especially since I’ve ‘cooperated’ with milan’s wish to point to specific sections in the other comment. LLMs in posts is one thing, but in comments, yuck. It’s like, you’re not worthy of me even taking the time to respond to you.
The guidelines don’t differentiate between posts and comments but this violates them regardless (and actually the post does as well) since it very much does have the stereotypical writing style of an AI assistant, and the comment also seems copy-pasted without a human element at all.
A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of 1 minute per 50 words (enough to read the content several times and perform significant edits), you should not include any information that you can’t verify, haven’t verified, or don’t understand, and you should not use the stereotypical writing style of an AI assistant.
First of all, try debating an LLM about illusory qualia: you’ll likely find it attributing the phenomenon to self-supervised learning. It has a strong bias toward Emergentism, likely stemming from… I don’t know, humanity’s slight bias towards its own experience.
But yes, I used an LLM for proofreading. I disclosed that, and I am not ashamed of it.
“Standard crackpottery, in my opinion. Humans are not mathematical proof systems.”
That concern is understandable — and in fact, it’s addressed directly and repeatedly in the text. The argument doesn’t claim that humans are formal proof systems in a literal or ontological sense. Rather, it explores how any system capable of symbolic self-modeling (like the brain) inherits formal constraints analogous to those found in expressive logical systems — particularly regarding incompleteness, self-reference, and verification limits.
It’s less about reducing humans to Turing machines and more about using the logic of formal systems to expose the structural boundaries of introspective cognition.
You’re also right to be skeptical — extraordinary claims deserve extraordinary scrutiny. But the essay doesn’t dodge that. It explicitly offers a falsifiable framework, makes empirical predictions, and draws from well-established formal results (e.g. Gödel, Conant & Ashby) to support its claims. It’s not hiding behind abstraction — it’s leaning into it, and then asking to be tested.
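Since the Conant & Ashby result is cited more often than it is stated, here is its standard form, the good regulator theorem (1970): every good regulator of a system must be a model of that system. Slightly more formally, in their setup, if a regulator $R$ holds the entropy of the outcome variable $Z$ of a regulated system $S$ at its minimum and is maximally simple among optimal regulators, then there is a mapping $h : S \to R$ with $r = h(s)$, i.e., the regulator’s states are a function of (an image of) the system’s states. Applying this to a brain modeling itself goes beyond the theorem’s original setting and is an interpretive step of the essay.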
And sure, the whole thing could still be wrong. That’s fair. But dismissing it as “crackpottery” without engaging the argument — especially on a forum named LessWrong — seems to bypass the very norms of rational inquiry we try to uphold here.
If the argument fails, let’s show how — not just that. That would be far more interesting, and far more useful.
QED
“Since it was written using an LLM” “LLM slop.”
Some of you are soooo toxic.