This looks to me like long-form gibberish, and it’s not helped by its defensive pleas to be taken seriously and pre-emptive psychologising of anyone who might disagree.
People often ask about the reasons for downvotes. I would like to ask, what did the two people who strongly upvoted this see in it? (Currently 14 karma with 3 votes. Leaving out the automatic point of self-karma leaves 13 with 2 votes.)
While I take no position on the general accuracy or contextual robustness of the post’s thesis, I find that its topics and analogies inspire better development of my own questions. The post may not be good advice, but it is good conversation. In particular I really like the attempt to explicitly analyze possible explanations of processes of consciousness emerging from physical formal systems instead of just remarking on the mysteriousness of such a thing ostensibly having happened.
Since you seem to grasp the structural tension here, you might find it interesting that one of EN’s aims is to develop an argument that does not rely on Dennett’s contradictory “Third-Person Absolutism”—that is, the methodological stance which privileges an objective, external (third-person) perspective while attempting to explain phenomena that are, by nature, first-person emergent. EN tries to show that subjective illusions like qualia do not need to be explained away in third-person terms, but rather understood as consequences of formal limitations on self-modeling systems.
Thank you — that’s exactly the spirit I was hoping to cultivate. I really appreciate your willingness to engage with the ideas on the level of their generative potential, even if you set aside their ultimate truth-value, which is a hallmark of critical thinking.
I would be insanely glad if you could engage with it more deeply, since you strike me as someone who is… rational.
I especially resonate with your point about moving beyond mystery-as-aesthetic, and toward a structural analysis of how something like consciousness could emerge from given constraints. Whether or not EN is the right lens, I think treating consciousness as a problem of modeling rather than a problem of magic is a step in the right direction.
Yeah, same here. This feels like a crossover between the standard Buddhist woo and LLM slop, sprinkled with “quantum” and “Gödel”. The fact that it has positive karma makes me feel sad about LW.
Since it was written using an LLM, I think it is only fair to ask an LLM to summarize it:
Summary of “Eliminative Nominalism”
This philosophical essay introduces Eliminative Nominalism (EN), a theoretical framework that extends Eliminative Materialism (EM) to address what the author considers its foundational oversight: the conflation of first-order physical description with second-order representational structure.
Core Arguments
Mind as Expressive System: The mind is a logically expressive system that inherits the constraints of formal systems, making it necessarily incomplete (via Gödel’s incompleteness theorems).
Consciousness as Formal Necessity: Subjective experience (“qualia”) persists not because it reflects reality but because its rejection would be arithmetically impossible within the system that produces it. The brain, as a “good regulator,” cannot function without generating these unprovable formal assertions.
Universe Z Thought Experiment: The author proposes a universe physically identical to ours but lacking epiphenomenal entities (consciousness), where organisms still behave exactly as in our universe. Through parsimony and evolutionary efficiency, the author argues we likely inhabit such a universe.
Beyond Illusionism: Unlike EM’s “illusionism,” EN doesn’t claim consciousness is merely an illusion (which presupposes someone being deceived) but rather a structural necessity—a computational artifact that cannot be eliminated even if metaphysically unreal.
Implications
Consciousness is neither reducible nor emergent, but a formal fiction generated by self-referential systems
The persistence of qualia reflects the mind’s inherent incompleteness, not its access to metaphysical truth
Natural language and chain of thought necessarily adhere to formalizable structures that constrain cognition
The “hard problem” of consciousness is dissolved rather than solved
The essay concludes that while EN renders consciousness metaphysically problematic, it doesn’t undermine ethics or human experience, offering instead a testable, falsifiable framework for understanding mind that willingly subjects itself to empirical criteria.
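(For reference: the “good regulator” claim in the summary leans on Conant and Ashby’s 1970 theorem. A standard paraphrase, mine rather than the post’s: any regulator of a system that is optimal and maximally simple must implement a mapping from the system’s states, i.e. it must be a model of that system.)

```latex
% Conant & Ashby (1970): "Every good regulator of a system must be
% a model of that system." Standard paraphrase, not the post's wording.
% S = states of the regulated system, R = states of the regulator,
% Z = outcomes ("essential variables") jointly determined by S and R.
% If the regulator minimizes the entropy H(Z) and is maximally simple,
% then its state is a deterministic function of the system's state:
\[
  R \ \text{optimal and maximally simple}
  \;\Longrightarrow\;
  \exists\, h : S \to R \ \text{such that} \ \Pr\bigl(R = h(S)\bigr) = 1 .
\]
```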
So, I guess the key idea is to use Gödel’s incompleteness theorem to explain human psychology.
Standard crackpottery, in my opinion. Humans are not mathematical proof systems.
I agree that this doesn’t sound very valuable; it sounds like a repackaging of illusionism without adding anything. I’m surprised about the votes (didn’t vote myself).
Illusionism often takes a functionalist or behavioral route: it says that consciousness is not what it seems, and explains it in terms of cognitive architecture or evolved heuristics. That’s valuable, but EN goes further — or perhaps deeper — by grounding the illusion not just in evolutionary utility, but in formal constraints on self-referential systems.
In other words:
EN doesn’t just say, “You’re wrong about qualia.” It says, “You must be wrong — formally — because any system that models itself will necessarily generate undecidable propositions (e.g., qualia) that feel real but cannot be verified.”
This brings tools like Gödel’s incompleteness, semantic closure, and regulator theory into the discussion in a way that directly addresses why subjective experience feels indubitable even if it’s structurally ungrounded.
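For readers who want the formal result behind that claim spelled out, the Gödel–Rosser form of the first incompleteness theorem is the usual anchor (my paraphrase, not the post’s wording); whether a brain satisfies its hypotheses is exactly what is contested downthread.

```latex
% Goedel-Rosser first incompleteness theorem (standard paraphrase):
% if T is a consistent, recursively axiomatized theory interpreting
% enough arithmetic (Robinson's Q), then some sentence G_T is
% undecidable in T: neither provable nor refutable.
\[
  T \supseteq \mathsf{Q},\ T \ \text{consistent and recursively axiomatized}
  \;\Longrightarrow\;
  \exists\, G_T \ \bigl( T \nvdash G_T \ \wedge\ T \nvdash \neg G_T \bigr).
\]
```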
So yes, it may sound like illusionism — but it tries to explain why illusionism is inevitable, not just assert it.
That said, I’d genuinely welcome criticism or counterexamples. If it’s just a rebranding, let’s make that explicit. But if there’s a deeper structure here worth exploring, I hope it earns the scrutiny.
Sorry, but isn’t this written by an LLM? milan’s other comments ([1], [2], [3]) are clearly in a different style: the emotional component goes from 9⁄10 to 0⁄10 with no middle ground.
I find this extremely offensive (and I’m kinda hard to offend, I think), especially since I’ve ‘cooperated’ with milan’s wish to point to specific sections in the other comment. LLMs in posts is one thing, but in comments, yuck. It’s like, you’re not worthy of me even taking the time to respond to you.
The guidelines don’t differentiate between posts and comments, but this violates them regardless (and actually the post does as well), since it very much has the stereotypical writing style of an AI assistant, and the comment also seems copy-pasted without a human element at all.
A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of 1 minute per 50 words (enough to read the content several times and perform significant edits), you should not include any information that you can’t verify, haven’t verified, or don’t understand, and you should not use the stereotypical writing style of an AI assistant.
QED
“Since it was written using an LLM” “LLM slop.”
Some of you are soooo toxic.
First of all, try debating an LLM about illusory qualia—you’ll likely find it attributing the phenomenon to self-supervised learning. It has a strong bias toward Emergentism, likely stemming from… I don’t know, humanity’s slight bias towards its own experience.
But yes, I used an LLM for proofreading. I disclosed that, and I am not ashamed of it.
“Standard crackpottery, in my opinion. Humans are not mathematical proof systems.”
That concern is understandable — and in fact, it’s addressed directly and repeatedly in the text. The argument doesn’t claim that humans are formal proof systems in a literal or ontological sense. Rather, it explores how any system capable of symbolic self-modeling (like the brain) inherits formal constraints analogous to those found in expressive logical systems — particularly regarding incompleteness, self-reference, and verification limits.
It’s less about reducing humans to Turing machines and more about using the logic of formal systems to expose the structural boundaries of introspective cognition.
You’re also right to be skeptical — extraordinary claims deserve extraordinary scrutiny. But the essay doesn’t dodge that. It explicitly offers a falsifiable framework, makes empirical predictions, and draws from well-established formal results (e.g. Gödel, Conant & Ashby) to support its claims. It’s not hiding behind abstraction — it’s leaning into it, and then asking to be tested.
And sure, the whole thing could still be wrong. That’s fair. But dismissing it as “crackpottery” without engaging the argument — especially on a forum named LessWrong — seems to bypass the very norms of rational inquiry we try to uphold here.
If the argument fails, let’s show how — not just that. That would be far more interesting, and far more useful.
Rough day, huh? Seriously though, you’ve got a thesis, but you’re missing a clear argument. Let me help: pick one specific thing that strikes you as nonsensical. Then, explain why it doesn’t make sense. By doing that, you’re actually contributing—helping humanity by exposing “long-form gibberish”.
Just make sure the thing you choose isn’t something trivially wrong. The more deeply flawed it is—yet still taken seriously by others—the better.
But your critique of preemptive psychologising is unwarranted: I created a path for quick comprehension. “To quickly get the gist of it in just a few minutes, go to Section B and read: 1.0, 1.1, 1.4, 2.2, 2.3, 2.5, 3.1, 3.3, 3.4, 5.1, 5.2, and 5.3.”
Here’s one section that strikes me as very bad:
At its heart, we face a dilemma that captures the paradox of a universe so intricately composed, so profoundly mesmerizing, that the very medium on which its poem is written—matter itself—appears to have absorbed the essence of the verse it bears. And that poem, unmistakably, is you—or more precisely, every version of you that has ever been, or ever will be.
I know what this is trying to do, but invoking mythical language when discussing consciousness is very bad practice, since it appeals to an emotional response. Also, it’s hard to read.
Similar things are true for lots of other sections here: very unnecessarily poetic language. I guess you can say that this is policing tone, but I think it’s valid to police tone if the tone is manipulative (on top of just making it harder and more time-intensive to read).
Since you asked for a section that’s explicitly nonsense rather than just bad, I think this one deserves the label:
We can encode mathematical truths into natural language, yet we cannot fully encode human concepts—such as irony, ambiguity, or emotional nuance—into formal language. Therefore: Natural language is at least as expressive as formal language.
First of all, if you can’t encode something, it could just be that the thing is not well-defined, rather than that the system is insufficiently powerful.
Second, the way this is written (unless the claim is further justified elsewhere) implies that the inability to encode human concepts in formal languages is self-evident, presumably because no one has managed it so far. This is completely untrue; formal[^1] languages are extremely impractical, which is why mathematicians don’t write any real proofs in them. If a human concept like irony could be encoded, it would be extremely long and way, way beyond the ability of any human to write down. So even if it were theoretically possible, we almost certainly wouldn’t have done it yet, which means that it not having been done yet is negligible evidence of it being impossible.
[^1]: typo corrected from “natural”
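To make “impractical” concrete, here is a standard textbook illustration (my example, not the commenter’s): in a Hilbert-style propositional calculus with axiom schemas K and S plus modus ponens, even the triviality p → p needs a five-line derivation.

```latex
% Hilbert-style derivation of p -> p, using only
%   K: A -> (B -> A)
%   S: (A -> (B -> C)) -> ((A -> B) -> (A -> C))
% and modus ponens (MP).
\begin{align*}
1.\ & p \to ((p \to p) \to p)
    && \text{K with } A := p,\ B := p \to p\\
2.\ & \bigl(p \to ((p \to p) \to p)\bigr) \to \bigl((p \to (p \to p)) \to (p \to p)\bigr)
    && \text{S with } A := p,\ B := p \to p,\ C := p\\
3.\ & (p \to (p \to p)) \to (p \to p)
    && \text{MP on 1, 2}\\
4.\ & p \to (p \to p)
    && \text{K with } A := p,\ B := p\\
5.\ & p \to p
    && \text{MP on 4, 3}
\end{align*}
```

If five lines are needed for p → p, the overhead for real mathematics, let alone a concept like irony, is astronomical, which is the commenter’s point: that nobody writes fully formal proofs in practice says little about what formal languages can express.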
“natural languages are extremely impractical, which is why mathematicians don’t write any real proofs in them.”
I have never seen such a blatant disqualification of oneself. Why do you think you are able to talk about these subjects if you are not versed in proof theory?
Just type it into ChatGPT:
Which one is true:
“natural languages are extremely impractical, which is why mathematicians don’t write any real proofs in them.”
OR
“They do. AND APART FROM THAT language is not impractical, language is too expressive (as in the logical expressivity of second-order logic).”
Research proof theory, type theory, and Zermelo–Fraenkel set theory with the axiom of choice (ZFC) before making statements here.
At the very least, try not to be miserable. Someone who mistakes prose for an argument should not have the privilege of indulging in misery.
The sentence you quoted is a typo; it’s meant to say that formal languages are extremely impractical.
Well, this is also not true, because “practical” as a predicate… is incomplete… meaning it’s practical depending on who you ask.
Talking about “formal” or “natural” languages in a general way is very hard...
The rule is this: any reasoning or method is acceptable in mathematics as long as it leads to sound results.
I’m actually amused that you criticized the first paragraph of an essay for being written in prose — it says so much about the internet today.
There you are — more psychologising.
Now condescension.
Okay I… uhm… did I do something wrong to you? Do we know each other?
We do not know each other. I know nothing about you beyond your presence on LW. My comments have been to the article at hand and to your replies. Maybe I’ll expand on them at some point, but I believe the article is close to “not even wrong” territory.
Meanwhile, I’d be really interested in hearing from those two strong upvoters, or anyone else whose response to it differs greatly from mine.
The statement “the article is ‘not even wrong’” is closely related to the inability to differentiate: Is it formally false? Or is it conclusively wrong? Or, as you prefer, perhaps both?
I am very sure that you will hear from them. You strike me as a person who is great to interact with. I am sure that they will be happy to justify themselves to you.
Everyone loves a person who just looks at something and says… eeeeh gibberish...
Especially if that person is correctly applying pejorative terms.