Illusionism often takes a functionalist or behavioral route: it says that consciousness is not what it seems, and explains it in terms of cognitive architecture or evolved heuristics. That’s valuable, but EN goes further — or perhaps deeper — by grounding the illusion not just in evolutionary utility, but in formal constraints on self-referential systems.
In other words:
EN doesn’t just say, “You’re wrong about qualia.” It says, “You must be wrong — formally — because any system that models itself will necessarily generate undecidable propositions (e.g., qualia) that feel real but cannot be verified.”
This brings tools like Gödel’s incompleteness, semantic closure, and regulator theory into the discussion in a way that directly addresses why subjective experience feels indubitable even if it’s structurally ungrounded.
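To make that formal claim a bit more concrete, here is a toy sketch of the diagonal construction that underlies both Gödel's and Turing's results (the setup and names like `predicts_true` are mine, purely illustrative, not EN's actual formalism): any total "self-model" that claims to predict a program's behavior can be defeated by a program that consults the model about itself and does the opposite.

```python
# Toy diagonalization in the spirit of Goedel/Turing: a total
# "self-model" that claims to predict what any program does can be
# defeated by a program that asks the model about itself and then
# inverts the answer. All names here are illustrative.

def predicts_true(model, prog):
    """Ask the self-model whether prog(prog) returns True."""
    return model(prog, prog)

def make_diagonal(model):
    """Build a program that does the opposite of what the model predicts about it."""
    def diagonal(_):
        return not predicts_true(model, diagonal)
    return diagonal

# Whatever concrete model we plug in, it is wrong about its own diagonal:
naive_model = lambda prog, arg: True   # claims every program returns True
d = make_diagonal(naive_model)
print(predicts_true(naive_model, d))   # the model predicts: True
print(d(d))                            # the actual behavior: False
```

The sketch shows only that self-reference plus totality forces a blind spot; EN's further claim is that first-person reports about experience sit in exactly such a blind spot.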
So yes, it may sound like illusionism — but it tries to explain why illusionism is inevitable, not just assert it.
That said, I’d genuinely welcome criticism or counterexamples. If it’s just a rebranding, let’s make that explicit. But if there’s a deeper structure here worth exploring, I hope it earns the scrutiny.
Sorry, but isn’t this written by an LLM? Especially since milan’s other comments ([1], [2], [3]) are clearly in a different style: the emotional component goes from 9/10 to 0/10 with no middle ground.
I find this extremely offensive (and I’m kinda hard to offend I think), especially since I’ve ‘cooperated’ with milan’s wish to point to specific sections in the other comment. LLMs in posts is one thing, but in comments, yuck. It’s like, you’re not worthy of me even taking the time to respond to you.
The guidelines don’t differentiate between posts and comments, but this violates them regardless (and actually the post does as well): it very much has the stereotypical writing style of an AI assistant, and the comment also seems copy-pasted, without a human element at all.
A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of 1 minute per 50 words (enough to read the content several times and perform significant edits), you should not include any information that you can’t verify, haven’t verified, or don’t understand, and you should not use the stereotypical writing style of an AI assistant.