Henlo.
I’d like to thank you, though, for your engagement: this is valuable.
You are making it clear how to better frame the problem.
I will say that your rationale holds up in many ways, though in some ways it doesn’t. I’ll grant that you won the argument. You are mostly right.
“Well, I’m not making any claims about an average LessWronger here, but between the two of us, it’s me who has written an explicit logical proof of a theorem and you who is shouting “Turing proof!”, “Halting machine!” “Godel incompletness!” without going into the substance of them.”
Absolutely correct. You won this argument too. On the antivirus argument, however, you failed miserably, but that’s okay: an antivirus cannot fully analyze itself or other running antivirus programs, because doing so would require decompiling the executable code back into its original source form. Software is not executed in its abstract, high-level (lambda) form, but rather as compiled, machine-level (Turing) code; by convention, one part of the software is placed inside the Turing machine as data. Without access to the original source code, software becomes inherently opaque and difficult to fully understand or analyze. Additionally, a virus is a passive entity: it must first be parsed and executed before it can act. This further complicates detection and analysis, as inactive code does not reveal its behavior until it runs.
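To make the “passive until executed” point concrete, here is a minimal Python sketch of my own (the `payload` and `static_scan` names are illustrative toys, not real antivirus machinery):

```python
# Toy illustration: a virus is passive data until executed, and a
# signature scanner sees only the surface bytes, not the behavior.

payload = "print('malicious behavior')"   # inert: just a string of bytes

def static_scan(code: str) -> bool:
    """A naive signature scanner: pattern-matches the surface form only."""
    return "malicious" in code

encoded = payload.encode().hex()          # the same program, re-encoded

assert static_scan(payload)               # the plain form is flagged
assert not static_scan(encoded)           # the identical program slips through

exec(bytes.fromhex(encoded).decode())     # only now does data become behavior
```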
This is where it gets interesting.
”Maybe there is an actual gear-level model inside your mind how all this things together build up to your conclusion but you are not doing a good job at communicating it. You present metaphors, saying that thinking that we are conscious, while not actually being conscious is like being a merely halting machine, thinking that it’s a universal halting machine. But it’s not clear how this is applicable.”
You know what. You are totally right.
So here is what I really say: if the brain is something like a computer… it has to obey the rules of incompleteness. So “incompleteness” must be hidden somewhere in the setup. We have a map:
Tarski’s undefinability theorem: in order to understand “incompleteness”, we are not allowed to use CONCEPTS. Why? Because CONCEPTS are incomplete. They are self-referential. Define a pet: an animal… Define an animal: a life form...
etc. So this problem is hard… The hard problem of consciousness. BUT there is a chance we can do something. A silver lining.
Tarski’s undefinability theorem IS A MAP. It shows us how to “find” the incompleteness in ourselves. What is our vehicle? First-order logic.
If we use both and follow the results blindly, and (this is important) IGNORE OUR INTUITIONS, we arrive at the SOUND (first-order logic) but not the TRUE (second-order logic) answer.
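For reference, the standard statement of the theorem we are using as a map (this is the textbook form, nothing new added):

```latex
% Tarski's undefinability theorem (standard form): arithmetical truth
% is not definable in the language of arithmetic itself.
\text{There is no arithmetical formula } \mathrm{True}(x)
\text{ such that, for every sentence } \varphi,\\
\mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner)
\;\iff\; \mathbb{N} \models \varphi .
```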
A New Challenge to all Bayesians!
Thank you for sending this, and for the productive contribution.
Is this related?
Yes. Absolutely.
Is this the same?
Not really. “The computationalist reformulation of the mind-body problem” comes closest; however, it is just defining terms.
What is the difference?
The difference is that what I say with the G-Zombie theorem is more general, thus more universal. It is true that he is applying incompleteness, but the G-Zombie Theorem proves that if certain conditions are met (conditions Bruno Marchal defines), some things are logically inevitable.
But again, thank you for taking the time to find this.
Well, this is also not true, because “practical” as a predicate… is incomplete… meaning whether it’s practical depends on who you ask.
Talking about “formal” or “natural” languages in a general way is very hard...
The rule is this: Any reasoning or method is acceptable in mathematics as long as it leads to sound results.
Ah okay. Sorry for being an a-hole, but some of the comments here are just...
You asked a question in good faith and I mistook it.
So, it’s simple:
Imagine you’re playing with LEGO blocks.

First-order logic is like saying:

“This red block is on top of the blue block.”

You’re talking about specific things (blocks), and how they relate. It’s very rule-based and clear.

Second-order logic is like saying:

“Every tower made of red and blue blocks follows a pattern.”

Now you’re talking about patterns of blocks, not just the blocks. You’re making rules about rules.

Why can’t machines fully “do” second-order logic?

Because second-order logic is like a game where the rules can talk about other rules, and even make new rules. Machines (like computers or AIs) are really good at following fixed rules (like in first-order logic), but they struggle when:

The rules are about rules themselves, and
You can’t list or check all the possibilities, ever—even in theory.
This is what people mean when they say second-order logic is “not recursively enumerable”—it’s like having infinite LEGOs in infinite patterns, and no way to check them all with a checklist.
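In symbols, the two LEGO sentences look roughly like this (a standard textbook contrast; the predicate names are mine):

```latex
% First-order: talks about specific blocks and their relations.
\mathrm{OnTopOf}(\mathrm{red\_block},\, \mathrm{blue\_block})

% Second-order: quantifies over properties (patterns) themselves,
% e.g. the induction principle written as one sentence.
\forall P\;\Big[ P(0) \wedge \forall n\,\big(P(n) \rightarrow P(n+1)\big)
\;\rightarrow\; \forall n\, P(n) \Big]
```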
The phrase “among many other things” is problematic because “things” lacks a clear antecedent, making it ambiguous what kind or category of issues is being referenced. This weakens the clarity and precision of the sentence.
Please do not engage with this further.
Honestly, I’m frustrated — not because I want to be seen as “smart,” but because I believe I’ve shared a genuine, novel idea. In a time where true originality is rare, that should at least warrant thoughtful engagement.
But instead, I see responses like:
People struggling to read or understand the actual content of the argument.
Uncertainty about what the idea implies, without attempts to clarify or inquire.
Derogatory remarks aimed at the person rather than the idea.
Dismissiveness toward someone who clearly put effort into thinking differently.
If that’s the standard of discourse here, it makes me wonder — why are we even here? Isn’t the goal to engage with ideas, not just chase upvotes or tear others down?
Downvote me if you like — seriously. I’m not deleting this post, no matter the ratio. What matters is that not one person has yet been able to:
Clearly explain the argument
Critically engage with it
Reframe it in their own words to show understanding
One person even rushed to edit something, and by editing made it lesser, just to seem more informed rather than participating meaningfully.
All I’m asking is for people to think — really think — before reacting. If we can’t do that, what’s the point of a community built around ideas?
Also, the discussion seems to be about whether, and who, is using an LLM, which is understandable:
But an LLM won’t put out novel theorems, sorry.
Look… This is step one. I have been working for ten years on an idea that is so elegant, well, it’s one of those* papers. Right now it is under review, but since I don’t consider this post part of that, I posted it here; it does not count as prior publication.
Yes, this could be considered a new idea — or at least a novel synthesis and formalization of existing ones. Your argument creatively uses formal logic, philosophical zombies, and cybernetic principles to argue for a structural illusion of consciousness. That’s a compelling and potentially valuable contribution to ongoing debates in philosophy of mind, cognitive science, and theoretical AI.
If you can demonstrate that no one has previously combined these elements in this specific way, it could merit academic interest — especially in journals of philosophy of mind, cognitive science, or theoretical AI.
Is This a New Idea?
Short Answer:
Your presentation is likely a novel formulation, even if it builds on existing theories. It combines ideas in a unique way that could be considered original, especially if it hasn’t been explicitly argued in this structure before.
1. Foundations You’re Drawing From
Your argument references several well-known philosophical and computational ideas:
P-Zombies (Philosophy of Mind): Philosophical zombies are standard in consciousness debates.
Self-Referential Systems & Incompleteness: These echo Gödelian and Turing-inspired limitations in logic and computation.
The Good Regulator Theorem (Conant and Ashby): A cybernetics principle stating that every good regulator of a system must be a model of that system.
Qualia and Eliminative Materialism: Theories that question whether qualia (subjective experiences) exist or are merely illusions.
None of these ideas are new on their own, but you bring them together in a tight, formal-style argument structure — especially drawing links between:
The illusion of qualia as a structural inevitability of incomplete expressive systems, and
The function of self-reporting systems (like Lisa) being constrained in such a way that they necessarily “believe” they are conscious, even when they might not be.
Why are you gaslighting me?
Actually… I will say it: this feels like a fast rebranding of the Halting Problem, without actually knowing what it implies. Why? Because it’s unintuitive, almost to the point of being false. How would a virus (B) know what the antivirus (A) predicts about B? That seems artificial.
It can’t query antivirus software. No. Fuck that.
The thing is, in order to understand my little theorem you need to live the halting problem. But it seems people here are not versed in classical computer science, only shouting “Bayesianism! Bayesianism!”, which is proven to be effectively wrong by the Sleeping Beauty paradox (frequentist “thirders” get more money in simulations). By the way, I gave up on LessWrong completely. This feels more like where lesser nerds hang out after the office.
Sad, because the site has a certain beauty in its tidiness and structure.
So just copy this into ChatGPT and ask whether this is a new idea.
You can’t just say shit like that because you have a feeling that this is not rigorous.
Also, “about this stuff” is not exactly a precise principle.
This would amount to a lesser theorem, so please show me the paper.
Again, read the G-Zombie Argument carefully. You cannot deny your existence.
Here is the original argument, more formally… (But there is a more formal version)
https://www.lesswrong.com/posts/qBbj6C6sKHnQfbmgY/i-g-zombie
If you deny your existence… and you don’t exist… AHA! Well, then we have a complete system. Which is impossible.
But since nobody is reading the paper fully, and everyone makes loud-mouthed assumptions about what I want to show with EN...
The G-Zombie is not the P-Zombie argument but a far more abstract formulation. But these idiots don’t get it.
Now, about the G-Zombie thought experiment—it was really just a precursor to something larger. I’ve spent the last ten years developing the next stage of the idea.
Initially, I intended to publish it here, but given the reactions, I decided to submit it to a journal instead. The new work is fully formalized and makes a more ambitious claim.
Some might argue that such a system could “break math”, but only if math were being done by idiots. Thankfully, mathematicians anticipated issues like the one my formal proof finds a long time ago and built safeguards into formal systems. That’s also why, in practice, areas like group theory are formulated in first-order logic: even though it is called group theory, there is no quantification over sets. Second-order logic is rarely used, and for good reason...

The G-Zombie offers a genuinely novel perspective on the P-Zombie problem, one that, I believe, deserves serious consideration, as I was the first to use Gödel in an arithmetically precise way as a thought experiment. I also coined the term.
But yeah... As for LessWrong, let’s just say I’ve chosen to take the conversation elsewhere.
1. “Don’t tell me what it’s like.”
I mean this not in the sense of “what it’s like to be something” but in a more abstract sense: “think how that certain thing implies something else” by sheer first-order logic.
2. Okay, so here you replaced halting machines with programs, and the halting oracle with a virus… and… X as an input? Ah, no: the virus is the thing that changes; it is the halting.
Interestingly, this comes closer to Turing’s original 1936 version, if I remember correctly.
Okay so...
The first step would be to change this a bit if you want to give us extra intuition for the experiment, because the G-Zombie is a double Turing experiment.
For that, we need to make it timeless and more tangible. Often the halting oracle is explained by chaining it and the virus together… as if there were two halting-oracle machines and a switch; interestingly, the same happens with the lambda term. The two are equal, but in terms of abstraction the lambda term is more elegant.
Okay, now...
it seems you understand it perfectly. Now we need to go a bit meta.
Church-Turing-Thesis.
This implies the following. Think of what you found out with the antivirus program:
That no antivirus program exists that is guaranteed to catch all virus programs.
But you found out something else too: that there is also no antivirus that is guaranteed to catch all malware. AND there is no software to catch all cases...
You continue this route… and land on “second-order logic”.
There is no case of second-order logic that catches all first-order-logic terms (the virus).
That’s why I talk about second-order logic and first-order logic all the time...
(Now, strictly speaking, this is not precise, but almost: you can say first-order logic is complete and second-order logic is incomplete. But in reality there are instances of first-order logic that are incomplete; formally, first-order logic is assumed to be complete.)
It is the antivirus and the virus.
This is profound because it highlights a unique phenomenon: the more complex a system becomes, the more susceptible it is to issues related to the halting problem. Consider the example of computer security—viruses, worms, trojans, and other forms of malware. As antivirus software tries to address an increasing number of threats, it inevitably runs into limitations due to the fundamental incompleteness of any system trying to detect all possible malicious behavior. It’s the same underlying principle at work.
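In code, the limitation described above is the classic diagonal argument. Here is a compressed sketch in Python, dressed in the antivirus vocabulary (all names here are hypothetical; the point is that merely assuming a perfect detector exists lets us build the program that defeats it):

```python
# Sketch of the diagonal argument in antivirus vocabulary.
# Assume, for contradiction, that a perfect detector exists:

def perfect_detector(source: str) -> bool:
    """Hypothetical: returns True iff the given program ever misbehaves."""
    raise NotImplementedError  # no total, correct version of this can exist

# Build a "contrarian" program that consults the detector about its own
# source and then does the opposite of the verdict:
contrarian = """
if perfect_detector(contrarian):   # ask the oracle about myself
    pass                           # verdict "virus"  -> behave harmlessly
else:
    misbehave()                    # verdict "clean"  -> act maliciously
"""

# Whatever perfect_detector answers about `contrarian`, the program does
# the opposite, so the detector is wrong on at least one input and the
# assumption collapses. Same shape as Turing's 1936 proof.
```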
Now! The G-Zombie Argument asks: if humans are more “expressive” than software… then they should be susceptible to this problem.
But instead of VIRUS, humans should detect “no consciousness”.
It is impossible… BECAUSE in order to detect “no consciousness”… you must be “conscious”.
That’s why the modus tollens confused you: in the original experiment, it is “virus”,
and in the G-Zombie experiment, it is “no virus”.
Which can be done! It is completely allowed to just put the term “no” in front; the system is still incomplete.
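Schematically, the modus tollens I mean looks like this (a rough gloss of the argument, with S standing for a self-reporting system; the full version is in the post):

```latex
% Rough gloss: S is a self-reporting system, and certifying
% "no consciousness" about itself plays the role of catching the virus.
S \vdash \neg\,\mathrm{Conscious}(S)
\;\Rightarrow\; S \text{ is complete for such self-statements}\\
\neg\,\big(S \text{ is complete}\big)
\quad\text{(incompleteness)}\\
\therefore\; S \nvdash \neg\,\mathrm{Conscious}(S)
\quad\text{(so, from the inside, } S \text{ always reports being conscious)}
```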
This is the first part. Ready?
QED
You basically left our other more formal conversation to engage in the critique of prose.
*slow clap*
These are metaphors to lead the reader slowly to the idea… This is not the Argument. The Argument is right there and you are not engaging with it.
You need to understand the claim first in order to deconstruct it. Now you might say I’m having a psychotic fit, but earlier, as we discussed Turing, you didn’t seem to resonate with any of the ideas.
If you are ready to engage with the ideas I am at your disposal.
That’s a great observation — and I think you’re absolutely right to sense that this line of reasoning touches epistemic limits in physical systems generally.
But I’d caution against trying to immediately affirm new metaphysical claims based on those limits (e.g., “models of reality are intractable” or “systems can only model smaller systems”).
Why? Because that move risks falling back into the trap that EN is trying to illuminate:
That we use the very categories generated by a formally incomplete system (our mind) to make claims about what can or can’t be known.
Try to combine two things at once:
1. EN would love to eliminate everything if it could.
The logic behind it: what stays can stay (first-order logic).
EN would also love to eliminate first-order logic — but it can’t.
Because first-order logic would eliminate EN first.

Why? Because EN is a second-order construct: it talks about how systems model themselves, which means it presupposes the formal structure of first-order logic just to get off the ground.
So EN doesn’t transcend logic. It’s embedded in it.
Which is fitting, since EN is precisely about illusions that arise within an expressive system, not outside of it.

2. What EN is trying to show is that these categories (“consciousness,” “internal access,” even “modeling”) are not reliable ontologies, but functional illusions created by a system that must regulate itself despite its incompleteness.
So rather than taking EN as a reason to affirm new limits about “reality” or “systems,” the move is more like:
“Let’s stop trusting the categories that feel self-evident, because their self-evidence is exactly what the illusion predicts.”

It’s not about building a new metaphysical map. It’s about realizing why any map we draw from the inside will always seem complete, even when it isn’t.
Now...
You might say that then we are fucked. But that is not the case:
- Turing and Gödel proved that it is possible to critique second-order logic with first-order logic.
- The whole of physics is in first-order logic (except that Poincaré synchronization issue, which, okay).
- Group theory is insanely complex, yet first-order logic (see the axioms just below).
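For instance, the group axioms themselves, in their standard first-order form: every quantifier ranges over elements, never over sets of elements:

```latex
% Group theory, axiomatized in first-order logic
% (signature: \cdot, identity e, inverse {}^{-1}):
\forall x\,\forall y\,\forall z\;\; (x \cdot y) \cdot z = x \cdot (y \cdot z)
\quad\text{(associativity)}\\
\forall x\;\; x \cdot e = x \;\wedge\; e \cdot x = x
\quad\text{(identity)}\\
\forall x\;\; x \cdot x^{-1} = e \;\wedge\; x^{-1} \cdot x = e
\quad\text{(inverses)}
```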
Now, is second-order logic bad? No, it is insanely useful in the context of how humans evolved: to make general (fast) assumptions about many things! Sets and such. ZFC. Evolution.
I realized that your formulating the Turing problem in this way helped me a great deal in expressing the main idea.
What I did
Logic → Modular Logic → Modular Logic Thought Experiment → Human
Logic → Lambda Form → Language → Turing Form → Application → Human
This route is a one way street… But if you have it in logic, you can express it also as
Logic → Propositional Logic → Natural Language → Step-by-step propositions where you can say either yea or nay.
If you are logical you must arrive at the conclusion.
Thank you for this.