Actually… I will say it: this feels like a fast rebranding of the Halting Problem, without actually knowing what it implies.
Being able to rebrand an argument so that it talks about a different problem in a valid way is exactly what it means to understand it: not just repeating the same words in the same context the teacher used, but generalizing it. We can go into the realm of second-order logic and say:
For every property that at least one program has, a universal detector of this property has to itself have this property on at least some input.
Mind you, I wasn’t trying to prove to you that I understand Turing’s proof. Previously, you claimed that your “argument follows exactly the same syllogistic structure (“If this, then that”) as Turing’s proof”. So I showed you what it actually looks like when an argument does follow the same structure. If what you are talking about really were like that, you could simply do the same.
How would a virus (B) know what the antivirus (A) predicts about B?
By running a copy of the antivirus software on the virus’s own code and checking its output. That’s a valid program.
it’s unintuitive, almost to the point of being false.
It can’t query antivirus software.
Why not? Antivirus software is a valid program: it has code that can be executed on some input. You can include execution of that code on a particular input in your own program. If this is not intuitive for you, then maybe it’s you who doesn’t understand Turing’s proof?
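To make this concrete, here is a minimal sketch of the construction (purely illustrative: `claims_malicious` is a hypothetical stand-in for whatever detector you like, not any real antivirus API). The program obtains a copy of its own code, runs the detector on it, and then does the opposite of whatever the detector predicts:

```python
# Minimal sketch: a program runs a candidate "malware detector" on its own
# source code and then does the opposite of what the detector predicts.
# `claims_malicious` is a hypothetical stand-in; substitute any detector you like.
import inspect

def claims_malicious(source: str) -> bool:
    """Toy detector: flags anything whose source mentions delete_files."""
    return "delete_files" in source

def delete_files():
    print("(pretending to do something the detector is supposed to catch)")

def diagonal_program():
    my_source = inspect.getsource(diagonal_program)  # a copy of my own code
    if claims_malicious(my_source):
        return            # detector says "malicious" -> behave harmlessly
    delete_files()        # detector says "clean"     -> misbehave

diagonal_program()  # whichever answer the detector gives, it is wrong about this program
```

Whatever you plug in for `claims_malicious`, this program makes it wrong about at least one input, which is exactly the diagonal step in Turing’s proof.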
But it seems people here are not versed in classical computer science, only shouting “Bayesianism! Bayesianism!”
Well, I’m not making any claims about an average LessWronger here, but between the two of us, it’s me who has written an explicit logical proof of a theorem and you who are shouting “Turing proof!”, “Halting machine!”, “Gödel incompleteness!” without going into the substance of them.
Maybe there is an actual gear-level model inside your mind of how all these things together build up to your conclusion, but you are not doing a good job of communicating it. You present metaphors, saying that thinking we are conscious while not actually being conscious is like being a mere halting machine that thinks it’s a universal halting machine. But it’s not clear how this is applicable. What does it even mean for a machine to have a belief about something? That’s not something Turing defines in his proof. It is possible to formally prove that a universal halting machine is impossible. Can you do the same for having consciousness? If you can, then just do it; that would be very helpful and allow us to talk about the substance, not just vibes.
which is proven to be effectively wrong by the Sleeping Beauty paradox (frequentist “thirders” get more money in simulations).
Oh boy, do I have an opinion. Here you are wrong in three different ways.
The disagreement in Sleeping Beauty is not between bayesianism and frequentism. Thirdism is not frequentist. Halfism is not bayesian. One can make a bayesian argument in favor of thirdism: that you learn you are awake ‘today’, which is, allegedly, new information. Or a frequentist argument in favor of halfism: if we repeat the experiment many times, then in about 1⁄2 of the iterations where at least one awakening happens, the coin is Heads.
In general, frequentism and bayesianism do not have disagreements of this kind. There are situations (an unfair coin toss without any further details) where frequentists claim the probability is undefined while bayesians are ready to assign a numerical value to it, but there are no cases where the two assign different numerical values.
While following Lewisian halfer betting odds in Sleeping Beauty indeed performs terribly, double halfers do no worse than thirders. There are also some cases where thirdism gives very stupid answers, like betting on whether at least one of the awakenings happens on Monday.
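To make the betting point concrete, here is a minimal simulation sketch (my own toy code, assuming the standard setup: fair coin, one awakening on Heads, two on Tails, and a $1 bet on Heads offered at every awakening). Per-awakening bets that are fair under a credence of 1/3 break even, while bets a Lewisian halfer would judge favorable lose money:

```python
import random

def avg_winnings_per_experiment(payout_if_heads: float, n: int = 100_000) -> float:
    """Beauty stakes $1 on Heads at every awakening; she wins `payout_if_heads` if Heads."""
    total = 0.0
    for _ in range(n):
        heads = random.random() < 0.5
        for _ in range(1 if heads else 2):   # one awakening on Heads, two on Tails
            total += payout_if_heads if heads else -1.0
    return total / n

# A per-awakening credence of 1/2 makes a 1.5x payout look favorable, but it loses money:
print(avg_winnings_per_experiment(1.5))  # ~ -0.25 per experiment
# A 2x payout is fair exactly when the per-awakening credence is 1/3, and it breaks even:
print(avg_winnings_per_experiment(2.0))  # ~ 0 per experiment
```

This is the sense in which Lewisian halfer odds perform terribly, while a double halfer who keeps P(Heads) = 1/2 but accounts for the bet being repeated on Tails accepts exactly the same bets as a thirder.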
I realized that your formulating the Turing problem in this way helped me a great deal in figuring out how to express the main idea.
What I did:
Logic → Modular Logic → Modular Logic Thought Experiment → Human
Logic → Lambda Form → Language → Turing Form → Application → Human
This route is a one-way street… But if you have it in logic, you can also express it as
Logic → Propositional Logic → Natural Language → Step-by-step propositions where you can say either yea or nay. If you are logical, you must arrive at the conclusion.
Thank you for this.

I will say that your rationale holds up in many ways; in some ways it doesn’t. I grant that you won the argument. You are mostly right.
“Well, I’m not making any claims about an average LessWronger here, but between the two of us, it’s me who has written an explicit logical proof of a theorem and you who are shouting “Turing proof!”, “Halting machine!”, “Gödel incompleteness!” without going into the substance of them.”
Absolutely correct. You won this argument too.
Regarding the antivirus argument, you failed miserably, but that’s okay: an antivirus cannot fully analyze itself or other running antivirus programs, because doing so would require reverse-compiling the executable code back into its original source form. Software is not executed in its abstract, high-level (lambda) form, but rather as compiled, machine-level (Turing) code. Meaning, one part of the software is placed inside the Turing machine as a convention. Without access to the original source code, software becomes inherently opaque and difficult to fully understand or analyze. Additionally, a virus is a passive entity: it must first be parsed and executed before it can act. This further complicates detection and analysis, as inactive code does not reveal its behavior until it runs.
This is where it gets interesting.
“Maybe there is an actual gear-level model inside your mind of how all these things together build up to your conclusion, but you are not doing a good job of communicating it. You present metaphors, saying that thinking we are conscious while not actually being conscious is like being a mere halting machine that thinks it’s a universal halting machine. But it’s not clear how this is applicable.”
You know what? You are totally right.
So here is what I really say: if the brain is something like a computer… it has to obey the rules of incompleteness. So “incompleteness” must be hidden somewhere in the setup. We have a map:

Tarski’s undefinability theorem: in order to understand “incompleteness”, we are not allowed to use CONCEPTS. Why? Because CONCEPTS are incomplete. They are self-referential. Define a pet: an animal… Define an animal: a life form… etc.

So this problem is hard… the hard problem of consciousness. BUT there is a chance we can do something. A silver lining.
Tarski’s undefinability theorem IS A MAP. It shows us how to “find” the incompleteness in ourselves. What is our vehicle? First-order logic. If we use both and follow the results blindly, and, this is important, IGNORE OUR INTUITIONS, we arrive at the SOUND (first-order logic) but not the TRUE (second-order logic) answer.
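For reference, the precise content of the map (a standard statement of Tarski’s theorem, my paraphrase): truth for the language of arithmetic cannot be defined by any formula of that same language.

```latex
% Tarski's undefinability theorem (standard formulation, paraphrased):
% there is no formula True(x) in the language of first-order arithmetic such that,
% for every sentence \varphi of that language,
\mathbb{N} \models \mathrm{True}(\ulcorner\varphi\urcorner)
  \;\longleftrightarrow\;
\mathbb{N} \models \varphi
```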