It seems different to me. The difference is that your beliefs come out accurate. It’s easier for people (and less complicated in general intelligences) to force beliefs they know are accurate than beliefs they know are false. I can make my hand somewhat numb by believing that my belief will make it numb through some neural pathway. I can’t make it numb by believing God will make it numb. Parts of me that I can’t control say “But that’s not true!”
Also, if you engage in self-deception, you have to work hard to quarantine the problem and correct it after the self-delusion is no longer necessary. On top of that, if your official belief node is set one way, but the rest of you is set up to flip it back once Omega goes away, who’s to say Omega won’t decide to ignore your Official Belief node in favor of the fact that you’re ready to change it?
If you believe that you’ll win a lottery, and you win, it doesn’t make your belief correct. Correctness comes from lawful reasoning, not isolated events of conclusions coinciding with reality. A correct belief is one that the heuristics of truth-seeking should assign.
Here, the agent doesn’t have reasons to expect that the box will likely contain money; it can’t know that apart from deciding it to be so, and the deciding is performed by other heuristics. Of course, after the other heuristics have decided, the truth-seeking heuristic would agree that there are reasons to expect the box to contain the money, but that’s not what causes the money to be brought into existence; it’s a separate event of observing that this fact has been decided by the other heuristics.
First, there is a decision to assign a belief (that money will be in the box) for reasons other than its being true, which is contrary to the heuristics of correctness. Second, with that decision in place, there are now reasons to believe that money will be in the box, so the heuristic of correctness can back the belief-assignment, but it would have no effect on the fact it’s based upon.
So the coincidence here is not significant to the process of controlling the content of the box. Rather, it’s an incorrect post-hoc explanation of what has happened. The variant where you are instead required to believe that the box will always be empty removes this equivocation and clarifies the situation.
> If you believe that you’ll win a lottery, and you win, it doesn’t make your belief correct. Correctness comes from lawful reasoning, not isolated events of conclusions coinciding with reality. A correct belief is one that the heuristics of truth-seeking should assign.
It’s semantics, but in common usage the “correct belief to have given evidence X” is different from the belief “turning out to be correct”, and I think it’s important to have a good word for the latter.
Either way, I said “accurate” and was referring to it matching the territory, not to how it was generated.
> First, there is a decision to assign a belief (that money will be in the box) for reasons other than its being true, which is contrary to the heuristics of correctness. Second, with that decision in place, there are now reasons to believe that money will be in the box, so the heuristic of correctness can back the belief-assignment, but it would have no effect on the fact it’s based upon.
That’s one way to do it, in which case it is pretty similar to forging beliefs that don’t end up matching the territory. The only difference is that you’re ‘coincidentally’ right.
It’s also possible to use your normal heuristics to determine what would happen conditional on holding different beliefs (which you have not yet formed). After finding the set of beliefs that are ‘correct’, choose the most favorable correct belief. This way there is never any step where you choose something that is predictably incorrect. And you don’t have to be ready to deal with polluted belief networks.
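Here’s a minimal sketch of what I mean, in Python. The world model (Omega fills the box iff you genuinely hold the belief) and the payoff numbers are toy assumptions I made up to make the selection step concrete, not part of the original problem:

```python
# Minimal sketch: pick among beliefs that would be *correct if held*,
# instead of adopting a belief known to be false and quarantining it later.
# The world model and payoffs below are toy assumptions.

def predicted_outcome(belief: bool) -> bool:
    """Stand-in for your normal heuristics, run conditional on holding `belief`.
    Toy assumption: Omega fills the box iff you genuinely believe it will."""
    return belief

def utility(box_has_money: bool) -> int:
    """Hypothetical payoffs: money in the box is worth 1000, an empty box 0."""
    return 1000 if box_has_money else 0

# Candidate beliefs about "the box will contain money".
candidates = [True, False]

# Keep only beliefs that come out true given that you hold them -- at no step
# do you endorse something predictably incorrect.
correct_if_held = [b for b in candidates if predicted_outcome(b) == b]

# Choose the most favorable belief among the correct-if-held ones.
best = max(correct_if_held, key=lambda b: utility(predicted_outcome(b)))
print(best)  # True: believe the box will contain money, and it will
```

In this toy model both candidate beliefs are self-fulfilling, so both survive the correctness filter and the choice comes down to utility, which is exactly the “choose the most favorable correct belief” step.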
Am I missing something?