I think I disagree. I’ll add some precision to point out how. Happy to hear if I’m missing something.
E is Bayesian evidence of X if E is more likely to happen when X is true than when it’s not; i.e., P(E | X) > P(E | ¬X).
If Bob says “As a policy, I’m not going to check whether I’m running an Omega-C deception”, that statement is equally likely whether Bob is running a deception or not. (Hence the “as a policy” part.) It happens with certainty in both cases. So from Omega-C’s point of view, it’s not Bayesian evidence that distinguishes between the two versions of Bob.
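To make the likelihood-ratio point concrete, here’s a toy Bayes update. The numbers are mine, purely for illustration (including the 50% prior): when the statement happens with certainty under both hypotheses, the likelihood ratio is 1 and Omega-C’s posterior equals its prior.

```python
# Toy Bayes update: Bob's policy statement as (non-)evidence.
# Assumption (mine, for illustration): Omega-C starts with a 50% prior
# that Bob is running a deception.

def posterior(prior, p_e_given_x, p_e_given_not_x):
    """P(X | E) via Bayes' rule, given P(E | X) and P(E | ~X)."""
    num = p_e_given_x * prior
    return num / (num + p_e_given_not_x * (1 - prior))

prior = 0.5

# "As a policy, I'm not going to check" happens in both cases:
# P(E | deceiving) = P(E | honest) = 1, so the likelihood ratio is 1.
print(posterior(prior, 1.0, 1.0))   # ~0.5: no update at all

# Contrast: a statement twice as likely from a deceiver *is* evidence.
print(posterior(prior, 0.8, 0.4))   # ~0.667: Omega-C updates toward deception
```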
It would be evidence if the choice were made from a stance of “Oh shoot, that might be self-deception! Well, I’m now going to adopt the no-looking policy so that I don’t have to check it!” Then yeah, sure, that’s clearly evidence, which is precisely why that method of deciding not to look can’t be the one that works.
The policy of always deeply investigating oneself can produce evidence for Omega-C, but the act of choosing that policy might not. Choosing the policy not to look just doesn’t produce evidence. Or at least that’s how it seems to me.
The fact that Bob has this policy in the first place is more likely when he’s being self-deceptive. Sure, some people will glomarize even when they have nothing to hide, but more often it will be the result of Bob noticing that he’s the sort of person who might have something to hide.
It’s a general rule that if E is strong evidence for X, then ~E is at least weak evidence for ~X.
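As a numeric sketch of that rule (numbers are made up, not from the discussion): observing E pushes the posterior up, observing ~E pushes it down, and the two movements average back to the prior, which is conservation of expected evidence.

```python
# If E is evidence for X, then ~E is evidence for ~X.
# Illustrative numbers only.

prior = 0.5
p_e_given_x, p_e_given_not_x = 0.9, 0.5        # E is likelier under X

p_e = p_e_given_x * prior + p_e_given_not_x * (1 - prior)      # ~0.7
post_x_given_e = p_e_given_x * prior / p_e                      # ~0.643, up from 0.5
post_x_given_not_e = (1 - p_e_given_x) * prior / (1 - p_e)      # ~0.167, down from 0.5

# Conservation of expected evidence: the posterior, averaged over
# whether E occurs, equals the prior.
avg = post_x_given_e * p_e + post_x_given_not_e * (1 - p_e)
print(avg)   # ~0.5, i.e. the prior
```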
> The fact that Bob has this policy in the first place is more likely when he’s being self-deceptive.
A fun fictional example here is Bester’s *The Demolished Man*: how do you plan and carry out an assassination when telepaths are routinely eavesdropping on your mind? The protagonist visits a company musician, requesting a musical earworm for a company song to help the workers’ health or something; alas! the earworm gets stuck in his head, and so all any telepath hears is the earworm. And you can’t blame a man for having an earworm stuck in his head, now can you? He has an entirely legitimate reason for it to be there, which ‘explains away’ the evidence for the deception hypothesis that his telepathic immunity would otherwise provide.
> The fact that Bob has this policy in the first place is more likely when he’s being self-deceptive.
I don’t know if that’s true. It might be. But some possible counterpoints:
- People can distrust systems that demand they check. “You have nothing to fear if you have nothing to hide” can get a response of “No” even from people who have nothing to hide.
- If someone subconsciously believes they can pull off the illusion of honestly looking while in fact finding nothing, then being self-deceptive makes them *more* likely to choose to look.
- Someone with a policy of not looking might be better at making their own self-deception unnecessary.
> …more often it will be the result of Bob noticing that he’s the sort of person who might have something to hide.
Sure, that way of deciding doesn’t work.
Likewise, if you decide you’re going to dig into possible sources of self-deception because you think it’s unlikely that you have any, then you can’t do this trick either.
The hypothetical respect for any self-deception that might be there needs to be unconditional on its existence. Otherwise, for the reason you say, it doesn’t work as well.
(…with some caveats about how people are imperfect telepaths, so some fuzz in implementation here is in practice fine.)
That said, I think you’re right that if Omega-C is looking only at the choice of whether to look or not, then yes, Omega-C would be right to take that choice as evidence of a deception.
But the whole point is that Omega-C can read what conscious processes you’re using, and can see that you’re deciding for a glomarizing reason.
That’s why *why* you choose what you do matters so much here, not just *what* you choose.
> It’s a general rule that if E is strong evidence for X, then ~E is at least weak evidence for ~X.
Conservation of expected evidence is what makes looking relevant. It’s not what makes deciding to look relevant.
If I decide to appease Omega-C by looking, and then I find that I’m self-deceiving, the fact that I chose to look gets filtered. The fact that this is possible is why not finding evidence can matter at all. Otherwise it’d just be a charade.
Relatedly: I have a coin in my pocket. I don’t feel like checking it for bias. Does that make it more likely that the coin is biased? Maybe. But if I could magically show you that I’m not looking because I honestly do not care one way or the other and don’t want to waste the effort, and it doesn’t affect me whether it’s biased or not… then you can’t use my disinterest in checking the coin for bias as evidence of some kind of subconscious deception about the coin’s bias. I’m just refusing to do things that would inform you of the coin’s possible bias.
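The coin point can be sketched numerically (all probabilities are mine, purely illustrative): my skipping the check is uninformative about the bias exactly when the probability of skipping doesn’t depend on the bias.

```python
# Coin sketch: "not checking" carries information about the bias only if
# the decision to skip depends on the bias. Numbers are illustrative.

def p_biased_given_skip(prior_biased, p_skip_given_biased, p_skip_given_fair):
    """P(biased | I skipped checking), via Bayes' rule."""
    num = p_skip_given_biased * prior_biased
    return num / (num + p_skip_given_fair * (1 - prior_biased))

# Pure disinterest: the skip probability is the same either way, so
# observing the skip leaves your belief at the prior:
print(p_biased_given_skip(0.1, 0.95, 0.95))   # ~0.1, unchanged

# But if people holding biased coins were likelier to avoid checking,
# the skip *would* be evidence of bias:
print(p_biased_given_skip(0.1, 0.95, 0.60))   # ~0.15, updated upward
```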
If this kind of reasoning weren’t possible, then it seems to me that glomarization wouldn’t be possible.