Like cousin_it, I’m assuming no mind tampering is involved, only evidence tampering.
Is it sane for Bob and Daisy to be in such a positive or negative feedback loop? How is this resolved?
I don’t think the feedback loops exist. Bob saying “the fact that I believe in Bright is evidence of Bright’s existence” is double-counting the evidence; deducing “and therefore, Bright exists” doesn’t bring in any new information.
It’s not that different from saying “I believe it will rain tomorrow, and the fact that I believe that is evidence that it will rain tomorrow, so I’ll increase my degree of belief. But wait, that makes the evidence even stronger!”.
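A minimal numerical sketch of the double-counting (the prior and likelihood ratio are made-up numbers, just for illustration): once Bob has conditioned on the evidence, his belief is a deterministic function of that same evidence, so treating the belief as fresh evidence re-applies the same likelihood ratio a second time.

```python
# Made-up numbers: Bob's prior on Bright and the evidence's likelihood ratio.
prior = 0.5            # P(Bright)
likelihood_ratio = 3.0  # P(evidence | Bright) / P(evidence | no Bright)

# Correct single update, in odds form of Bayes' rule:
odds = (prior / (1 - prior)) * likelihood_ratio
posterior = odds / (1 + odds)  # 0.75

# Bob's belief "0.75" is determined by the evidence he already conditioned on,
# so given that evidence it carries no further information (its likelihood
# ratio is 1). Multiplying the original ratio in again double-counts:
double_counted_odds = odds * likelihood_ratio
double_counted = double_counted_odds / (1 + double_counted_odds)  # 0.9, and nothing stops a third pass
```

Iterating the mistake drives the "posterior" toward 1 with no new observations, which is exactly the spurious feedback loop.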
If Bob and Daisy already took the evidence provided by their beliefs into account, how does this affect my own updating? Should I take it into account regardless, or not at all, or to a smaller degree?
Just ignore the whole “belief in Dark is evidence against Dark” thing; Daisy already took that information into account when determining her own belief, and you don’t want to double-count it.
Treat it the same way you’d treat hearing Bob tell you that in Faerie the sky is blue, and Daisy telling you that in Faerie the sky is green.
It’s not that different from saying “I believe it will rain tomorrow, and the fact that I believe that is evidence that it will rain tomorrow, so I’ll increase my degree of belief. But wait, that makes the evidence even stronger!”.
This is completely different. My belief about the rain tomorrow is in no way evidence for actual rain tomorrow, as you point out—it’s already factored in. Tomorrow’s rain is in no way able to affect my beliefs, whereas a sorcerer can, even without mind tampering. He can, for instance, manufacture evidence so as to mislead me, and if he is sufficiently clever, I’ll be misled. But I am also aware that my belief state about sorcerers is not as reliable because of possible tampering.
(Here, by “me”, I mean a person living in Faerie, not “me” as in the original post.)
He can, for instance, manufacture evidence so as to mislead me, and if he is sufficiently clever, I’ll be misled. But I am also aware that my belief state about sorcerers is not as reliable because of possible tampering.
It seems you can calculate P(evidence | Dark) by taking Dark’s tampering into account (basically, he’ll try to push that value as close as possible to P(evidence | no Dark)), and update based on that. Your belief may not be reliable in that you may still be wrong, but it already takes all the information you have (i.e. P(evidence | Dark)) into account.
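The same point as a sketch, with made-up numbers: if Dark’s tampering is priced into the likelihood P(evidence | Dark), a single Bayesian update uses all available information, and successful tampering shows up as the posterior barely moving from the prior.

```python
# Made-up numbers for illustration.
prior_dark = 0.5

# How likely this evidence would be if there were no Dark:
p_evidence_given_no_dark = 0.10

# Dark manufactures evidence so that P(evidence | Dark) is pushed as close
# as he can manage to P(evidence | no Dark); suppose he achieves 0.12.
p_evidence_given_dark = 0.12

# Bayes' rule, with the tampering already folded into the likelihood:
posterior_dark = (p_evidence_given_dark * prior_dark) / (
    p_evidence_given_dark * prior_dark
    + p_evidence_given_no_dark * (1 - prior_dark)
)
# posterior_dark ≈ 0.545: close to the 0.5 prior, because the tampering
# made the evidence nearly uninformative about Dark's existence.
```

You may well still be wrong about Dark after this update, but that unreliability is already reflected in how weakly the posterior departs from the prior, not something to correct for a second time.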