People can counteract that trick and other similar tricks by constantly regenerating their beliefs from their original prior and remembered evidence. Can you make a more watertight model?
I think we can combine your [cousin_it’s] suggestion with MrMind’s for an Option 2 scenario.
Suppose Bob finds that he has a stored belief in Bright with an apparent memory of having based it on some evidence A₁, but no memory of what A₁ was. That stored belief does constitute some small evidence in favor of Bright existing, since Bob is at least somewhat more likely to have formed it in worlds where Bright exists.
But if Bob then goes out in search of evidence about whether Bright exists, and finds some evidence A₂ in favor, he is unable to know whether it is the same evidence he had forgotten or genuinely new evidence. Put another way, Bob can't tell whether A₁ and A₂ are independent. I suppose the ideal reasoner's response would be to assign a probability distribution over the range from full independence to full dependence, and to carry out any belief updates with that distribution taken into account.
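For concreteness, here is a minimal sketch of that update in Python, under assumptions of my own: I collapse the continuous distribution over dependence into a single weight w_dep (Bob's credence that A₂ is just A₁ rediscovered), and the function name, prior, and likelihoods are all made-up illustrative numbers, not anything from the original discussion.

```python
# Toy model: marginalize over "A2 is the same evidence as A1" vs.
# "A2 is new evidence" before updating on both. All numbers are
# illustrative assumptions.

def posterior_bright(p_prior, lik1, lik2, w_dep):
    """P(Bright | stored evidence A1, new evidence A2).

    lik1, lik2 -- (P(evidence | Bright), P(evidence | no Bright)) pairs
    w_dep      -- Bob's credence that A2 is just A1 rediscovered
    """
    # Fully dependent: the evidence counts only once; use the new,
    # concrete A2 rather than the half-remembered A1.
    dep_b, dep_nb = lik2
    # Fully independent: the two pieces of evidence multiply.
    ind_b, ind_nb = lik1[0] * lik2[0], lik1[1] * lik2[1]
    # Marginalize over the dependence hypothesis, then apply Bayes.
    like_b = w_dep * dep_b + (1 - w_dep) * ind_b
    like_nb = w_dep * dep_nb + (1 - w_dep) * ind_nb
    joint_b = p_prior * like_b
    return joint_b / (joint_b + (1 - p_prior) * like_nb)

# A 1% prior on Bright; each piece of evidence is 3x likelier under
# Bright than under no-Bright.
lik = (0.3, 0.1)
print(posterior_bright(0.01, lik, lik, w_dep=0.9))  # ~0.030: mostly dependent
print(posterior_bright(0.01, lik, lik, w_dep=0.1))  # ~0.056: mostly independent
```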
The distribution should be formed by considering how Bob got the evidence. If Bob found his new evidence A₂ in some easily repeatable way, like hearing it from Bright apologists, then Bob should probably consider dependence on A₁ much more likely than independence, and so he would take into account mostly just A₂ and not A₁. But if Bob got A₂ by some means he probably wouldn't have had access to in the past, like an experiment requiring brand-new technology to perform, then he should probably consider independence more likely, and so he would take A₁ and A₂ into account mostly separately.
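In the sketch above, the apologist case corresponds to the high-w_dep call, where the posterior ends up barely above a single update on A₂ alone, and the new-technology case corresponds to the low-w_dep call, where the posterior moves substantially toward counting A₁ and A₂ as two separate updates.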