“Suppose you ask your friend Naomi to roll a die without letting you see the result… Having rolled the die Naomi must write down the result on a piece of paper (without showing you) and place it in an envelope...
So some people are happy to accept that there is genuine uncertainty about the number before it is thrown (because its existence is ‘not a fact’), but not after it is thrown. This is despite the fact that our knowledge of the number after it is thrown is as incomplete as it was before.”
Risk Assessment and Decision Analysis with Bayesian Networks, Norman Fenton and Martin Neil, Chapter 1
Ah… the term “genuine uncertainty” reminds me of the “no true Scotsman” argument. My point being: there is a reduction in uncertainty between before and after the die is rolled, but that does not mean I should update my belief about the die’s rolled/winning value.
Simply put, my friend Naomi’s beliefs have been updated and the uncertainty in her mind has been eliminated; mine has not. I think the authors were trying to point out that most people conflate these two kinds of uncertainty. It is well worded for rhetoric, but not for pedagogy (in the Feynman sense).
What does it mean to have uncertainty reduction taking place outside of the frame of reference of the person being asked for a decision?
In other terms, the discussion would have been the same if Naomi were replaced with a camera that automatically takes a picture of the result.
You’re assuming humans are rational (as in the AI definition of a rational agent). We’re not. So the knowledge that another person knows something for certain, which we don’t, colours/biases one’s judgement.
I am not saying one should update their beliefs based on another person knowing or not knowing, but that we do anyway, as part of perception. I would argue that we should learn to notice the conflict between the rational side of us and the perceptive side, which registers the other agent’s confidence or lack thereof. I know it is a hand-wavy explanation, but my point stands nevertheless. I agree with the OP that one shouldn’t update their beliefs on the basis of whether Naomi (or the camera) has certainty about the outcome (of the die roll). I would simply say that if it is Naomi, there could be cases where it is rational to update, though it is hard to actually observe or be aware of these updates, and therefore safer not to update.
OK, so “there could be cases where it is rational to update.” How would you do so?
(I can’t understand what an update could reasonably change. You aren’t going to make the probability of any particular side more than 1⁄6, so what is the new probability?)
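As a minimal sketch of the Bayes-rule arithmetic behind that question (all numbers here are illustrative assumptions, not anything from the book): if the evidence you observe, such as “Naomi wrote the number down,” is equally likely no matter which side came up, then the posterior is identical to the prior, and every side stays at 1⁄6.

```python
from fractions import Fraction

def posterior(prior, likelihood):
    """Bayes' rule: P(side | evidence) is proportional to
    P(evidence | side) * P(side), renormalized to sum to 1."""
    unnorm = {s: likelihood[s] * p for s, p in prior.items()}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

# Uniform prior over the six sides of a fair die.
prior = {s: Fraction(1, 6) for s in range(1, 7)}

# "She wrote the number down" happens with the same probability
# whatever the roll was, so the likelihood is constant across sides...
wrote_it_down = {s: Fraction(1, 1) for s in range(1, 7)}

# ...and conditioning on it changes nothing: each side is still 1/6.
print(posterior(prior, wrote_it_down))
```

This is just the observation above in code: an update only moves probability away from 1⁄6 if the evidence is more likely under some sides than others.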
I don’t know either. I could make up a scenario based on a series of die throws, a history of wins and losses, and guesses built on that, but it would simply be conjecture, and still might not produce a reasonable process. However, this discussion reminded me of a scene in HPMOR: the one where Harry’s critic part judges that Miss Camblebunker was not a doctor but an actor (after Bellatrix is broken out of prison).
My claim is that you can’t come up with such a conjecture where it makes sense to change the probability away from 1⁄6. That is why you should not update.
I disagree. I’m not sure it’s provable (maybe with professional poker players?), but if you’ve played the bet many times, you could have picked up on cues* about whether your friend got the same roll (the same number on the die) as last time or not.
* Not sure how verbalizable these cues are (which implies they would be harder to teach to someone else).
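The kind of update being claimed here can be sketched with made-up numbers (the cue reliabilities and the “last roll” below are pure assumptions for illustration): if some tell is even slightly more likely when Naomi rolled the same number as last time, conditioning on that tell pushes that side above 1⁄6.

```python
from fractions import Fraction

def posterior(prior, likelihood):
    """Bayes' rule: P(side | cue) is proportional to
    P(cue | side) * P(side), renormalized to sum to 1."""
    unnorm = {s: likelihood[s] * p for s, p in prior.items()}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

prior = {s: Fraction(1, 6) for s in range(1, 7)}
last_roll = 3  # the number she rolled last game (hypothetical)

# Assumed cue reliabilities: the tell shows up 60% of the time when the
# roll repeats the last one, and 40% of the time when it doesn't.
cue = {s: Fraction(3, 5) if s == last_roll else Fraction(2, 5)
       for s in range(1, 7)}

post = posterior(prior, cue)
print(post[last_roll])  # 3/13, above the uniform 1/6
```

Whether a human can actually extract such a cue reliably is exactly the empirical question in dispute; the sketch only shows that *if* one exists, a rational update away from 1⁄6 is coherent.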
So you should update after you see that she rolled some number and you observe her reaction; but this says nothing about updating again just because she wrote the number down.