Value of information calculations say that the decision is not likely enough to change to make further investigation worth it.
If you think it’s a good idea to do a test when the VoI calculation says the test has negative expected value (its cost outweighs the value of the information), either you’re doing the VoI calculation wrong, or you’re mistaken about it being a good idea. I think another way to look at this is: “if the VoI calculation says the value of the test is small, that means you should still do it if the cost of the test is also small.”
For example, I have the habit of trying to report two pieces of information about dates (“Saturday the 20th,” for example), and when I see a date like that I pull out a calendar to check. (Turns out the 20th is a Sunday this month.) This makes it easier for me to quickly catch others’ mistakes, and for others to quickly catch mine. Building that sort of redundancy into a system seems like a good idea, and I think the VoI numbers agree.
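To make the “value of the test versus cost of the test” comparison concrete, here is a minimal sketch. Everything in it (the two states, the payoff table, the prior, and the test cost) is invented for illustration, not taken from the discussion: compute the best expected value you can get deciding now, the expected value you would get if a test revealed the true state first, and run the test only when the difference beats its cost.

```python
# Minimal value-of-information sketch. Every number here (the two states, the
# payoffs, the prior, the test cost) is invented purely for illustration.

def expected_value(payoffs, probs):
    """Expected value of one action: sum of payoff * probability over states."""
    return sum(p * v for p, v in zip(probs, payoffs))

prior = [0.8, 0.2]  # prior probability of state 0 vs. state 1

# Payoff of each action in each state.
actions = {
    "act_now":  [10.0, -30.0],
    "hold_off": [0.0, 0.0],
}

# Best EV if we decide right now, with no test.
ev_without_test = max(expected_value(p, prior) for p in actions.values())

# EV if a (perfect) test first told us the true state: in each state we would
# pick the best action for that state, weighted by how likely the state is.
ev_with_test = sum(
    prob * max(p[state] for p in actions.values())
    for state, prob in enumerate(prior)
)

voi = ev_with_test - ev_without_test  # value of (perfect) information
test_cost = 1.0

print(f"VoI = {voi:.1f}, test cost = {test_cost:.1f}")
print("Do the test" if voi > test_cost else "Skip the test")
```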
One of us may have dropped the “naive” modifier from the VOI. I meant that doing a straightforward VOI calculation, the way you would have learned to by, e.g., implementing CFAR’s advice, would not automatically include the value of simpler models, self-calibration, and other general-utility effects that are negligible in the current decision but significant when added up outside the current context.
Also, good example of the value of redundant information. I agree that a well done VOI should catch that, but I assert that this has to be learned.
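As a toy illustration of the claim about what a straightforward VOI leaves out, the sketch below adds explicit terms for the general-utility effects mentioned above (calibration practice, simpler models, the redundancy habit). All the numbers are invented, and whether those extra terms are really large enough to change the answer is exactly what the rest of the exchange debates.

```python
# Toy comparison of a single-decision ("naive") VoI with one that also counts
# the general-utility effects mentioned above. All numbers are invented.

naive_voi = 0.05   # value of the check for the current decision alone
check_cost = 0.20  # cost of, e.g., pulling out the calendar

# Spillover terms a straightforward calculation would not automatically include:
calibration_practice = 0.05  # better-calibrated confidence in later decisions
simpler_models = 0.03        # keeping your model of the situation simple and correct
redundancy_habit = 0.10      # maintaining the habit that catches mistakes, yours and others'

broad_voi = naive_voi + calibration_practice + simpler_models + redundancy_habit

print(f"Naive VoI minus cost: {naive_voi - check_cost:+.2f}")  # looks like 'skip it'
print(f"Broad VoI minus cost: {broad_voi - check_cost:+.2f}")  # comes out positive
```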
I meant that doing a straightforward VOI calculation, the way you would have learned to by, e.g., implementing CFAR’s advice, would not automatically include the value of simpler models, self-calibration, and other general-utility effects that are negligible in the current decision but significant when added up outside the current context.
Hm. I’m not sure I agree with this, but that’s partly because I’m not sure exactly what you’re saying. A charitable read is that, because of a psychological quirk, there’s reason to expect overconfidence when considering each situation individually and more correct confidence when considering all situations together. An uncharitable read is that you can have a recurring situation where Policy A chooses an option with negative EV every time, and Policy B chooses an option with positive EV every time, but Policy A has a higher total EV than Policy B. (This can only happen with weird dependencies between the variables and naive EV calculations.)
I do agree that the way to urgify information-seeking and confusion-reducing actions and goals is to have a self-image of someone who gets things right, and to value precision and not just calibration, and that this is probably more effective in shifting behavior and in making implicit VoI calculations come out correctly.
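The parenthetical in the “uncharitable read” can be checked with a quick simulation, using invented payoffs: when rounds are independent and value simply adds up, per-round expected values sum to the total (linearity of expectation), so a policy that takes a negative-EV option every time cannot come out ahead of one that takes a positive-EV option every time.

```python
# Quick check of the parenthetical above: with independent, additive rounds,
# total EV is just the sum of per-round EVs, so a policy that is negative-EV
# every round stays behind one that is positive-EV every round.
# Payoffs and probabilities are invented.
import random

random.seed(0)
N = 100_000

def policy_a():
    # Win 0.5 with probability 0.4, lose 1.0 otherwise: EV = -0.4 per round.
    return 0.5 if random.random() < 0.4 else -1.0

def policy_b():
    # Win 1.0 with probability 0.6, lose 0.5 otherwise: EV = +0.4 per round.
    return 1.0 if random.random() < 0.6 else -0.5

avg_a = sum(policy_a() for _ in range(N)) / N
avg_b = sum(policy_b() for _ in range(N)) / N
print(f"Policy A average per round: {avg_a:+.3f}")
print(f"Policy B average per round: {avg_b:+.3f}")
```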
Also, good example of the value of redundant information. I agree that a well done VOI should catch that, but I assert that this has to be learned.
I certainly learned it the hard way!
I think this is an outside view/inside view distinction. By “straightforward VOI” I think we’re talking about an inside view VOI. So the thesis here could be restated as “outside view VOI is usually higher than inside view VOI, especially in situations with lots of uncertainty.”
EDIT: Now that I’m thinking about it, I bet that could be formalized.
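One toy direction such a formalization might take, with invented payoffs and priors: compute the value of a perfect test under a confident “inside view” prior and under a more uncertain, outside-view-tempered prior. In this particular setup the more uncertain prior assigns the test a much higher value, matching the restated thesis; this is an illustration under specific assumptions rather than a general proof.

```python
# Toy illustration of "outside view VOI is usually higher than inside view VOI
# under lots of uncertainty". Payoffs and priors are invented for this sketch.

def value_of_perfect_info(prior, actions):
    """VoI of learning the true state before choosing, for a given prior."""
    ev_now = max(sum(p * v for p, v in zip(prior, payoffs)) for payoffs in actions.values())
    ev_informed = sum(
        prob * max(payoffs[s] for payoffs in actions.values())
        for s, prob in enumerate(prior)
    )
    return ev_informed - ev_now

actions = {"act_now": [10.0, -30.0], "hold_off": [0.0, 0.0]}

inside_view_prior = [0.95, 0.05]   # confident model of the situation
outside_view_prior = [0.70, 0.30]  # tempered by how often such models are wrong

print(f"Inside-view VoI:  {value_of_perfect_info(inside_view_prior, actions):.1f}")
print(f"Outside-view VoI: {value_of_perfect_info(outside_view_prior, actions):.1f}")
```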