Part of your post assumes a contradiction. If forced to choose between 1⁄2 and zero, then zero no longer means "can't possibly happen" and 1⁄2 no longer means "will happen 50% of the time."
The only way your analysis works is if you are forced to choose between zero and 1⁄2 knowing that in the future you will forget that your choices were limited to zero and 1⁄2.
When I’m choosing between approximations, I haven’t actually started using the approximation yet. I’m predicting, based on the full knowledge I have now, the cost of replacing that full knowledge with an approximation.
So to calculate the expected utility of changing my beliefs (to the approximation), I use the approximation to determine my hypothetical actions, but I use my current beliefs as the probabilities in the expected utility calculation.
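A minimal sketch of that calculation in Python, with entirely made-up numbers (the probability, the preparation cost, and the loss from a strike are hypothetical placeholders, not anything from the post):

```python
# Hypothetical numbers: suppose my current belief that a strike occurs
# is p = 1e-7, and the candidate approximate beliefs are 0 and 0.5.
p_true = 1e-7
cost_of_preparing = -100            # utility of spending effort on preparations
loss_if_unprepared_strike = -1_000_000

def action_under(belief):
    """Pick whichever action the approximate belief would recommend."""
    eu_prepare = cost_of_preparing                   # paid whether or not a strike happens
    eu_ignore = belief * loss_if_unprepared_strike   # paid only if a strike happens
    return "prepare" if eu_prepare > eu_ignore else "ignore"

def expected_utility(action, belief):
    """Score an action -- always using my current belief, not the approximation."""
    if action == "prepare":
        return cost_of_preparing
    return belief * loss_if_unprepared_strike

for approx in (0.0, 0.5):
    act = action_under(approx)              # the hypothetical action the approximation yields
    eu = expected_utility(act, p_true)      # evaluated with my current beliefs
    print(f"approximate belief {approx}: act = {act}, expected utility = {eu}")
```

With these made-up numbers, rounding the belief down to 0 costs only 0.1 in expectation, while rounding up to 0.5 costs 100, which is the sense in which the cheaper approximation wins.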
So you are assuming that in the future you will be forced to act on the belief that the probability can't be anything other than 0 or 1⁄2, even though in the future you will know that the probability is almost certainly something other than 0 or 1⁄2.
But isn’t this the same as assuming that in the future you will forget that your choices had been limited to zero and 1/2?
Hrm, I think you might be ignoring the cost of actually doing the calculations, unless I'm missing something. The value of simplifying assumptions comes from how much easier they make a situation to model. I guess the question would be: is the effort saved by modeling this thing with an approximation rather than exact figures worth the risk introduced by that approximation? Especially if you have to build many models like this, or model a lot of other factors as well, such as trying to sort out the best ways to spend your time overall, including possible meteorite preparations.
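One way to make that tradeoff concrete, again with hypothetical numbers (the modeling-effort costs and the expected loss from the approximation are placeholders):

```python
# Hypothetical bookkeeping: compare working with exact figures against
# working with the approximation, counting modeling effort as a cost.
effort_cost_exact = 50                 # effort to model with exact figures
effort_cost_approx = 5                 # effort to model with the approximation
expected_loss_from_approx = 0.1        # e.g. the cost of acting as if p were 0

use_approx = effort_cost_approx + expected_loss_from_approx < effort_cost_exact
print("use the approximation" if use_approx else "do the exact calculation")
```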
It seems to me you are using the wrong wording. In contrast to the epistemic rationalist, the instrumental rationalist does not "gain" any "utility" from changing his beliefs; he gains utility from changing his actions. Since he can either prepare or not prepare for a meteorite catastrophe, and not "half prepare", I think the numbers you should choose are 0 and 1, not 0 and 0.5.
I'm not entirely sure what different numbers that would yield, but I think it's worth mentioning.
Why does it sound more like 1 than .5? If I believed the probability of my home getting struck by a meteorite was as high as .5, I would definitely make preparations.
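A quick back-of-the-envelope check with hypothetical costs makes the point: at a believed probability of .5, the expected loss from not preparing swamps any plausible preparation cost.

```python
# Hypothetical figures: preparing costs 100 units of utility; an
# unprepared strike costs 1,000,000.
p = 0.5
cost_of_preparing = 100
loss_if_unprepared_strike = 1_000_000

expected_loss_if_ignored = p * loss_if_unprepared_strike   # 500,000
print(expected_loss_if_ignored > cost_of_preparing)        # True, so prepare
```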