In fact, all you know is that your credence of event H is somewhere in the interval [0.4, 0.6]
This really isn’t how I understand credences to work. Firstly, they don’t take ranges, and secondly, they aren’t dictated to me by the background information, they’re calculated from it. This isn’t immediately fatal, because you can say something like:
The coin was flipped one quintillion times, and the proportion of times it came up heads was A, where A lies in the range [0.4, 0.6]
This is something you could actually tell me, and would have the effect that I think is intended. Under this background information X, my credence P(H | X) is just 0.5, but I have that P(H | X, A=a) = a for any a in [0.4, 0.6].
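The marginalisation here can be made concrete with a small sketch. The choice of a uniform prior over A on [0.4, 0.6] is my assumption (any prior symmetric about 0.5 gives the same answer); the point is just that P(H | X) = E[A] = 0.5 falls out of the background information rather than being handed to you as a range.

```python
# Sketch, assuming a uniform prior over the unknown bias A on [0.4, 0.6].
# P(H | X, A=a) = a, so P(H | X) = E[A], which is 0.5 by symmetry.
import numpy as np

a_values = np.linspace(0.4, 0.6, 10001)            # grid over possible biases
prior = np.full_like(a_values, 1 / len(a_values))  # uniform prior over A

p_heads = np.sum(prior * a_values)  # P(H | X) = sum_a P(A=a) * P(H | X, A=a)
print(round(p_heads, 6))  # 0.5
```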
This is more than just a nitpick. We’ve demoted the range [0.4, 0.6] from being a priori privileged as the credence, to just another unknown value in the background information. When you then say “I’m maximising minimum expected utility”, the obvious objection is: why have you chosen to minimise only over A, rather than over any of the other unknown values in the background information? In particular, why aren’t you minimising over the value C, which represents the side the coin lands on?
But of course, if you minimise over all the unknowns, it’s a lot less interesting as a decision framework, because as far as I can tell it reduces to “never accept any risk of a loss, no matter how small the risk or the loss”.
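To make the contrast concrete, here is a sketch under an assumed payoff (my example, not from the original): a bet paying +1 on heads and −1 on tails. Minimising expected utility over A alone still yields a probabilistic verdict, because expected utility is linear in the bias, so the worst case sits at an endpoint of [0.4, 0.6]. Minimising over the outcome C as well collapses to pure worst-case reasoning: any bet with a possible loss scores its loss.

```python
# Assumed payoffs: +1 on heads, -1 on tails (illustrative only).
def expected_utility(p_heads, win=1.0, lose=-1.0):
    return p_heads * win + (1 - p_heads) * lose

# Minimise expected utility over A in [0.4, 0.6]. Linearity in p_heads
# means the minimum is at an endpoint, here a = 0.4.
min_over_A = min(expected_utility(a) for a in (0.4, 0.6))
print(round(min_over_A, 2))  # -0.2

# Minimise over the outcome C itself: the worst case is simply losing,
# so the bet is rejected no matter how small the risk of loss.
min_over_C = min(1.0, -1.0)
print(min_over_C)  # -1.0
```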