What AlexMennen said. For a Bayesian there’s no difference in principle between ignorance and risk.
One wrinkle is that even Bayesians shouldn’t have prior probabilities for everything, because if you assign a prior probability to something that could indirectly depend on your decision, you might lose out.
A good example is the absent-minded driver problem. While driving home from work, you pass two identical-looking intersections. At the first one you’re supposed to go straight, at the second one you’re supposed to turn. If you do everything correctly, you get utility 4. If you goof and turn at the first intersection, you never arrive at the second one, and get utility 0. If you goof and go straight at the second, you get utility 1. Unfortunately, by the time you reach the second one, you’ve forgotten whether you already passed the first, so at both intersections you’re uncertain about your location.
If you treat your uncertainty about location as a probability and choose the Bayesian-optimal action, you’ll get demonstrably worse results than if you’d planned your actions in advance or used UDT. The reason, as pointed out by taw and pengvado, is that your probability of arriving at the second intersection depends on your decision to go straight or turn at the first one, so treating it as unchangeable leads to weird errors.
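To make this concrete, here’s a minimal sketch in Python using the payoffs above. The self-locating belief alpha = 1/(1+p) (you always reach the first intersection, and reach the second with probability p) is an assumption taken from the standard treatment of the problem, not something stated in this thread:

```python
# Payoffs: turn at first = 0, turn at second = 4, straight at both = 1.

def planning_value(p):
    # Decide in advance: go straight with probability p at each
    # intersection. Straight-then-turn gives 4, straight-twice gives 1.
    return 4 * p * (1 - p) + 1 * p * p

def naive_action_value(q, alpha):
    # In the moment: treat "I'm at the first intersection" as a fixed
    # probability alpha, noting that whatever q you pick now you'd pick
    # at the other intersection too (they look identical).
    at_first = 4 * q * (1 - q) + 1 * q * q   # same as the planning value
    at_second = 1 * q + 4 * (1 - q)          # straight -> 1, turn -> 4
    return alpha * at_first + (1 - alpha) * at_second

grid = [i / 1000 for i in range(1001)]

p_star = max(grid, key=planning_value)
print(p_star, planning_value(p_star))        # ~0.667, ~1.333

# Consistent self-locating belief for the plan p_star: you always reach
# the first intersection, and reach the second with probability p_star.
alpha = 1 / (1 + p_star)                     # ~0.6

# Holding alpha fixed ("unchangeable"), the Bayesian-optimal action is
# to deviate...
q_star = max(grid, key=lambda q: naive_action_value(q, alpha))
print(q_star)                                # ~0.333, not 0.667

# ...and actually driving with q_star does worse than the plan:
print(planning_value(q_star))                # ~1.0 < 1.333
```

The error is exactly that alpha was computed from the plan p and then held fixed while q varied, even though changing your action at the first intersection changes how often you reach the second.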
One wrinkle is that even Bayesians shouldn’t have prior probabilities for everything, because if you assign a prior probability to something that could indirectly depend on your decision, you might lose out.
… your probability of arriving at the second intersection depends on your decision to go straight or turn at the first one, so treating it as unchangeable leads to weird errors.
“Unchangeable” is a bad word for this: the probability might well be thought of as unchangeable, so long as you don’t insist on knowing what it is. So a Bayesian may “have probabilities for everything”, whatever that means, if it’s understood that those probabilities are not logically transparent, and that some of their details won’t necessarily be available when making any given decision. After you do make a decision that controls certain details of your prior, those details become more readily available for future decisions.
In other words, the problem is not in assigning probabilities to too many things, but in assigning them arbitrarily and thus incorrectly. If the correct probability assignment depends on your future decisions, then you can’t yet know that probability; so if you’ve “assigned” it in a way that lets you know what it is, you must have assigned the wrong thing. Prior probability is not up for grabs, etc.
so treating it as unchangeable leads to weird errors.
The prior probability is unchangeable. It’s just that you make your decision based on the posterior probability, conditioning on each candidate decision in turn. At least, that’s what you do if you use EDT. I’m not entirely familiar with the other decision theories, but I’m pretty sure they all have prior probabilities for everything.
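For what it’s worth, here’s one way to cash out that reading for the driver problem above. It’s a sketch under the assumption that your current choice is perfectly correlated with the choice of the indistinguishable you at the other intersection, since both run the same decision procedure:

```python
# EDT keeps the prior fixed and conditions on each candidate decision.
# Conditioning on "I go straight with probability q" is also evidence
# about what you do at the other intersection, so the conditional
# expected utility works out to the planning value.

def edt_value(q):
    return 4 * q * (1 - q) + 1 * q * q   # same formula as the advance plan

grid = [i / 1000 for i in range(1001)]
print(max(grid, key=edt_value))          # ~0.667: recovers the plan
```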