Yeah I totally sidestepped that issue because I don’t know how to solve it. I don’t think anyone knows, actually. Preference uncertainty is an open problem, AFAIK.
Specifically, will I die of radiation poisoning if I use VNM utility to make decisions when I’m uncertain about what my preferences even are? I.e., maximize expected utility, where the expectation is taken over my uncertainty about preferences in addition to any other uncertainty.
Yes. You can’t compare or aggregate utilities from different utility functions. So at present, you basically have to pick one and hope for the best.
Eventually someone will have to build a new thing for preference uncertainty. It will almost surely degenerate to VNM when you know your utility function.
There are other problems that also sink naive decision theory, like acausal stuff, which is what UDT and TDT try to solve, and anthropics, which screw up probabilities. There’s a lot more work on those than on preference uncertainty, AFAIK.
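To make the non-comparability point concrete, here's a minimal sketch (the toy lotteries and scale factors are illustrative, not from the thread): VNM utilities are only unique up to a positive affine transformation, so the *ranking* of lotteries survives rescaling but the *magnitudes* don't, which is why averaging utilities drawn from different candidate utility functions is ill-defined.

```python
# Sketch: VNM utility functions are only unique up to a positive affine
# transformation, so comparing or aggregating raw utility numbers across
# different utility functions depends on arbitrary scale choices.

def expected_utility(u, lottery):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u(x) for p, x in lottery)

# Two utility functions representing the *same* underlying preferences:
u1 = lambda x: x            # original scale
u2 = lambda x: 100 * x + 5  # positive affine transform of u1

lottery_a = [(0.5, 0), (0.5, 10)]  # coin flip between $0 and $10
lottery_b = [(1.0, 4)]             # $4 for sure

# The ranking of lotteries is invariant under the transform...
assert (expected_utility(u1, lottery_a) > expected_utility(u1, lottery_b)) == \
       (expected_utility(u2, lottery_a) > expected_utility(u2, lottery_b))

# ...but the magnitudes are not (5 vs 505 for the same lottery), so a
# weighted average of u1-utils and u2-utils says nothing about preferences.
```

If your "preference uncertainty" is a distribution over u1 and u2, the expected-utility answer hinges entirely on which arbitrary scalings you happened to pick.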
This is exactly what my brain claimed you said :) Now I can make my comment.
Game theorists do this all the time—at least economists. They’ll create a game, then say something like “now let’s introduce noise into the payoffs” but the noise ends up being in the utility function. Then they go and find an equilibrium or something using expected utility.
Now, in every practical example I can think of off the top of my head, you can reinterpret the uncertainty as uncertainty about actual outcomes with utilities associated with those outcomes, and the math goes through. Usually the situation is something like letting U($)=$ for simplicity, because risk aversion is orthogonal to what they’re interested in, so you can easily think of the uncertainty as being over $ rather than U($). This simplicity lets them play fast and loose with VNM utility and get away with it, but I wouldn’t be surprised if someone made a model where they really did mean for the uncertainty to be over one’s own preferences and went ahead and used VNM utility anyway. In any case, no one ever emphasized this point in any of the econ or game theory courses I’ve taken, grad or otherwise.
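Here's a minimal sketch of the reinterpretation described above, with assumed toy numbers: under U($)=$, "noise in the payoffs" computes to the same expected utility whether you read the noise as living in the utility function or in the dollar outcome.

```python
# Toy sketch (numbers assumed, not from the thread): with risk-neutral
# utility U($) = $, noisy payoffs give the same expected utility under
# either reading of where the noise lives.

base_payoff = 10.0
noise = [(0.5, -2.0), (0.5, +2.0)]  # (probability, shock to the payoff)

# Reading 1: preference uncertainty -- the *utility function* is noisy:
# with probability p the agent's utility of the fixed outcome is base + shock.
eu_preferences = sum(p * (base_payoff + eps) for p, eps in noise)

# Reading 2: outcome uncertainty -- the *dollar outcome* is noisy, and the
# utility function is fixed at U($) = $.
U = lambda dollars: dollars  # the simplifying risk-neutral assumption above
eu_outcomes = sum(p * U(base_payoff + eps) for p, eps in noise)

# Both readings agree, so the expected-utility math "goes through".
assert eu_preferences == eu_outcomes == 10.0
```

The collapse only works because U($)=$ makes dollars and utils interchangeable; with a nonlinear U, the two readings would come apart.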
you can reinterpret the uncertainty as uncertainty about actual outcomes with utilities associated with those outcomes and the math goes through.
If you can do that, it seems to work; noise in the payoffs is not preference uncertainty, just plain old uncertainty. So I guess my question is: what does it look like when you can’t do that, and what do we do instead?
In case you’re still interested
Thanks!