Yes, that’s true, but it makes your conclusion a bit misleading, because not all sets of outcomes correspond to possible actions. It can easily happen that any preference ordering on the actions is rationalizable by tweaking the utility under a given prior.
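To make that concrete, here is a toy construction of my own (not anything taken from the post), under the assumptions that the available actions happen to correspond to disjoint events and that they are compared by expected utility conditional on the corresponding event:

```latex
% Toy construction (mine, not from the post): actions identified with
% disjoint events A_1, ..., A_n; preference over actions compared by
% expected utility conditional on the corresponding event.
% Take u constant on each event, u(x) = c_i for x in A_i. Then
\[
  \mathbb{E}[u \mid A_i]
  \;=\; \sum_{x \in A_i} \frac{P(x)}{P(A_i)}\, u(x)
  \;=\; c_i ,
\]
% independently of the prior P, so any desired ordering of the actions
% is reproduced by choosing the constants c_1, ..., c_n accordingly.
```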
The math in the example is clear enough; I just don’t understand the motivation for it. If you reduce everything to a preference relation on subsets of a sigma-algebra, it’s trivially true that you can tweak it with any monotonic function, not just by mixing p and q with alpha and beta. So what?
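To spell out what I mean by “tweak it with any monotonic function” (a generic observation in my own notation, not specific to the post’s construction):

```latex
% If the preference relation on events is represented by comparing some
% real-valued functional V, then for any strictly increasing f
\[
  A \succeq B
  \;\Longleftrightarrow\; V(A) \ge V(B)
  \;\Longleftrightarrow\; f\big(V(A)\big) \ge f\big(V(B)\big) ,
\]
% so the representation is highly non-unique.
```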
It can also happen that the prior is in fact the right one, but that isn’t guaranteed. This is a red flag, a possible flaw, something to investigate.
The question of which events are “possible actions” is a many-faceted one, and solving it “by definition” doesn’t work. For example, if you can pick the best strategy, it doesn’t matter what the preference order says about any event other than the best strategy, including what it says about “possible actions” that won’t actually happen.
Strictly speaking, I don’t even trust (any) expected utility (and hence Bayesian math) to represent preference. Any solution also has to work in a discrete, deterministic setting.
It seems to me that you’re changing the subject, or maybe making inferential jumps that are too long for me.
The information needed to determine which events are possible actions is absent from your model. You can’t calculate it within your setting, only postulate it.
If the overarching goal of this post was finding ways to represent human preference (did you imply that? I can’t tell), then I don’t understand how it brings us closer to that goal.
Hofstadter’s Law of Inferential Distance: What you are saying is always harder to understand than you expect, even when you take into account Hofstadter’s Law of Inferential Distance.
Of course this post is only a small side note, and it says nothing about which events mean what. Human preference is a preference, so even without the details the discussion of preference-in-general has some implications for human preference, which the last paragraph of the post alluded to with regard to picking priors for Bayesian math.
Expected utility is usually written for actions, but it can be written as in the post as well; the two forms are formally equivalent.
However, the ratios of the conditional probabilities of those outcomes, given that you take a certain action, will not always equal the ratios of the unconditional probabilities, as in your formula.
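To sketch what I mean (my notation; the second expression is only my guess at an event-based form, not a quote of your formula):

```latex
% My notation: a is an action, A the set of outcomes consistent with a,
% P the prior, u the utility function.
\[
  \underbrace{\sum_{x \in A} P(x \mid a)\, u(x)}_{\text{written over the action}}
  \quad\text{vs.}\quad
  \underbrace{\sum_{x \in A} \frac{P(x)}{P(A)}\, u(x)}_{\text{written over the event}}
\]
% The two agree exactly when P(x | a) = P(x)/P(A) for all x in A, i.e.
% when taking the action does not reshuffle probability mass within A.
```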
Any action can be identified with a set of outcomes consistent with the action. See my reply to JGWeissman.
Is the example after mixing unclear? In what way?