I may be missing your point, but to me, it looks like the summary would be:
If you bundle utility with probability, you can do the same maths, which is nice since it simplifies other things. You cannot prefer certain expected outcomes no matter what your utility function is [neat result, btw].
Since the probability math works, I now call the new thing “probability” and show that you can’t find prior “probability” (new definition) without considering the normal definition of probability.
This doesn’t change anything about regular probability, or finding priors. It just says that you cannot find out what you instrumentally want a priori without knowing your utility function, which is trivially true.
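For concreteness, a minimal sketch of the transformation being summarized, assuming a finite outcome space with a prior P and a positive utility function U (the post’s own notation may differ): for an event A,
\[ E[U \mid A] = \frac{\sum_{x \in A} P(x)\,U(x)}{P(A)} \;\propto\; \frac{Q(A)}{P(A)}, \qquad \text{where } Q(x) \propto P(x)\,U(x). \]
Normalizing Q makes it a second probability measure, so ranking events by expected utility is the same as ranking them by the ratio Q(A)/P(A): the utility function has been absorbed into Q.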
As I said in the first sentence, this is but a “simple transformation of standard expected utility formula that I found conceptually interesting”. I don’t quite understand the second part of your comment (starting from “Since the probability...”).
I agree that it is an interesting transformation, but I think your conclusion (“No simple morality, no simple probability.”) does not follow.
That argument says that if you pick a prior, you can’t “patch” it into an arbitrary preference by finding a fitting utility function. It’s not particularly related to the shouldness/probability representation, and it isn’t well understood, but it’s easy to demonstrate by example in this setting, and I think it’s an interesting point as well, possibly worth exploring.
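A hypothetical illustration of the kind of example this argument seems to point at, using the event-ranking reading sketched above (the post’s own example may differ). Take three outcomes x1, x2, x3 and ask for the preference {x1} ≻ {x2} together with {x1, x3} ≺ {x2, x3}. The first condition forces U(x1) > U(x2). Under a uniform prior,
\[ E[U \mid \{x_1, x_3\}] - E[U \mid \{x_2, x_3\}] = \tfrac{1}{2}\bigl(U(x_1) - U(x_2)\bigr) > 0, \]
so no utility function can deliver the second condition: that preference cannot be patched into the uniform prior. Under the prior (0.5, 0.25, 0.25), by contrast, U = (2, 1, 100) gives E[U | {x1}] = 2 > 1 = E[U | {x2}] and E[U | {x1, x3}] ≈ 34.7 < 50.5 = E[U | {x2, x3}]. Which preferences a given prior can be “patched” into thus depends on the prior.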
The new version of the post still loses me at about the point where mixing comes in. (What’s your motivation for introducing mixing at all?) I would’ve been happier if it went on about geometry instead of those huge inferential leaps at the end.
And JGWeissman is right: expected utility is a property of actions, not outcomes, which seems to make the whole post invalid unless you fix it somehow.
Any action can be identified with the set of outcomes consistent with that action. See my reply to JGWeissman.
Is the example after mixing unclear? In what way?
Yes, that’s true, but it makes your conclusion a bit misleading, because not all sets of outcomes correspond to possible actions. It can easily happen that any preference ordering on actions is rationalizable by tweaking utility under a given prior.
The math in the example is clear enough; I just don’t understand the motivation for it. If you reduce everything to a preference relation on subsets of a sigma algebra, it’s trivially true that you can tweak it with any monotonic function, not just by mixing p and q with alpha and beta. So what.
It can also turn out that the prior happens to be the right one, but that isn’t guaranteed. This is a red flag, a possible flaw, something to investigate.
The question of which events are “possible actions” is a many-faceted one, and solving this problem “by definition” doesn’t work. For example, if you can pick the best strategy, it doesn’t matter what the preference order says about any event other than the best strategy, even about “possible actions” that won’t actually happen.
Strictly speaking, I don’t even trust (any) expected utility (and so Bayesian math) to represent preference. Any solution has to also work in a discrete deterministic setting.
It seems to me that you’re changing the subject, or maybe making inferential jumps that are too long for me.
The information needed to determine which events are possible actions is absent from your model. You can’t calculate it within your setting, only postulate it.
If the overarching goal of this post was finding ways to represent human preference (did you imply that? I can’t tell), then I don’t understand how it brings us closer to that goal.
Hofstadter’s Law of Inferential Distance: What you are saying is always harder to understand than you expect, even when you take into account Hofstadter’s Law of Inferential Distance.
Of course this post is only a small side note, and it says nothing about which events mean what. Human preference is a preference, so even without the details, the discussion of preference-in-general has some implications for human preference, which the last paragraph of the post alluded to with regard to picking priors for Bayesian math.
Expected utility is usually written for actions, but it can be written as in the post as well; it’s formally equivalent.
However, the ratios of the conditional probabilities of those outcomes, given that you take a certain action, will not always equal the ratios of the unconditional probabilities, as in your formula.
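If I’m reading this objection right, a small made-up example of the gap: take three outcomes with prior P(x1) = P(x2) = 0.25 and P(x3) = 0.5, and an action a that rules out x3. Identifying a with the event {x1, x2} and conditioning the prior on it gives a ratio P(x1)/P(x2) = 1 for the surviving outcomes. But if actually taking a also makes x1 far more likely than x2, say P(x1 | a) = 0.8 and P(x2 | a) = 0.2, the conditional ratio is 4, and the expected utility computed from the conditioned prior need not match the expected utility of taking the action.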