I’m trying very hard to understand the vector-valued stuff in your links, but I just cannot get it. Even after reading about the risk-neutral probability idea, it doesn’t make sense to me. Can you suggest some resources to get me up to speed on the reasoning behind all that?
I’ve just fixed the LaTeX formatting in my post on Jeffrey-Bolker rotation (I hadn’t noticed it had completely broken at some point until I went to include the link). Its relevance here is as a self-contained, mathematically legible illustration of Wei Dai’s point that probability can be understood as an aspect of an agent’s decision algorithm. The point itself is more general and doesn’t depend on this illustration.
Specifically, both the utility function and the prior probability distribution are data determining the preference ordering, and they mix on equal footing under Jeffrey-Bolker rotation. Informally reframed: neither utility nor probability is more fundamentally objective than the other, and both are “a matter of preference”. At the same time, given a particular preference ordering, there is no freedom to use a probability that disagrees with it, not even one determined by “more objective” considerations. This applies when we start with a decision algorithm already given (even if only by normative extrapolation), rather than with just a world and a vague idea of how to act in it, in which case probability would be much more of its own thing.
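To make the mixing concrete, here is a minimal sketch of the rotation as it is usually stated (notation mine, not taken from the linked post). Represent each proposition $A$ by the vector $(P(A),\; P(A)\,V(A))$, where $V(A)$ is its expected utility, and act on these vectors by a linear map with positive determinant:

$$\begin{pmatrix} P'(A) \\ P'(A)\,V'(A) \end{pmatrix} = \begin{pmatrix} d & c \\ b & a \end{pmatrix} \begin{pmatrix} P(A) \\ P(A)\,V(A) \end{pmatrix}, \qquad ad - bc > 0, \quad c\,V(A) + d > 0.$$

Reading off the components gives $P'(A) = P(A)\,(c\,V(A) + d)$ and $V'(A) = \dfrac{a\,V(A) + b}{c\,V(A) + d}$. Since $V'$ is an increasing fractional-linear function of $V$, the preference ordering is unchanged, even though $P'$ now mixes probability with utility: the same preferences are carried by many different $(P, U)$ pairs.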