Truth is just that which it is useful to believe in order to maximize one’s current values. Given that our values rely upon things that may not be “objectively real”… so much the worse for objective reality. I agree with other commenters that values are probably robust to changes in ontology, but let’s not forget that we have the ability to simply refuse to change our ontology if doing so decreases our expected value. Rationality is about winning, not about being right.
This is kinda-correct for reflectively stable goals, understood as including prior state of knowledge, pursued updatelessly: you form expectations about what you care about, plan about what you care about, even if it’s not the physical reality, even if that gets you destroyed in the physical reality. Probability is degree of caring, and it’s possible to care about things other than reality. Still, probably such policies respond to observations of reality with sensible behaviors that appear to indicate awareness of reality, even if in some technical sense that’s not what’s going on. But not necessarily.
This only works for sufficiently strong consequentialists, ones that can overcome the limitations of a cognitive architecture that calls for specialization in its parts, so that concluding it’s useful to form motivated beliefs is actually correct and doesn’t just break cognition.
Reflectively stable goals are not what’s being discussed in this post. And probably agents with reflectively stable goals are always misaligned.
I’m trying very hard to understand the vector-valued stuff in your links but I just cannot get it. Even after reading about the risk-neutral probability thing, it doesn’t make any sense. Can you suggest some resources to get me up to speed on the reasoning behind all that?
I’ve just fixed the LaTeX formatting in my post on Jeffrey-Bolker rotation (I didn’t notice it had completely broken by the time I included the link). Its relevance here is as a self-contained, mathematically legible illustration of Wei Dai’s point that probability can be understood as an aspect of an agent’s decision algorithm. The point itself is more general and doesn’t depend on this illustration.
Specifically, both the utility function and the prior probability distribution are data determining the preference ordering, and they mix on equal footing through Jeffrey-Bolker rotation. Informally reframed, neither utility nor probability is more fundamentally objective than the other, and both are “a matter of preference”. At the same time, given a particular preference, there is no freedom to use a probability that disagrees with it, even one determined by “more objective” considerations. This applies when we start with a decision algorithm already given (even if by normative extrapolation), rather than only with a world and a vague idea of how to act in it, in which case probability would be much more of its own thing.
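To make the “equal footing” concrete, here is a minimal sketch in the standard Jeffrey-Bolker presentation (the symbols are my own illustration, not taken from the linked post): encode each event $A$ as the vector $(P(A),\, P(A)U(A))$ of probability and probability-weighted utility; then any rotation by an angle $\theta$ small enough to keep all the first coordinates positive yields a new pair $(P', U')$ that induces the same preference ordering over events:

$$\begin{pmatrix} P'(A) \\ P'(A)\,U'(A) \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \phantom{-}\cos\theta \end{pmatrix} \begin{pmatrix} P(A) \\ P(A)\,U(A) \end{pmatrix}, \qquad U'(A) = \frac{\sin\theta + \cos\theta\, U(A)}{\cos\theta - \sin\theta\, U(A)}.$$

Since $U'$ is a monotone function of $U$ on that branch, every comparison between events comes out the same, even though the rotated $P'$ and $U'$ taken separately look nothing like the originals; that is the sense in which neither probability nor utility is individually privileged.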