If you care about X, if you want X to happen, then your goal as a rational actor should be to figure out what set of steps you can take that increases the odds of X happening, right? If a student wants to pass his history test tomorrow, and he thinks there’s a 60% chance he will if he doesn’t study and an 80% chance he will if he does study, then he should study. I’m not sure how you figure that out if you have “caring” and “probability” confused, though.
Probability is how likely something is to happen given certain circumstances; values are what you want to happen. If you confuse the two, it seems to me you’re probably going to lose a lot of poker games.
(Maybe I’m missing something here; I’m just not seeing how you can conflate the two without losing the decision-making value of understanding probability.)
If a student wants to pass his history test tomorrow, and he thinks there’s a 60% chance he will if he doesn’t study and an 80% chance he will if he does study, then he should study.
Let’s work out this example. “A student wants to pass his history test tomorrow.” What does that even mean? The student doesn’t have any immediate experience of the history test tomorrow; it’s only grasped as an abstract concept. Without any further grounding he might as well want to be post-utopian. “He thinks there’s a 60% chance he will if he doesn’t study and an 80% chance he will if he does study.” Ah, there’s how the concept is grounded in terms of actions. The student considers not studying as equivalent to 0.6 times passing the history test, and studying as 0.8 times passing. Now his preferences translate into preferences over actions. “Then he should study.” Because that’s what his preference over actions tells him to prefer.
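To make that translation concrete, here is a minimal sketch (in Python) of the arithmetic just described; the utility assigned to passing is a made-up placeholder, and the action labels are just names for this example:

```python
# Hypothetical utility scale: the only outcome the student cares about here
# is passing, so give that outcome a placeholder utility of 1.0.
utility_of_passing = 1.0

# Probability of passing conditional on each action (the 0.6 / 0.8 estimates).
p_pass_given_action = {"study": 0.8, "don't study": 0.6}

# The translation: each action is worth P(pass | action) * utility(pass).
preference_over_actions = {
    action: p * utility_of_passing for action, p in p_pass_given_action.items()
}

# Ranking the actions by that value picks out "study".
best_action = max(preference_over_actions, key=preference_over_actions.get)
print(preference_over_actions)  # {'study': 0.8, "don't study": 0.6}
print(best_action)              # study
```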
In other words, probability estimates are a method for turning preferences over abstract concepts into preferences over immediate experiences. This is the method people prefer to use for many abstractions, particularly abstractions about “the future”, with the presumption that these abstractions will in “the future” become immediate experience; but it is not necessary, and people may prefer other methods for different abstractions.
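The same point can be written as a generic mapping, if that helps: given (hypothetical) utilities over outcomes and probability estimates for each outcome under each action, it returns a ranking over actions. This is only a sketch; the function and argument names are mine, not anything standard:

```python
from typing import Dict

def preferences_over_actions(
    outcome_utilities: Dict[str, float],
    p_outcome_given_action: Dict[str, Dict[str, float]],
) -> Dict[str, float]:
    """Weight each outcome's utility by its probability under each action,
    turning preferences over abstract outcomes into preferences over actions."""
    return {
        action: sum(
            probs.get(outcome, 0.0) * utility
            for outcome, utility in outcome_utilities.items()
        )
        for action, probs in p_outcome_given_action.items()
    }

# The student example again, stated in the general form:
print(preferences_over_actions(
    {"pass": 1.0},  # made-up utility scale
    {"study": {"pass": 0.8}, "don't study": {"pass": 0.6}},
))  # {'study': 0.8, "don't study": 0.6}
```

Nothing in this mapping forces the outcomes to be about “the future”; that is just where people most often reach for probability estimates as the grounding.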
That example, and pretty much everything else that comes up outside contrived corner cases, is embedded in complex webs of cause and effect, where indeed probability and values are very different in practice. But when you consider entire universes at a time that cannot causally interact, you gain a degree of freedom, and if you want to keep the distinction you have to make an arbitrary choice, which is a) unaesthetic, and b) a choice different agents will make differently, making it harder to reason about them. But really it’s semantics; the models are isomorphic as far as I can tell.
I’m still not sure that makes sense.