Whence this “should”? That is my point.
I want a description of my expected future experiences; if that means the description contains random variables rather than forks in a road, that actually makes it better, because the “fork in the road” metaphor is agenty whereas the “random variable” metaphor is uncontrollable.
For what purpose? Decision-theoretically, what matters is consequences, not experiences.
I can imagine purposes for which envisioning multiple different hypotheticals is useful for decision-making, so I will concede this point. In any case, my original point was simply that I have different criteria for what makes me sleep better at night than I thought I did.
I’m confused by this distinction. Can you give me an example of an experience that is not a consequence and therefore doesn’t matter decision-theoretically? Can you give me an example of a consequence that is not an experience and therefore matters decision-theoretically?
For example, if you make a decision and then die, there will be consequences, but no future experiences. While future experiences are part of the consequences, they don’t paint a balanced picture, since (predictable) things outside your experience are going to happen as well. You can send $X to charity, and the expected consequences will predictably depend on the specific (moderate) value of X, but you won’t expect differing future experiences depending on X.
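To put rough numbers on that last example, here is a minimal Python sketch. The constants (a fixed value for your own future experiences, a constant value created per donated dollar) are made up purely for illustration; the only point is that a consequence-based score moves with X while an experience-only score stays flat for moderate X.

```python
# Toy sketch: consequences vs. experiences as decision criteria.
# All numbers are made up for illustration.

EXPERIENCE_VALUE = 100.0   # assumed value of your own future experiences,
                           # roughly unchanged by a moderate donation
VALUE_PER_DOLLAR = 2.0     # assumed value created per donated dollar,
                           # realized outside your experience

def consequence_score(x: float) -> float:
    """Value of everything the decision causes, experienced or not."""
    return EXPERIENCE_VALUE + VALUE_PER_DOLLAR * x

def experience_score(x: float) -> float:
    """Value of your expected future experiences only (x unused by design)."""
    return EXPERIENCE_VALUE

for x in (0, 50, 100):
    print(f"donate ${x}: consequences={consequence_score(x):.0f}, "
          f"experiences={experience_score(x):.0f}")

# donate $0:   consequences=100, experiences=100
# donate $50:  consequences=200, experiences=100
# donate $100: consequences=300, experiences=100
```

An agent ranking options only by expected experiences is indifferent between the three donations; one ranking by expected consequences is not.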
Gotcha! Sure, that makes sense. Thanks.