I haven’t really followed what the consensus in academia is; I’ve mostly picked up my views by osmosis from LessWrong and from thinking on my own.
My view is that probabilities are ultimately used to make decisions. In particular, we can define an agent’s ‘effective probability’ that an event has occurred as the price at which it would buy a coupon paying out $1 if the event occurs. If Beauty is trying to maximize her expected income, her policy should be to buy a coupon for Tails at any price below $0.50 before going to sleep, and below $2/3 (about $0.67) after waking, because her decision will be duplicated in the Tails world. You can also get different ‘effective probabilities’ depending on whether you are a total or an average utilitarian towards copies of yourself (as explained in the paper I linked).
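Here’s a minimal sketch of the expected-value calculation behind those two prices (my own illustration, not from the linked paper; the function names are made up):

```python
# A coupon pays $1 if the coin landed Tails. Compare the expected income of
# buying before going to sleep vs. adopting the policy 'buy at every awakening'.

def ev_buy_before_sleep(price: float) -> float:
    """Expected income from buying one coupon before going to sleep."""
    # Heads (prob 1/2): coupon is worthless, lose `price`.
    # Tails (prob 1/2): coupon pays $1.
    return 0.5 * (-price) + 0.5 * (1.0 - price)

def ev_buy_on_waking(price: float) -> float:
    """Expected income from the policy 'buy one coupon at every awakening'."""
    # Heads (prob 1/2): a single awakening, one losing coupon.
    # Tails (prob 1/2): two awakenings, so the same decision buys two coupons.
    return 0.5 * (-price) + 0.5 * 2.0 * (1.0 - price)

if __name__ == "__main__":
    for price in (0.49, 0.50, 0.60, 2 / 3, 0.70):
        print(f"price={price:.3f}  before_sleep={ev_buy_before_sleep(price):+.3f}"
              f"  on_waking={ev_buy_on_waking(price):+.3f}")
    # Break-even prices: $0.50 before sleep, $2/3 (~$0.67) per awakening.
```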
Once you’ve accepted that one’s ‘effective probabilities’ can change like this, you can go on to throw away the notion of objective probability altogether (as an ontological primitive), and instead just figure out the best policy to execute for a given utility function and world. See UDT for a quasi-mathematical formalization of this, as applied to a Level IV multiverse.
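And here is a toy sketch of that ‘best policy for a given utility function’ framing (my own stand-in, not a formalization of UDT; all the names are illustrative). Scoring the policy ‘buy at every awakening’ under a total vs. an average aggregation over copies gives two different break-even prices, i.e. two different ‘effective probabilities’ for Tails:

```python
# Evaluate the policy 'buy a $1-if-Tails coupon at every awakening' across
# worlds, under two ways of aggregating utility over copies of Beauty.

WORLDS = {                # world -> (prior weight, copies of Beauty awake)
    "Heads": (0.5, 1),
    "Tails": (0.5, 2),
}

def per_copy_income(world: str, price: float) -> float:
    """Income of one awakened copy that buys the coupon at `price`."""
    payout = 1.0 if world == "Tails" else 0.0
    return payout - price

def expected_utility(price: float, aggregate: str) -> float:
    """Prior-weighted utility of the policy, under 'total' or 'average' aggregation."""
    result = 0.0
    for world, (prior, copies) in WORLDS.items():
        income = per_copy_income(world, price)
        world_utility = copies * income if aggregate == "total" else income
        result += prior * world_utility
    return result

def breakeven(aggregate: str) -> float:
    """Highest price at which buying is still worthwhile, found by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_utility(mid, aggregate) > 0:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    print("total aggregation   ->", round(breakeven("total"), 3))    # ~0.667
    print("average aggregation ->", round(breakeven("average"), 3))  # ~0.5
```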
But there is a real, objective probability that can be proven, and it has nothing to do with SB’s subjective, anthropic probability
But why? On this view, there doesn’t have to be an ‘objective probability’; rather, there are only different decision problems and the best algorithms to solve them.