Just gonna jot down some thoughts here. First, a layout of the problem.
Expected utility is the product of two numbers: the probability of the event times the utility generated by the event.
Traditionally speaking, when the event is claimed to affect 3^^^3 people, the utility generated is on the order of 3^^^3.
Traditionally speaking, there’s nothing about the 3^^^3 people that requires a super-exponentially large extension to the complexity of the system (the universe/multiverse/etc.), so the probability of the event does not scale like 1/(3^^^3).
Thus the expected payoff becomes enormous, and you should pay the dude $5.
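To make the arithmetic explicit, here is a minimal sketch of that naive calculation, writing N = 3^^^3 (3↑↑↑3 in up-arrow notation) and letting ε stand for whatever complexity-based prior you assign to the mugger’s claim:

\[
E[\text{payoff}] \;=\; \varepsilon \times U(\text{save } N \text{ people}) \;\sim\; \varepsilon \times N \;\gg\; U(\text{keep the } \$5)
\]

Since ε only shrinks roughly exponentially with the description length of the claim, while N is super-exponential, the utility term dominates and the product blows up.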
If you actually follow this, you’ll be mugged by random strangers offering to save 3^^^3 people, or whatever super-exponential number they can come up with.
In order to avoid being mugged, your suggestion is to apply a scale penalty (a leverage penalty) to the probability. You then notice that this has some very strange effects on your epistemology: you become incapable of ever believing the $5 will actually help, no matter how much evidence you’re given, even though evidence can make the expected payoff large. You then respond to this problem with what appears to be an excuse to be illogical and/or non-Bayesian at times (due to finite computing power).
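For contrast, here is a rough sketch of how I understand the leverage penalty to act on the probability side, with N = 3^^^3 as before (the exact functional form is my guess at the scale, not a claim about the proposal’s details):

\[
P_{\text{penalized}} \;\sim\; \frac{\varepsilon}{N}, \qquad
E[\text{payoff}] \;\sim\; \frac{\varepsilon}{N} \times N \;=\; \varepsilon
\]

The mugging is blocked, but now no realistically obtainable likelihood ratio can lift a prior of order 1/N into anything appreciable, which is exactly the epistemic strangeness above.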
It seems to me that an alternative would be to rescale the utility value instead of the probability. This way, you wouldn’t run into any epistemic issues, because you aren’t messing with the epistemics at all.
I’m not proposing we rescale Utility(save X people) by a factor of 1/X, as that would make Utility(save X people) = Utility(save 1 person) all the time, which is obviously problematic. Rather, my idea is to make Utility a per capita quantity. That way, when the random hobo tells you he’ll save 3^^^3 people, he’s making a claim that requires there to be at least 3^^^3 people to save. If this does turn out to be true, keeping your Utility as a per capita quantity will require a rescaling on the order of 1/(3^^^3) to account for the now-much-larger population. This gives you a small expected payoff without requiring problematically small prior probabilities.
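To spell out what I mean, here is a minimal sketch, naively taking “per capita” to mean total utility divided by the number M of people who exist (how to count M is exactly the part I haven’t thought through):

\[
U_{\text{per capita}}(\text{save } X \text{ people}) \;=\; \frac{X \cdot u}{M}, \qquad M \ge X
\]

where u is the value of saving one person. If the mugger’s claim of X = 3^^^3 is true, then M ≥ 3^^^3, so the per capita utility is at most u and the expected payoff is at most ε·u rather than ε·3^^^3. The 1/(3^^^3) rescaling happens inside the utility; the prior ε is left alone.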
It seems we humans may already do a rescaling of this kind anyway. We tend to value rare things more than we would if they were common, tend to protect an endangered species more than we would if it weren’t endangered, and so on. But I’ll be honest and say that I haven’t really thought through the consequences of this utility rescaling very much. It just seems that if you need to rescale a product of two numbers and rescaling one of them causes problems, we may as well try rescaling the other and see where it leads.
Any thoughts?