Since a human mind really can’t naturally conceive of the difference between huge numbers like these, wouldn’t it follow that our utility functions are bounded by a horizontal asymptote? And shouldn’t that solve this problem?
I mean, if the amount of utility gained from saving x people is no longer allowed to increase without bound, you don’t need such improbable leverage penalties. You’d of course still have the property that it’s better to save more people, just not linearly better.
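To make the shape of that concrete, here is a minimal sketch (the saturating-exponential form and every constant in it are my own illustration, not something proposed in the thread) of a utility function that keeps increasing in the number of people saved but never crosses a horizontal asymptote:

```python
import math

U_MAX = 1e6   # hypothetical horizontal asymptote (illustrative constant)
SCALE = 1e9   # hypothetical scale at which saturation becomes noticeable

def bounded_utility(people_saved: float) -> float:
    """Strictly increasing in people_saved, yet never exceeding U_MAX."""
    return U_MAX * (1.0 - math.exp(-people_saved / SCALE))

# Saving more people is always better, just not linearly better:
print(bounded_utility(1e3))    # ~1.0, still in the nearly-linear region
print(bounded_utility(1e9))    # ~6.3e5
print(bounded_utility(1e12))   # ~1.0e6, pinned against the asymptote
```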
I find that unsatisfactory for the following reasons—first, I am a great believer in life and love without bound; second, I suspect that the number of people in the multiverse is already great enough to max out that sort of asymptote and yet I still care; third, if this number is not already maxed out, I find it counterintuitive that someone another universe over could cause me to experience preference reversals in this universe by manipulating the number of people who already exist inside a box.
Ok, I might have expressed myself badly. My argument is that any agent of bounded computational power is forced to use two utility functions: the one they wish they had (i.e. the unbounded linear version) and the one their limitations force them to use in their calculations (i.e. an asymptotically bounded approximation).
For those agents capable of self-modification, just add a clause to increase their computational power (and thereby increase the bound of their approximation) whenever, at the scales they’re working on, the two utilities differ by more than some small specified amount.
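Here is a toy sketch of that two-function setup with the trigger clause made explicit; the class, the choice of a saturating exponential for the approximation, and the epsilon and growth factor are all invented for illustration:

```python
import math

class BoundedAgent:
    """Toy model: an ideal linear utility vs. a bounded working approximation."""

    def __init__(self, bound: float = 1e6, epsilon: float = 0.01):
        self.bound = bound        # current asymptote of the approximation
        self.epsilon = epsilon    # tolerated relative divergence from the ideal

    def ideal_utility(self, lives: float) -> float:
        return lives              # the function the agent wishes it could use

    def working_utility(self, lives: float) -> float:
        return self.bound * (1.0 - math.exp(-lives / self.bound))

    def evaluate(self, lives: float) -> float:
        # Self-modification clause: if the approximation has drifted too far
        # from the ideal at this scale, acquire more capacity (modeled here
        # as simply raising the bound) before answering.
        while (self.ideal_utility(lives) - self.working_utility(lives)
               > self.epsilon * self.ideal_utility(lives)):
            self.bound *= 10      # stand-in for "increase computational power"
        return self.working_utility(lives)

agent = BoundedAgent()
print(agent.evaluate(5e6), agent.bound)   # the bound grows until the error is small
```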
So, my answer to this person would be “stick around until I can safely modify myself into dealing with your request”, or alternatively, if he wants an answer right now after I’ve seen his evidence, “here’s 5 dollars”.
Why can’t you increase your asymptote with new evidence? If, for instance, your utility was bounded at 2^160 utilons before the mugger opened the sky, then just increase your bound according to that evidence and shut up and multiply to decide whether to pay the $5. You can’t update to a bound of 3^^^3 in one step, since you can’t receive enough evidence at once (a handy feature for avoiding muggings), but your utility at a distant point in the future is essentially unbounded, given enough evidential updates over time.
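As a rough numerical sketch of that update-then-multiply step (the per-observation cap, the probability, and the dollars-to-utilons conversion are all invented; only the 2^160 starting bound comes from the comment above):

```python
# Toy sketch: the bound can only grow by a capped factor per observation,
# so one dramatic demonstration can't push it to 3^^^3, but the bound is
# effectively unbounded over many observations.

MAX_GROWTH_PER_UPDATE = 2.0 ** 40       # cap on how far one observation can move the bound

def update_bound(bound: float, likelihood_ratio: float) -> float:
    return bound * min(likelihood_ratio, MAX_GROWTH_PER_UPDATE)

bound = 2.0 ** 160                       # utilon bound before the sky opens
bound = update_bound(bound, 2.0 ** 100)  # sky-opening evidence, capped at 2^40
print(bound)                             # 2^200, nowhere near 3^^^3

# Then shut up and multiply under the new bound:
p_mugger_honest = 1e-20                  # invented probability
utility_if_honest = bound                # the most the offer can be worth to us
utility_of_5_dollars = 50.0              # invented dollars-to-utilons conversion
print("pay the $5?", p_mugger_honest * utility_if_honest > utility_of_5_dollars)
```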
Useful utility bounds should be derivable from our knowledge of the universe. If we can theoretically create 10^80 unique, just-worth-living lives with the estimated matter and energy in the universe, then that provides a minimum bound, although it’s probably desirable to choose the bound large enough that the 10^80th life is worth nearly as much as the 1st or the 10^11th. When we have evidence of a change in our estimate of the available matter and energy, or of a change in the efficiency of turning matter and energy into utility, we scale the bound appropriately.
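A small sketch of that rescaling rule, assuming the 10^80 figure above and inventing everything else (the headroom factor and the updated efficiency estimate):

```python
# Sketch: derive the bound from estimated resources and conversion efficiency,
# and rescale it whenever either estimate changes.

HEADROOM = 100.0    # keep the bound well above the estimate so the 10^80th
                    # life is still worth nearly as much as the 1st

def utility_bound(max_lives_estimate: float) -> float:
    return HEADROOM * max_lives_estimate

estimated_resources = 1.0     # in units of "the estimated universe"
lives_per_unit = 1e80         # estimated just-worth-living lives per unit

bound = utility_bound(estimated_resources * lives_per_unit)
print(bound)                  # 1e82

# New evidence: conversion turns out to be 1000x more efficient.
lives_per_unit *= 1000
bound = utility_bound(estimated_resources * lives_per_unit)
print(bound)                  # 1e85
```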