I really don’t see why I can’t say “the negative utility of a dust speck is 1 over Graham’s Number.”
You can say anything, but Graham’s number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham’s number, enough air pressure to kill you would have negligible disutility.
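A back-of-the-envelope illustration of that point. Graham’s number won’t fit into any computation, so this sketch substitutes a mere googol (10^100) for it, and a made-up, generous count of molecule–eye collisions; both figures are my assumptions, and the real gap is incomprehensibly wider:

```python
from fractions import Fraction

# Stand-in for Graham's number: a googol. Graham's number is unimaginably
# larger, which only strengthens the conclusion below.
G_STAND_IN = 10**100

disutility_per_molecule = Fraction(1, G_STAND_IN)

# Assumed generous upper bound on molecule-eye impacts from lethal
# overpressure; any physically possible count is dwarfed by G_STAND_IN.
collisions = 10**40

total = disutility_per_molecule * collisions
print(float(total))  # 1e-60: "negligible disutility", per the argument
```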
or “I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in This World.”
If your utility function ceases to correspond to utility at extreme values, isn’t it more of an approximation of utility than actual utility? Sure, you don’t need a model that works at the extremes—but when a model does hold for extreme values, that’s generally a good sign for the accuracy of the model.
An addendum, two more things. The difference between a life with n dust specks hitting your eye and one with n+1 is not worth considering, given how large n is in any real life; the quick sketch below makes this concrete. Furthermore, if we allow for possible immortality, n could literally be infinite, so the difference would literally be 0.
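A trivial sketch of that first point, assuming the n vs. n+1 difference is measured relative to the life it occurs in: the relative difference is 1/n, which vanishes as n grows and is exactly 0 in the immortal limit.

```python
# Relative difference between n and n+1 dust specks over a lifetime:
# ((n + 1) - n) / n = 1/n, which shrinks toward 0 as n grows.
for n in (10**3, 10**9, 10**18):
    print(f"n = {n:.0e}: relative difference = {1 / n:.0e}")
# n = 1e+03: relative difference = 1e-03
# n = 1e+09: relative difference = 1e-09
# n = 1e+18: relative difference = 1e-18
```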
If utility is to be compared relative to lifetime utility, i.e. as (LifetimeUtility + x) / LifetimeUtility, doesn’t that assign higher impact to five seconds of pain for a twenty-year-old who will die at 40 than to a twenty-year-old who will die at 120 (see the sketch after the next point)? Does that make sense?
Secondly, by virtue of your asserting that there exists an action with minimal disutility, you’ve shown that the Field of Utility is very different from the field of, say, the Real numbers, and so I am incredulous that we can simply “multiply” in the usual sense.
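To make the lifetime-normalization worry concrete, here is a minimal sketch. I am assuming the intended parenthesization is (LifetimeUtility + x) / LifetimeUtility, and the utility figures are placeholders, with lifetime utility crudely proportional to years lived:

```python
def relative_impact(lifetime_utility: float, x: float) -> float:
    """Impact of an event with raw utility x, relative to lifetime utility."""
    return (lifetime_utility + x) / lifetime_utility

pain = -5.0  # five seconds of pain, in arbitrary utility units

# Crude assumption: lifetime utility is proportional to years lived.
print(relative_impact(40.0, pain))   # 0.875  -> shifts the shorter life by 12.5%
print(relative_impact(120.0, pain))  # ~0.958 -> shifts the longer life by ~4.2%
```

The same five seconds of pain registers three times as strongly for the person who dies at 40, which is exactly the oddity being asked about.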
Eliezer’s point does not seem to me predicated on the existence of such a value; I see no need to assume multiplication has been broken.
if the disutility of an air molecule slamming into your eye were 1 over Graham’s number, enough air pressure to kill you would have negligible disutility.
Yes, this seems like a good argument that we can’t add up disutility linearly for things like “being bumped into by particle type X”. In fact, having one molecule of air bump into me, or even however many I breathe in a day, seems like a good thing, so we can’t just talk about “the disutility of being bumped into by a kind of particle” at all.
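A toy curve (entirely my own invention, not anything from the thread) showing why no constant per-molecule disutility can work: utility as a function of collision count is non-monotonic, peaking around normal breathing and falling off in both directions, whereas any linear per-molecule accounting is necessarily monotonic.

```python
import math

def utility_of_collisions(n: float, normal: float = 1e27) -> float:
    """Toy utility: peaks at a 'normal breathing' collision count,
    negative far above (overpressure) or far below (suffocation) it."""
    d = math.log10(n) - math.log10(normal)  # log-scale distance from normal
    return 1.0 - d * d

for n in (1e20, 1e27, 1e35):
    print(f"{n:.0e}: {utility_of_collisions(n):+.1f}")
# 1e+20: -48.0  (far too little air)
# 1e+27: +1.0   (normal breathing)
# 1e+35: -63.0  (lethal overpressure)
```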
If your utility function ceases to correspond to utility at extreme values, isn’t it more of an approximation of utility than actual utility?
Yeah, of course. Why, do you know of some way to accurately access someone’s actually-existing Utility Function in a way that doesn’t just produce an approximation of an idealization of how ape brains work? Because me, I’m sitting over here using an ape brain to model itself, and this particular ape doesn’t even really expect to leave this planet or encounter or affect more than a few billion people, much less 3^^^3. So it’s totally fine using something accurate to a few significant figures, trying to minimize errors that would have noticeable effects on these scales.
Sure, you don’t need a model that works at the extremes—but when a model does hold for extreme values, that’s generally a good sign for the accuracy of the model.
Yes, I agree. Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that’s a bad sign for your model.
doesn’t that assign higher impact to five seconds of pain for a twenty-year old who will die at 40 than to a twenty-year old who will die at 120? Does that make sense?
Yeah, absolutely, I definitely agree with that.
That would be failing, but 3^^^3 people blinking != you blinking. You just don’t comprehend the size of 3^^^3.
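For anyone who wants a feel for why 3^^^3 defies comprehension, here is the standard Knuth up-arrow recursion. The definition is exact, but actually evaluating 3^^^3 is hopeless, so only the first steps are shown:

```python
def arrow(a: int, n: int, b: int) -> int:
    """a ^(n arrows) b in Knuth's up-arrow notation: n = 1 is plain
    exponentiation, and each additional arrow iterates the previous
    operation b - 1 times."""
    if n == 1:
        return a**b
    result = a
    for _ in range(b - 1):
        result = arrow(a, n - 1, result)
    return result

print(arrow(3, 1, 3))  # 3^3  = 27
print(arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s about 7.6 trillion levels tall.
# arrow(3, 3, 3) would never finish; don't run it.
```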
Well, it’s self-evident that that’s silly. So, there’s that.