When I wrote U(N,I,T), I was trying to refer to the preferences of the person being presented with the scenario; if the person being asked the question were a wicked sadist, he might prefer more suffering to less. Specifically, I was trying to come up with a “least common denominator” list of the factors that can matter in this kind of scenario. Apparently “how close I am to the person who suffers the pain” is another significant factor in the preferences, at least for Richard.
If we stipulate that, say, the pain is to be experienced by a human living on a planet orbiting Alpha Centauri 100,000,000 years from now, then does it make sense that N, I, and T provide enough information to fully define a preference function for the individual answering the question? [For example, all else being equal, I prefer the world in which you (a stranger) don’t stub your toe tomorrow at 11:00 AM to the one in which you do stub your toe but that is otherwise identical in every way I care about.] If you literally don’t care at all about humans near Alpha Centauri living 100,000,000 years in the future, then your preference function would be constant.
There also seem to be some relevant bounds on N, I, and T. There are only so many humans that exist (or will exist), which bounds N. There is a worst possible pain that a human brain can experience, which provides an upper bound for I. Finally, a human has a finite lifespan, which bounds T. (In the extreme case, T is bounded by the lifetime of the universe.)
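To make the shape of this concrete, here is a minimal Python sketch of one possible U(N, I, T). The multiplicative form, the caring_weight knob, and the specific numeric bounds are all my own illustrative assumptions rather than anything the scenario pins down; the only point is that bounded N, I, and T plus a “how much I care” factor are enough to produce either a non-trivial preference ordering or a constant function.

```python
# A minimal sketch of one possible U(N, I, T), purely for illustration.
# The multiplicative form, the caring_weight parameter, and every numeric
# bound below are placeholder assumptions, not part of the scenario.

MAX_PEOPLE = 10**11          # N bounded by how many humans exist or will exist (placeholder figure)
MAX_INTENSITY = 1.0          # I normalized so 1.0 is the worst pain a human brain can experience
MAX_DURATION_YEARS = 120.0   # T bounded by a human lifespan (or, in the extreme, the universe's)

def disutility(n_people: float, intensity: float, duration_years: float,
               caring_weight: float = 1.0) -> float:
    """Toy dis-preference for a pain scenario: larger output = more dispreferred.

    caring_weight stands in for factors like "how close I am to the person
    who suffers the pain"; setting it to 0.0 models someone who literally
    doesn't care, which makes the function constant in N, I, and T.
    """
    # Clamp each argument to the bounds discussed above.
    n = min(max(n_people, 0.0), MAX_PEOPLE)
    i = min(max(intensity, 0.0), MAX_INTENSITY)
    t = min(max(duration_years, 0.0), MAX_DURATION_YEARS)
    return caring_weight * n * i * t

# All else being equal, I disprefer the world where a stranger stubs a toe:
assert disutility(1, 0.01, 1e-6) > disutility(0, 0.0, 0.0)
# Someone with zero caring_weight has a constant (here, zero) function:
assert disutility(10**6, 1.0, 50.0, caring_weight=0.0) == 0.0
```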