More generally, you might say “My utility function is the inverse of whatever function you use to make big numbers.” If Omega starts chatting about the Busy Beaver, fine, your utility function in that region is inverse-Busy-Beaver. Again, this is probably not very smart, but I suspect it’s psychologically realistic; this seems to be something like the way human minds actually deal with such big numbers. In some sense that’s why we invented logarithms in the first place!
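To spell that out (the equation is my own gloss, not something stated above): if the quantity is handed to you as the output of some fast-growing function $f$, the proposal is that your utility in that region grows only like $f^{-1}$, i.e.

$$ U\big(f(n)\big) \;\propto\; f^{-1}\big(f(n)\big) \;=\; n. $$

For $f(n) = 10^n$ that is just the logarithm, which is the “why we invented logarithms” point; if Omega switches to $f = \mathrm{BB}$, the Busy Beaver function, the same rule gives you inverse-Busy-Beaver utility in that region.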
This (being psychologically realistic, not being my actual utility function) seems very plausible.
It seems to me that they will almost all say that yes, they are willing to tolerate a SPECK to save someone else from TORTURE. (Obviously among that many people you’re going to find any number of trillions who won’t, but no worries, they’ll be at most a few percent of the whole.) Ought you not to take their preference into account?
This form of the question considers the other people’s speckings to be held fixed. (What if each is willing to suffer 25 years of torture to spare the other guy 50?)
I didn’t say that their preference should be the only criterion, just that it’s something to think about. As a practical matter, I do think that not many humans are going to volunteer for 25 years of torture whatever the payoff, except perhaps parents stepping in for their children.
I don’t think holding other speckings constant is a bug. If you ask the 3^^^^3 people “should I choose TORTURE or SPECKS”, you are basically just delegating the decision to the standard human discounting mechanisms, and likely going to get back SPECKS. That’s a quite separate question from “Are you, personally, willing to suffer SPECKS to avoid TORTURE”. But perhaps it can be modified a bit, like so: “Are you, personally, willing to suffer SPECKS, given that there will be no TORTURE if, and only if, at least 90% of the population answers yes?”
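For concreteness, here is a minimal sketch of the decision rule that modified question describes; the function name, the 90% default, and the toy numbers are illustrative assumptions of mine, not anything specified above:

```python
def choose_outcome(yes_votes: int, population: int, threshold: float = 0.90) -> str:
    """Decision rule for the modified question: TORTURE is avoided if and
    only if at least `threshold` of the population answers yes to
    personally suffering a SPECK."""
    if population <= 0:
        raise ValueError("population must be positive")
    return "SPECKS" if yes_votes / population >= threshold else "TORTURE"

# Toy usage: 100 respondents standing in for 3^^^^3, 92 of whom answer yes.
print(choose_outcome(yes_votes=92, population=100))  # -> SPECKS
print(choose_outcome(yes_votes=80, population=100))  # -> TORTURE
```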