Am I correct in thinking that you welcome money pumps?
A partial order isn’t the same thing as a cyclical ordering, and the existence of a money pump would certainly tend to disambiguate a human’s preferences in its vicinity, thereby creating a total ordering within that local part of the preference graph. ;-)
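To make that distinction concrete, here's a minimal sketch (all names hypothetical) that models a preference graph as a dict mapping each option to the options strictly preferred over it: a partial order is just an acyclic graph with some pairs left unranked, whereas a money pump requires an actual cycle you can be walked around.

```python
from __future__ import annotations

def find_cycle(prefers: dict[str, set[str]]) -> list[str] | None:
    """Return a preference cycle (a money-pump opportunity), or None."""
    visited: set[str] = set()
    on_stack: list[str] = []

    def dfs(node: str) -> list[str] | None:
        if node in on_stack:                      # back edge: we found a cycle
            return on_stack[on_stack.index(node):] + [node]
        if node in visited:
            return None
        visited.add(node)
        on_stack.append(node)
        for better in prefers.get(node, set()):
            if (cycle := dfs(better)) is not None:
                return cycle
        on_stack.pop()
        return None

    for start in list(prefers):
        if (cycle := dfs(start)) is not None:
            return cycle
    return None

# A partial order: apple vs. orange is left unranked, but the graph is
# acyclic, so there is no money pump even though preferences aren't total.
partial = {"apple": {"cake"}, "orange": {"cake"}}
assert find_cycle(partial) is None

# A cyclical ordering: every single trade looks like a strict improvement,
# so an agent can be walked around the loop for a fee at each step.
cyclic = {"apple": {"orange"}, "orange": {"pear"}, "pear": {"apple"}}
assert find_cycle(cyclic) == ["apple", "orange", "pear", "apple"]
```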
Hypothetically, would it cause a problem if a human somehow disambiguated her entire preference graph?
If conscious processing is required to do that, you probably don’t want to disambiguate all the possible tortures where you’re not really sure exactly which one is worse.
(I mean, unless the choice is actually going to come up, is there really a reason to know for sure which kind of pliers you’d prefer to have your fingernails ripped out with?)
Now, if you limit that preference graph to pleasant experiences, that would at least be an improvement. But even then, you still get the subjective experience of a lifetime of doing nothing but making difficult decisions!
These problems go away if you leave the preference graph ambiguous (wherever it’s currently ambiguous), because then you can definitely avoid simulating conscious experiences.
(Note that this also isn’t a problem if all you want to do is get a rough idea of what positive and/or negative reactions someone will initially have to a given world state, which is not the same as computing their totally ordered preference over some set of possible world states.)
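For what it's worth, here's a toy illustration of that last distinction (all names and scores made up): a rough reaction check probes each world state independently, whereas producing a total order forces every ambiguous pair to be resolved.

```python
from functools import cmp_to_key

# Made-up valences; the only thing that matters is the one unranked pair.
VALENCE = {"beach day": 0.9, "paperwork": -0.4, "pliers A": -0.95, "pliers B": -0.95}
AMBIGUOUS = {frozenset({"pliers A", "pliers B"})}   # pairs left unranked

def rough_reactions(states):
    # One cheap, independent probe per state: no pair is ever compared,
    # so the ambiguity never has to be resolved.
    return {s: ("positive" if VALENCE[s] > 0 else "negative") for s in states}

def compare(a, b):
    if frozenset({a, b}) in AMBIGUOUS:
        # A total order has no room for "not sure": sorting forces the
        # very deliberation the ambiguity was sparing us.
        raise ValueError(f"must disambiguate {a!r} vs {b!r}")
    return (VALENCE[a] > VALENCE[b]) - (VALENCE[a] < VALENCE[b])

states = list(VALENCE)
print(rough_reactions(states))                # fine: never touches the ambiguous pair
try:
    sorted(states, key=cmp_to_key(compare))   # a total order demands resolution
except ValueError as err:
    print("total ordering failed:", err)
```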