I suppose I will go with statements rather than a question: I suspect the returns to caring about ems are low; I suspect that defining, let alone preventing, torture of ems will be practically difficult or impossible; and I suspect that value systems that simply seek to minimize pain are poor value systems.
Fair enough, as long as you’re not presupposing that our value systems—which are probably better than “minimize pain”—are unlikely to have strong anti-torture preferences.
As for the other two points: you might have already argued for them somewhere else, but if not, feel free to say more here. It’s at least obvious that anti-em-torture is harder to enforce, but are you thinking it’s also probably too hard to even know whether a computation creates a person being tortured? Or that our notion of torture is probably confused with respect to ems (and possibly with respect to us animals too)?
If you express the preferences in terms of tradeoffs, it does not seem likely that the preference against the torture of ems will or should be ‘strong.’
Both. It seems difficult to define torture (and to decide what tradeoffs are worthwhile), and even if you could define torture, it seems there is no torture-free way to determine whether or not particular code is torturous.