How much effort is it worth to prevent the torture of ems?
Are you unsure about whether em torture is as bad as non-em torture? Or do you just mean to express that we take em torture too seriously? Or is this a question about how much we should pay to prevent torture (of ems or not), given that there are other worthy causes that need our efforts?
Or, to ask all those questions at once: do you know which empirical facts you need to know in order to answer this?
Are there empirical facts that can answer that question? It looks like a question about preferences to me, which are difficult to measure.
I think you’re right that many of the relevant empirical facts will be about your preferences. At risk of repeating myself, though, there are other facts that matter, like whether ems are conscious, how much it costs to prevent torture, and what better things we could be directing our efforts towards.
To partially answer your question (“how much effort is it worth to prevent the torture of ems?”): I sure do want torture to not happen, unless I’m hugely wrong about my preferences. So if preventing em torture turns out to not be worth a lot of effort, I predict it’s because there are other bad things that can be more efficiently prevented with our efforts.
But I’m still not sure how you wanted your question interpreted. Are you, for example, wondering whether you care about ems as much as non-em people? Or whether you care about torture at all? Or whether the best strategy requires putting our efforts somewhere else, given that you care about torture and ems?
I suppose I will go with statements rather than a question: I suspect the returns to caring about ems are low; I suspect that defining, let alone preventing, torture of ems will be practically difficult or impossible; and I suspect that value systems that simply seek to minimize pain are poor value systems.
Fair enough, as long as you’re not presupposing that our value systems—which are probably better than “minimize pain”—are unlikely to have strong anti-torture preferences.
As for the other two points: you might have already argued for them somewhere else, but if not, feel free to say more here. It’s at least obvious that anti-em-torture is harder to enforce, but are you thinking it’s also probably too hard to even know whether a computation creates a person being tortured? Or that our notion of torture is probably confused with respect to ems (and possibly with respect to us animals too)?
If you express the preferences in terms of tradeoffs, it does not seem likely that the preference against the torture of ems will or should be ‘strong.’
Both. It seems difficult to define torture (and to decide what tradeoffs are worthwhile), and even if you could define torture, it seems like there is no torture-free way to determine whether or not a particular piece of code is torturous.