Why is this a good idea in any way other than the general position that “torturing other people for your own profit is a good idea so long as you don’t care about people”? Most of human history is built on the many being exploited for the benefit of the few. Why is this different?
I suppose people should have the right to willingly submit to torture for some small benefit to another person, which is what you’re saying you’d be willing to do. But the fact that a copy gets erased doesn’t make the experience any less real, and the fact that an identical copy gets to live doesn’t in any way help the copies being tortured.
It’s different because (1) I’m not hurting other people, only myself, and (2) I’m not depriving the world of my victim’s potential contributions as a free person.
I don’t actually care about the avoidance of torture as a terminal moral value.
But after the fork, your copy will quickly become another person, won’t he? After all, he’s being tortured and you’re not, and he is probably very angry at you for making this decision. So I guess the question is: If I donate $1 to charity for every hour you get waterboarded, and make provisions to balance out the contributions you would have made as a free person, would you do it?
In thought experiment land… maybe. I’d have to think carefully about what value I place on myself as a special case. In practice, I don’t believe that you can fully compensate for all of the unknown contributions I might have made to society.
Pavitra is a he? I must have guessed wrong.
It’s complicated.
What are your terminal moral values?
Also, why is hurting yourself different from hurting other people? And why is not hurting others a moral value, but not avoidance of torture?
Hurting others is an ethical problem, not a moral one. For example, I would probably be okay with hurting someone else at their own request. Avoidance of torture is a question of an entirely different type: it’s about what I value, not about how I think it’s appropriate to go about getting it.
I don’t have a formalization of my terminal values, but roughly:
I have noticed that sometimes I feel more conscious than other times—not just awake/dreaming/sleeping, but between different “awake” times. I infer that consciousness/sentience/sapience/personhood (whatever you want to call that thing we care about) is not a binary predicate, but a scalar. I want to maximize the degree of personhood that exists in the universe.
What’s the difference between ethics and morals?
So, if you create a person, and torture them for their entire life, that’s worth it?
By morals, I mean terminal values. By ethics, I mean advanced forms of strategy involving things like Hofstadter’s superrationality. I’m not sure what the standard LW jargon is for this sort of thing, but I think I remember reading something about deciding as though you were deciding on behalf of everyone who shares your decision theory.
If the most conscious person possible would be unhappy, I’d rather create them than not. The consensus among science fiction writers seems to be with me on this: a drug that makes you happy at the expense of your creative genius is generally treated as a bad thing.
Sounds like decision theory.
That link was what I needed. By ethics I mean, roughly, the difference between causal decision theory and the right answer.
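(As a toy sketch of that difference, assuming a standard one-shot prisoner’s dilemma with made-up payoffs: an agent that best-responds while holding the other player’s move fixed, in the spirit of causal decision theory, defects, while an agent that decides on behalf of everyone running its algorithm compares only the symmetric outcomes and cooperates. All names and payoff numbers below are illustrative assumptions, not anything from this exchange.)

```python
# Toy one-shot prisoner's dilemma. Payoff numbers are arbitrary assumptions
# chosen only for illustration.

# (my_move, their_move) -> my_payoff
PAYOFFS = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}


def cdt_style_choice() -> str:
    """Best-respond while holding the other player's move fixed.

    For each possible opponent move, "D" pays at least as much as "C",
    so defection dominates and this agent defects.
    """
    d_dominates = all(
        PAYOFFS[("D", them)] >= PAYOFFS[("C", them)] for them in ("C", "D")
    )
    return "D" if d_dominates else "C"


def superrational_choice() -> str:
    """Decide as if choosing for everyone running this same algorithm.

    If the other player runs the same procedure, both outputs match, so
    only the symmetric outcomes (C, C) and (D, D) are reachable; pick the
    better of those two.
    """
    return max(("C", "D"), key=lambda move: PAYOFFS[(move, move)])


if __name__ == "__main__":
    cdt = cdt_style_choice()
    sup = superrational_choice()
    print(f"CDT-style agents both play {cdt}; each gets {PAYOFFS[(cdt, cdt)]}")
    print(f"Superrational agents both play {sup}; each gets {PAYOFFS[(sup, sup)]}")
```

Running it prints mutual defection (payoff 1 each) for the best-response rule and mutual cooperation (payoff 3 each) for the decide-for-everyone-running-my-algorithm rule.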
Do you mean to equate here the degree to which something is a person, the degree to which a person is conscious, and the degree to which a person is a creative genius?
That’s what it reads like, but perhaps I’m reading too much into your comment.
That seems unjustified to me.
I don’t mean to equate them. They’re each a rough approximation to the thing I actually care about.