Fair point. If they’re slightly different, it should be only a slight problem, and TDT would help with that. If they’re significantly different, it would be a significant problem, and you might be able to make a case that one is evil.
If you can call someone “evil” even though they may altruistically work to increase the well-being of others, as they perceive it to be, then what word would you use for people who are sadists and actively seek to hurt others, or for people who would sacrifice the well-being of millions of people for their own selfish benefit?
Your labelling scheme doesn’t serve me in treating people appropriately: it doesn’t tell me which people I ought to consider enemies and which I ought to treat as potential allies, nor which people strive to increase total (or average) utility and which strive to decrease it.
So what’s its point? Why consider these people “evil”? It almost seems to me as if you’re working backwards from a conclusion, starting with the assumption that all good people must have the same goals, and therefore someone who differs must be evil.
It depends on whether you interpret “good” and “evil” as words derived from “should,” as I was doing. Good people are those who behave as they should, and evil people are those who behave as they shouldn’t. There is only one right thing to do.
But if you want to define evil another way, honestly, you’re probably right. I would note, though, that I think “might be able to make a case that” is enough qualification.
So, more clearly:
If everyone’s extrapolated values are in accordance with mine, then information is our only problem, and we don’t need moral or decision theories to deal with that.
If our extrapolated values differ, then they may differ a little, in which case we have a small problem; a medium amount, in which case we have a big problem; or a lot, in which case we have a huge problem. I can rate people on a continuous scale by how well their values accord with my extrapolated values. The ones at the top I can work with, and the ones at the bottom I can work against. However, TDT says we should be nicer to those at the bottom so that they’ll be nicer to us, whereas CDT does not, and therein lies the difference.
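A rough way to see that contrast is the one-shot prisoner’s dilemma against an agent running a similar decision procedure. The sketch below (Python, with made-up payoffs and an assumed “how correlated are our decisions” parameter, not anyone’s canonical formulation of TDT or CDT) shows the CDT-style agent defecting no matter what, while the TDT-style agent cooperates when it treats the other agent’s choice as sufficiently correlated with its own:

```python
# Illustrative payoffs for a one-shot prisoner's dilemma.
PAYOFF = {  # (my_move, their_move) -> my_payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_choice(their_move_distribution):
    """CDT-style: hold the other player's move fixed and best-respond.
    Since D dominates C against any fixed opponent move, this always defects."""
    def expected(my_move):
        return sum(p * PAYOFF[(my_move, theirs)]
                   for theirs, p in their_move_distribution.items())
    return max(["C", "D"], key=expected)

def tdt_choice(correlation):
    """TDT-style (sketched): if the other agent runs a similar decision
    procedure, choosing C makes it more likely they choose C too.
    `correlation` is the assumed probability they mirror my choice."""
    def expected(my_move):
        mirrored = PAYOFF[(my_move, my_move)]
        unmirrored = PAYOFF[(my_move, "D" if my_move == "C" else "C")]
        return correlation * mirrored + (1 - correlation) * unmirrored
    return max(["C", "D"], key=expected)

print(cdt_choice({"C": 0.5, "D": 0.5}))  # "D", regardless of the distribution
print(tdt_choice(0.9))                   # "C" when the agents are very similar
print(tdt_choice(0.1))                   # "D" when they are too different
```

The correlation parameter is doing the work of “how well do their extrapolated values (and decision procedures) accord with mine”: high enough, and being nicer pays; low enough, and even the TDT-style agent works against them.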