So, elsewhere someone just brought up moral luck. I’m wondering how this relates to the Yudkowskian view on morality (I forget what he called it), and I’d like to invite someone to think about it and perhaps post on it. If no one else does, I might eventually be motivated to do so myself. There may be some real light to shed on moral luck, specifically on how far the Control Principle is valid, by reference to Yudkowsky’s framework.
Yudkowsky briefly addressed moral luck:
Let’s say someone gravely declares, of some moral dilemma [...] that there is no moral answer; both options are wrong and blamable; whoever faces the dilemma has had poor moral luck. Fine, let’s suppose this is the case: then when you cannot be innocent, justified, or praiseworthy, what will you choose anyway?
Lately I’ve actually been thinking that maybe we should split up morality into two concepts, and deal with them separately: one referring to moral sentiments, and another referring to what we actually do. It seems like a lot of discussions of utilitarianism versus deontology treat them as two arbitrary viewpoints or positions, but insofar as my thinking has trended utilitarian lately, it hasn’t been because I’m attracted to a utilitarian position, but because Cox’s theorem [edit: sic] forces it. Even if I draw up a set of rights that I think must not be violated, I’m still going to have to make decisions under uncertainty, which I would guess means acting to minimize the expected number of rights-violations.
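To make the decision rule in that comment concrete, here is a minimal sketch of “choose the action that minimizes the expected number of rights violations.” The actions, probabilities, and violation counts are invented for illustration and are not from the discussion.

```python
# Illustrative sketch only: a toy "minimize expected rights violations" chooser.
# All actions, probabilities, and violation counts below are made up.

def expected_violations(lottery):
    """lottery: list of (probability, number_of_violations) pairs."""
    return sum(p * v for p, v in lottery)

actions = {
    # each action is a lottery over how many rights end up violated
    "enforce_quarantine": [(0.9, 1), (0.1, 0)],   # almost certainly violates one right
    "do_nothing":         [(0.7, 0), (0.3, 10)],  # usually fine, occasionally a disaster
}

best = min(actions, key=lambda a: expected_violations(actions[a]))
print(best)  # "enforce_quarantine": 0.9 expected violations versus 3.0 for doing nothing
```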
Isn’t that what people have always done? Maybe not explicitly. To explicitly make the split you’re speaking of would just help people to deny reality, and do what they need to do, albeit in highly suboptimal and destructive ways, while still holding on to incoherent moral codes that continue to harm them in other ways.
But it beats letting ourselves be wiped out. I worry about the fact that Western civilization is saying that an increasing number of rights must not be violated under any circumstances, at a time when we are facing an increasing number of existential risks. There are some things that we don’t let ourselves see, because seeing them would mean acknowledging that somebody’s rights will have to be violated.
For instance, plenty of people simultaneously believe that Israel must stay where it is, and that Israel must not commit genocide. Reality might accommodate them (e.g., if we discover an alternative energy source that impoverishes the other Middle Eastern states). But I think it’s more likely that it won’t.
Interesting. Do you have 20 words on why these are mutually exclusive?
As technology advances, it takes fewer and fewer resources to wreak an equivalent amount of devastation. Soon, small groups of people will be able to annihilate nations. In most cultures, only a very small percentage of people would like to do so; trying to detect and control those individuals may be a workable strategy.
Israel, however, is near several cultures where most people would like to kill everyone in Israel (based on, among other things, public rejoicing instead of statements of regret when Israelis are killed for any reason, opinion polls showing that most people in some countries say they have positive opinions of Al Qaeda, and the success in popular elections of groups including Hezbollah and Hamas which have the destruction of Israel as part of their platform). The annihilation of Israel is not a goal for a few crazy individuals, but a mainstream cultural goal.
Demographic threat. Twenty-seven words: if Israel stays where it is, the growth of Arab citizenry will pose a threat to its existence as a Jewish state with a Jewish demographic majority.
I would consider that one of the better possible outcomes, as long as it leads to a conversion from a race-based state to a pluralistic society rather than to cattle cars and smokestacks.
It’s not really a race-based state: while one can’t arbitrarily choose one’s race, under the Law of Return one can choose to convert to Judaism and instantly gain Israeli citizenship upon immigrating.
Cox’s theorem doesn’t deal with utility, only plausibility. The utility stuff comes from looking at preference relations—some big names there are von Neumann, Morgenstern and L.J. Savage.
Another relevant keyword: “Dutch book”.
Right, I knew that. Thanks.
I don’t think that’s quite the same usage of “moral luck”. In the technical sense, it’s when you judge, for example, someone who was driving drunk and hit a person more harshly than someone who was driving drunk and didn’t hit anyone, all else being equal. In other words, factors entirely outside the agent’s control make the same action more or less blameworthy. Another example, from the link:
For example, consider Nazi collaborators in 1930′s Germany who are condemned for committing morally atrocious acts, even though their very presence in Nazi Germany was due to factors beyond their control (Nagel 1979). Had those very people been transferred by the companies for which they worked to Argentina in 1929, perhaps they would have led exemplary lives. If we correctly morally assess the Nazi collaborators differently from their imaginary counterparts in Argentina, then we have a case of circumstantial moral luck.
I don’t see the difference between this usage and Zack’s/Eliezer’s: the definition given in the SEP link is:
Moral luck occurs when an agent can be correctly treated as an object of moral judgment despite the fact that a significant aspect of what she is assessed for depends on factors beyond her control.
A situation where all of an agent’s options are blameworthy seems quite clearly to fall within this category.
OK, I suppose it counts as an instance, though I’m not convinced Eliezer intended the phrase in that sense. But it’s certainly one of the instances I’m less interested in.
Agreed.
Utility functions can be very flexible. E.g., U = 1 iff there are 0 rights violations, U = 0 otherwise.
Then you really will try to make sure no rights get violated.
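As an aside, here is a toy sketch of that binary utility function, with invented lotteries and hypothetical helper names. It makes explicit that maximizing its expectation amounts to maximizing the probability that zero violations occur, which is what the objection below turns on.

```python
# Toy sketch of the binary utility function described above:
# U = 1 if no rights are violated, U = 0 otherwise. All numbers are invented.

def U(violations):
    return 1 if violations == 0 else 0

def expected_U(lottery):
    """lottery: list of (probability, number_of_violations) pairs."""
    return sum(p * U(v) for p, v in lottery)

# Maximizing E[U] just maximizes the probability of zero violations...
safe_ish = [(0.6, 0), (0.4, 1)]          # one violation, 40% of the time
terrible = [(0.6, 0), (0.4, 1_000_000)]  # a million violations, 40% of the time

# ...so this utility function is indifferent between the two lotteries:
print(expected_U(safe_ish), expected_U(terrible))  # 0.6 0.6
```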
And if you cannot act such that 0 rights are violated? Your function would seem to suggest that you are indifferent between killing a dictator and committing the genocide he would have caused, since the number of rights violations is positive in both cases (arguably, of course).
Correct. But it’s still an implementable policy. I didn’t say it was sensible!
It seems as though you’re reading this hypothetical utility function properly.
It does occur to me that I wasn’t objecting to the hypothetical existence of said function, only that rights aren’t especially useful if we give up on caring about them in any world where we cannot prevent literally all violations.
It seems like a non sequitur in response to Roko’s illustration of what a utility function can be used to represent.
I was connecting it to and agreeing with Zack M Davis’ thought about utilitarianism. Even with Roko’s utility function, if you have to choose between two lotteries over outcomes, you are still minimizing the expected number of rights violations. If you make your utility function lexicographic in rights, then once you’ve done the best you can with rights, you’re still a utilitarian in the usual sense within the class of choices that minimizes rights violations.
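A rough sketch of that lexicographic rule, again with invented outcomes and hypothetical names: rank lotteries first by expected rights violations, and only break ties by ordinary expected welfare.

```python
# Rough sketch of a lexicographic rule: minimize expected rights violations first,
# then act like an ordinary utilitarian among the remaining options. Numbers invented.

def expected(lottery, f):
    return sum(p * f(outcome) for p, outcome in lottery)

def score(lottery):
    """Lower is better: compare expected violations first, then negated expected welfare."""
    ev = expected(lottery, lambda o: o["violations"])
    ew = expected(lottery, lambda o: o["welfare"])
    return (ev, -ew)

lotteries = {
    "A": [(1.0, {"violations": 1, "welfare": 100})],
    "B": [(1.0, {"violations": 1, "welfare": 5})],
    "C": [(1.0, {"violations": 2, "welfare": 1000})],
}

# A and B tie on violations, so welfare decides; C never wins despite its high welfare.
print(min(lotteries, key=lambda k: score(lotteries[k])))  # "A"
```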
Hey, exactly 500 comments.