Consequentialism. In his Consequentialism FAQ, Yvain argues that, upon reflection, consequentialism follows from the intuitively obvious principles “Morality Lives In The World” and “Others Have Non Zero Value”. Rationality seems useful for recognizing that there’s a tension between these principles and other common moral intuitions, but this doesn’t necessarily translate into a desire to resolve the tension, or into a choice to resolve it in favor of these principles over others. So it seems that increased rationality does make one more likely to be a consequentialist, but is not by itself sufficient.
Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that affect people’s decisions in scenarios with a probabilistic element, and how reflection can lead one to the conclusion that one should organize one’s altruistic efforts to maximize expected value (in the technical sense), rather than making decisions driven by these biases. Here too, rationality seems useful for recognizing that one’s intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension. In this case, however, if one does seek to resolve the tension, the choice of expected value maximization over the alternatives is canonical, so rationality seems to take one further toward expected value maximization than toward consequentialism.
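To make “expected value (in the technical sense)” concrete, here is a minimal sketch of the kind of comparison at issue; the numbers are illustrative, in the spirit of the examples in Circular Altruism, not a quotation of them:

```python
def expected_value(outcomes):
    """Expected value of a gamble given as (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Illustrative choice between two altruistic interventions:
# Option A: save 400 lives with certainty.
# Option B: save 500 lives with probability 0.9, none otherwise.
option_a = [(1.0, 400)]
option_b = [(0.9, 500), (0.1, 0)]

print(expected_value(option_a))  # 400.0
print(expected_value(option_b))  # 450.0 -- higher expected value, despite the risk
```

Most people’s intuitions favor the certain option A, while the expected value criterion favors option B; that gap between intuition and calculation is the tension being discussed.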
This part seems a bit mixed up to me. This is partly because Yvain’s Consequentialism FAQ is itself a bit mixed up, often conflating consequentialism with utilitarianism. “Others have nonzero value” really has nothing to do with consequentialism; one can be a consequentialist and purely selfish, and one can be a non-consequentialist and altruistic. “Morality lives in the world” is a pretty good argument for consequentialism all by itself; “others have nonzero value” is just about what type of consequences you should favor.
What’s really mixed up here, though, is the end. When one talks about expected value maximization, one is always talking about the expected value over consequences; if you accept expected value maximization (for moral matters, anyway), you’re already a consequentialist. Basically, what you’ve written is kind of backwards. If, on the other hand, we assume that by “consequentialism” you really meant “utilitarianism” (which, for those who have forgotten, does not mean maximizing expected utility in the sense discussed here but rather something else entirely[0]), then it would make sense: rationality takes you further toward maximizing expected value (consequentialism) than toward utilitarianism.
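To put the distinction schematically (my own sketch, not wording from the FAQ or the original post), the decision rule under discussion is

\[
a^{*} = \arg\max_{a} \; \mathbb{E}\big[\, V(\text{outcome}) \mid a \,\big],
\]

where $V$ scores outcomes and the expectation is taken over their probabilities. Accepting this rule for moral choices already means that only $V(\text{outcome})$ matters, i.e. consequentialism; utilitarianism is the further, separate claim that $V$ aggregates everyone’s welfare, and a purely selfish $V$ would still be consequentialist.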
[0] Though it still is a flavor of consequentialism.
Good points. Is my intended meaning clear?
I mean, kind of? It’s still all pretty mixed-up though. Enough people get consequentialism, expected utility maximization, and utilitarianism mixed up that I really don’t think it’s a good thing to further confuse them.