I’m looking forward to live discussion of this topic at the Paris meetup. :)
Meanwhile, I’ve read through it more closely. Much of it seems, if not necessarily right, at least unobjectionable: it raises few red flags. On the other hand, I don’t think it makes me much the wiser about the advantages of consequentialism.
Paras 7.2 and 7.3 (the slavery and gladiator questions) left me with an odd impression. The “test” you propose in both cases is more or less the same as Rawls’ Veil of Ignorance. So at that point I was wondering: if you apply Rawls’ procedure to determine what a preferable social contract is, perhaps you’re more of a Rawlsian than a consequentialist. :) BTW, are you familiar with Rawls’ objections to (classical) utilitarianism?
Para 8.2 comes across as terribly naive, and “politics has been reduced to math” in particular seems almost designed to cause people to dismiss you. (A nitpick: the links at the end of 8.2 are broken.)
One thing that makes the essay confusing for me is the absence of a clear distinction between the questions “how do I decide what to do next” and “what makes for a desirable set of agreements among a large number of people”—between evaluating the morality of individual actions and choosing a social contract.
Another thing that’s left out is the issue of comparing or aggregating happiness, or “utility”, across different people. The one place where you touch on it, your response to the “utility monster” argument, does not match my own understanding of how a “utility monster” might be a problem. As I understood it, a “utility monster” isn’t someone who is to you as you are to an ant, but someone just like you. They just happen to insist that an ice cream makes them a thousand times happier than it makes you, so in all cases where it must be decided which of you should get an ice cream, they should always get it.
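To make the worry concrete (my own toy numbers, not anything from your essay): say an ice cream gives me 1 unit of utility and gives the self-described monster 1000 units. A straightforward sum-maximizer compares 1000 against 1 and hands the monster the ice cream every time, and since 1000 > N × 1 for any group of fewer than a thousand people like me, the monster outbids a sizeable crowd as well. The arithmetic is trivial; the problem is that nothing in the framework tells us whether to accept the claimed factor of a thousand in the first place, which is exactly the comparison problem I mean.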
Your analogy with optical illusions is apt, and it gives a good guideline for evaluating a proposed system of morality: in what cases does the proposed system lead me to change my mind about something that I previously did, or avoided doing, because of a moral judgement?
Interestingly, though, you give more examples that have to do with the social contract (gun control, birth control, organ donation policy, public funding of art, discrimination, slavery, etc.) than you give examples that have to do with personal decisions (giving to charities, trolley problems).
My own positions are contractarian, much more than they are deontological or consequentialist. I’m generally truthful, but not because it is “wrong” to lie or because I have a rule against it (for instance, I’m OK with lying in the context of a game such as Diplomacy, where the usual social conventions are known to be suspended, though I’d be careful about hurting others’ feelings through my play even in such a context). Nor is it because I’ve worked out the consequences: I don’t trust myself to compute the complete consequences of lying vs. not lying in each case, so a literal consequentialism isn’t an option for me.
However, I would prefer to live in a world where people can be relied upon to tell the truth, and for that I am willing to sacrifice the dubious advantage of being able to pull a fast one on other people from time to time. It is “wrong” to lie in the sense that if you didn’t know ahead of time what particular position you’d end up occupying in the world (e.g. a politician with power) but only knew some general facts about the world, you would find a contract that banned lying acceptable, and would be willing to let this contract punish lying with penalties. (At the same time, and for the same reason, I also put some value on privacy: being able to lie by omission about some things.)
I find XiXiDu’s remarks interesting. It seems to me that at present something like “might makes right” is descriptively true of us humans: our morality could be described purely in terms of agreements and generally reliable penalties for violating those agreements. “If you injure others, you can expect to be put in prison, because that’s the way society is currently set up; so if you’re rational, you’ll curb your desires to hurt others because your expected utility for doing so is negative”.
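To spell out that last clause in rough symbols of my own: if hurting someone would gain you g units of utility, you’d be caught with probability p, and the penalty would cost you c units, then your expected utility is g − p·c; the way society is set up keeps this negative by making p and c large enough that p·c > g.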
However, this sort of description doesn’t help in finding out what the social contract “should” be: it doesn’t help us identify which of our current agreements are wrong because they result from the moral equivalent of optical illusions “fooling us” into believing something that isn’t the case.
It also doesn’t help us imagine what the social contract could be if we weren’t the sort of beings we are: if the agreements we entered into were binding for reasons other than fear of penalties. This is a current limitation of our cognitive architectures, but not a necessary one.
(I find this a very exciting question, and at present the only place I’ve seen where it can even be discussed is LW: what kind of moral philosophy would apply to beings who can “change their own source code”.)
EDIT: Having read Vladimir_M’s reply below, I find that his comments capture much of what I wanted to say, only better.
“Paras 7.2 and 7.3 (the slavery and gladiator questions) left me with an odd impression. The “test” you propose in both cases is more or less the same as Rawls’ Veil of Ignorance. So at that point I was wondering: if you apply Rawls’ procedure to determine what a preferable social contract is, perhaps you’re more of a Rawlsian than a consequentialist. :) BTW, are you familiar with Rawls’ objections to (classical) utilitarianism?”
I can’t speak for Yvain, but as someone who fully agreed with his use of that test, I would describe myself as both a Rawlsian (in the sense of liking the “veil of ignorance” concept) and a Utilitarian. I don’t really see any conflict between the two.
I think maybe the difference between my view and that of Rawls is that I apply something like the Hedonic Treadmill fully (despite being a Preference Utilitarian), which essentially leads to Yvain’s responses.
...Actually, I suppose I practically define the amount of Utility in a world by how good it would be to live there, so maybe it would in fact be better to describe me as a Rawlsian. I still prefer to think of myself as a Utilitarian with a Rawlsian basis for my utility function, though (essentially I define the amount of utility in a world as “how desirable it would be to be born as a random person in that world”).
I think it’s that Utilitarianism sounds easier to use as a heuristic for decisions, whereas calling yourself a Rawlsian requires you to go one step further back every time you analyze a thought experiment.
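To put my definition in rough symbols (my own shorthand, not something Yvain commits to): for a world w with inhabitants 1, …, N, I’m treating its utility as U(w) = (1/N) · Σ_i u_i(w), i.e. the expected utility of a person drawn uniformly at random from that world, which is just the veil-of-ignorance evaluation restated as an average.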
I’ve responded to some of Vladimir_M’s comments, but here are a few things you touched on that he didn’t:
Utility monsters: if a utility monster just means someone who gets the same amount of pleasure from an ice cream as I get from an orgasm, then it just doesn’t seem that controversial to me that giving them an ice cream is as desirable as giving me an orgasm. Once we get to things like “their very experience is a million times stronger and more vivid than you could ever imagine”, we’re talking about a completely different neurological makeup that can actually hold more qualia, which is where the ant comes in.
I don’t see a philosophical distinction between the morality an individual should use and the morality a government should use (although there’s a very big practical distinction, since governments are single actors in their own territories and so can afford to ignore some game-theoretic and decision-theoretic principles that individuals have to take into account). The best state of the world is the best state of the world, no matter who’s considering it.
I use mostly examples from government because moral dilemmas on the individual level are less common, less standardized, and less well-known.