Each of these issues could be the subject of a separate lengthy discussion, but I’ll try to address them as succinctly as possible:
Re: phlogiston. Yes, Eliezer’s account is inaccurate, though it seems like you have inadvertently made even more out of it. Generally, one recurring problem in the writings of EY (and various other LW contributors) is that they’re often too quick to proclaim various beliefs and actions as silly and irrational, without adequate fact-checking and analysis.
Re: interpersonal utility aggregation/comparison. I don’t think you can handwave this away—it’s a fundamental issue on which everything hinges. For comparison, imagine someone saying that your consequentialism is wrong because it’s contrary to God’s commands, and when you ask how we know that God exists and what his commands are, they handwave it by saying that theologians have some ideas on how to answer these questions. In fact, your appeal to authority is worse in an important sense, since people are well aware that theologians are in disagreement on these issues and have nothing like definite unbiased answers backed by evidence, whereas your answer will leave many people thinking falsely that it’s a well-understood issue where experts can provide adequate answers.
Re: economists and statisticians. Yes, nowadays it’s hard to deny that central planning was a disaster after it crumbled spectacularly everywhere, but read what they were saying before that. Academics are just humans, and if an ideology says that the world is a chaotic inefficient mess and experts like them should be put in charge instead, well, it will be hard for them to resist its pull. Nowadays this folly is finally buried, but myriad others along similar lines are actively being pursued, whose only redeeming value is that they are not as destructive in the short to medium run. (They still make the world uglier and more dysfunctional, and life more joyless and burdensome, in countless ways.) Generally, the idea that you can put experts in charge and expect that their standards of expertise won’t be superseded by considerations of power and status is naively utopian.
Re: procedures in place for violating heuristics. My problem is not with the lack of elegant philosophical rules. On the contrary, my objections are purely practical. The world is complicated and the law of unintended consequences is merciless and unforgiving. What’s more, humans are scarily good at coming up with seemingly airtight arguments that are in fact pure rationalizations or expressions of intellectual vanity. So, yes, the heuristics must be violated sometimes when the stakes are high enough, but given these realistic limitations, I think you’re way overestimating our ability to identify such situations reliably and the prudence of doing so when the stakes are less than enormous.
Re: Section 7. Basically, you don’t take the least convenient possible world into account. In this case, the LCPW is considering the most awful thing imaginable, assuming that enough people assign it positive enough value that the scales tip in their favor, and then giving a clear answer whether you bite the bullet. Anything less is skirting around the real problem.
Re: welfare of some more than others. I’m confused by your position: are you actually biting the bullet that caring about some people more than others is immoral? I don’t understand why you think it’s weird to ask such a question, since utility maximization is at least prima facie in conflict with both egoism and any sort of preferential altruism, both of which are fundamental to human nature, so it’s unclear how you can resolve this essential problem. In any case, this issue is important and fundamental enough that it definitely should be addressed in your FAQ.
Re: game theory and the thought process. The trouble is that consequentialism, or at least your approach to it, encourages thought processes leading to reckless action based on seemingly sophisticated and logical, but in reality sorely inadequate models and arguments. For example, the idea that you can assess the real-world issue of mass immigration with spherical-cow models like the one to which you link approvingly is every bit as delusional as the idea—formerly as popular among economists as models like this one are nowadays—that you can use their sophisticated models to plan the economy centrally with results far superior to those nasty and messy markets.
General summary: I think your FAQ should at the very least include some discussion of (2) and (6), since these are absolutely fundamental problems. Also, I think you should research more thoroughly the concrete examples you use. If you’ve taken the time to write this FAQ, surely you don’t want people dismissing it because parts of it are inaccurate, even if this isn’t relevant to the main point you’re making.
Regarding the other issues, most of them revolve around the general issues of practical applicability of consequentialist ideas, the law of unintended consequences (of which game-theoretic complications are just one special case), the reliability of experts when they are in positions where their ideas matter in terms of power, status, and wealth, etc. However you choose to deal with them, I think that even in the most basic discussion of this topic, they deserve more concern than your present FAQ gives them.
I will replace the phlogiston section with something else, maybe along the lines of the example of a medicine putting someone to sleep because it has a “dormitive potency”.
I agree with you that there are lots of complex and messy calculations that stand between consequentialism and correct results, and that at best these are difficult and at worst they are not humanly feasible. However, this idea seems to me fundamentally consequentialist—to make this objection, one starts by assuming consequentialist principles, but then says they can’t be put into action and so we should retreat from pure consequentialism on consequentialist grounds. The target audience of this FAQ is people who are not even at this level yet—people who don’t yet understand that you need to argue against certain “consequentialist” ideas on consequentialist grounds, and who instead believe those ideas can be dismissed by definition because consequences don’t matter. Someone who accepts consequentialism on a base level but then retreats from it on a higher level is already better informed than the people I am aiming this FAQ at. I will make this clearer.
This gets into the political side of things as well. I still don’t understand why you think consequentialism implies or even suggests centralized economic planning when we both agree centralized economic planning would have bad consequences. Certain decisions have to be made, and making them on consequentialist grounds will produce the best results—even if those consequentialist grounds are “never give the government the power to make these decisions because they will screw them up and that will have bad consequences”. I continue to think prediction markets allow something slightly more interesting than that, and I think if you disagree we can resolve that disagreement only on consequentialist grounds—e.g., would a government that tried to intervene where prediction markets recommended intervention create better consequences than one that didn’t? Nevertheless, I’ll probably end up deleting a lot of this section, since it seemed to give everyone an impression I don’t endorse.
Hopefully the changes I listed in my other comment on this thread should help with some of your other worries.
However, this idea seems to me fundamentally consequentialist—to make this objection, one starts by assuming consequentialist principles, but then saying they can’t be put into action and so we should retreat from pure consequentialism on consequentialist grounds.
Fair enough. Though I can grant this only for consequentialism in general, not utilitarianism—unless you have a solution to the fundamental problem of interpersonal utility comparison and aggregation. (In which case I’d be extremely curious to hear it.)
I still don’t understand why you think consequentialism implies or even suggests centralized economic planning when we both agree centralized economic planning would have bad consequences.
I gave it as a historical example of a once wildly popular bad idea that was a product of consequentialist thinking. Of course, as you point out, that was an instance of flawed consequentialist thinking, since the consequences were in fact awful. The problem however is that these same patterns of thinking are by no means dead and gone—it is only that some of their particular instances have been so decisively discredited in practice that nobody serious supports them any more. (And in many other instances, gross failures are still being rationalized away.)
The patterns of thinking I have in mind are more or less what you yourself propose as a seemingly attractive consequentialist approach to problems of public concern: let’s employ accredited experts who will use their sophisticated models to do a cost-benefit analysis and figure out a welfare-maximizing policy. Yes, this really sounds much more rational and objective compared to resolving issues via traditional customs and institutions, which appear to be largely antiquated, irrational, and arbitrary. It also seems far more rational than debating issues in terms of metaphysical constructs such as “liberties,” “rights,” “justice,” “constitutionality,” etc. Trouble is, with very few exceptions, it is usually a recipe for disaster.
Traditional institutions and metaphysical decision-making heuristics are far from perfect, but with a bit of luck, at least they can provide for a functional society. They are a product of cultural (and to some degree biological) evolution, and as such they are quite robust against real-world problems. In contrast, the experts’ models will sooner or later turn out to be flawed one way or another—the difficulty of the problems and the human biases that immediately rear their heads as soon as power and status are at stake practically guarantee this outcome.
Ultimately, when science is used to create policy, the practical outcome is that official science will be debased and corrupted to make it conform to ideological and political pressures. It will not result in elevation of public discourse to a real scientific standard (what you call reducing politics to math); that is an altogether utopian idea. So, for example, when that author whose article you linked uses sophisticated-looking math to “analyze” a controversial political issue (in this case immigration), he’s not bringing mathematical clarity and precision of thought into the public discourse. Rather, he is debasing science by concocting a shoddy spherical-cow model with no connection to reality that has some superficial trappings of scientific discourse; the end product is nothing more than Dark Arts. Of course, that was just a blog post, but the situation with real accredited expert output is often not much better.
Now, you can say that I have in fact been making a consequentialist argument all along. In some sense, I agree, but what I wrote certainly applies even to the minimalist interpretation of your positions stated in the FAQ.
Okay, thank you.