In what sense can utilitarianism be true or false? What experiences would you anticipate in a world where utilitarianism is true that you wouldn’t anticipate in a world where it is false?
In the sense that we might want to use it or not use it as the driving principle of a superpowerful genie or whatever.
Casting morality as facts that can be true or false is a very convenient model.
I don’t think most people agree that useful = true.
Woah there. I think we might have a containment failure across an abstraction barrier.
Modelling moral propositions as facts that can be true or false is useful (same as with physical propositions). Then, within that model, utilitarianism is false.
“Utilitarianism is false because it is useful to believe it is false” is a confusion of levels, IMO.
Sure, sometimes it is, depending on your goals. For example, if you start a religion, modeling certain moral propositions as true is useful. If you run a country, proclaiming patriotic duty to be a moral truth is very useful.
I don’t see how this answers my question. And certainly not the original question.
I meant model::useful, not memetic::useful.
It doesn’t answer the original question. You asked in what sense it could be true or false, and I answered that it being “true” corresponds to it being a good idea to hand it off to a powerful genie, as a proxy test for whether it is the preference structure we would want. I think that does answer your question, albeit with some clarification. Did I misunderstand you?
As for the original question, in a world where utilitarianism were “true”, I would expect moral philosophers to make judgments that agreed with it, for my intuitions to find it appealing as opposed to stupid, and so on.
Naturally, this correspondence between “is” facts and “ought” facts is artificial and no more or less justified than, e.g., induction; we think it works.
Not explicitly, but most people tend to believe what their evolutionary and cultural adaptations tell them it’s useful to believe and don’t think too hard about whether it’s actually true.
If we use deontology, we can control the genie. If we use utilitarianism, we can control the world. I’m more interested in the world than the genie.
Be careful with that word. You seem to be using it to refer to consequentialism, but “utilitarianism” usually refers to a much more specific theory that you would not want to endorse simply because it’s consequentialist.
What do you mean by utilitarianism?
I mean that the genie makes his decisions based on the consequences of his actions. I guess consequentialism is technically more accurate. According to Wikipedia, utilitarianism is a subset of it, but I’m not really sure what the difference is.
Ok. Yeah, “consequentialism” or “VNM utilitarianism” is usually used for that concept, to distinguish it from the moral theory that says you should make choices consistent with a utility function constructed by some linear aggregation of “welfare” or whatever across all agents.
It would be a tragedy to adopt Utilitarianism just because it is consequentialist.
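A minimal sketch of that distinction, in Python, with invented agents and welfare numbers (nothing below comes from the thread itself): VNM consequentialism only requires that choices be consistent with some utility function over outcomes, while utilitarianism in the narrow sense additionally fixes that function as a linear aggregation of welfare across all agents.

```python
# Hypothetical outcomes, each listing the welfare of three agents.
# All numbers are made up purely for illustration.
outcome_a = [5, 5, 5]
outcome_b = [20, 1, 1]

def utilitarian_utility(welfare_per_agent):
    """Utilitarianism (narrow sense): utility is a linear aggregation
    (here an unweighted sum) of welfare across all agents."""
    return sum(welfare_per_agent)

# The utilitarian choice maximizes the aggregate:
print(max([outcome_a, outcome_b], key=utilitarian_utility))  # [20, 1, 1]

# A VNM-consistent agent could just as coherently maximize any other
# utility function, e.g. the welfare of the worst-off agent:
print(max([outcome_a, outcome_b], key=min))  # [5, 5, 5]
```

Both rankings are consequentialist and VNM-consistent; only the first is utilitarian in the narrow sense.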
I get consequentialism. It’s Utilitarianism that I don’t understand.
Minor nitpick: Consequentialism =/= VNM utilitarianism
Right, they are different. A creative rereading of my post could interpret it as talking about two concepts DanielLC might have meant by “utilitarianism”.
It seems to me that people who find utilitarianism intuitive do so because they understand the strong mathematical underpinnings. Sort of like how Bayesian networks determine the probabilities of complex events: Bayes’ theorem proves that a probability derived any other way forces a logical contradiction. Probability has to be Bayesian, even if it’s hard to demonstrate why; it takes more than a few math classes.
In that sense, it’s as possible for utilitarianism to be false as it is for probability theory (based on Bayesian reasoning) to be false. If you know the math, it’s all true by definition, even if some people have arguments (or, to be LW-sympathetic, think they do).
Utilitarianism would be false if such arguments existed. Most people try to create them by concocting scenarios in which the results obtained by utilitarian thinking lead to bad moral conclusions. But the claim of utilitarianism is that each time this happens, somebody is doing the math wrong, or else it wouldn’t, by definition and maths galore, be the conclusion of utilitarianism.
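A minimal illustration of the consistency claim, assuming nothing beyond Bayes’ theorem itself (the prior and likelihoods are invented): once those inputs are fixed, the posterior is fixed too, and any other value contradicts the laws of probability.

```python
# Made-up prior and likelihoods for a hypothesis H and evidence E.
p_h = 0.01              # P(H)
p_e_given_h = 0.90      # P(E | H)
p_e_given_not_h = 0.05  # P(E | not H)

# Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e
print(f"P(H|E) = {p_h_given_e:.3f}")  # ~0.154; any other value is incoherent
```

On the analogy being drawn here, utilitarianism makes the same kind of claim: given the welfare numbers, the conclusion is forced by the arithmetic.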
In the former world, I anticipate that making decisions using utilitarianism would leave me satisfied upon sufficient reflection, and more reflection after that wouldn’t change my opinion. In the latter world, I don’t.
So you defined true as satisfactory? What if you run into a form of the repugnant conclusion, as most forms of utilitarianism do? Does that mean utilitarianism is false? Furthermore, if you compare consequentialism, virtue ethics, and deontology by this criterion, some or all of them can turn out to be “true” or “false”, depending on where your reflection leads you.
Yep. Yep. Yep.
What experiences would you anticipate in a world where chocolate being tasty is true that you wouldn’t anticipate in a world where it is false?
A large chocolate industry in the former, and chocolate desserts as well. In the latter, there might be a chocolate industry if people discover that chocolate is useful as a supplement, but chocolate extracts would be sold in such a way as to conceal their flavor.
A tasty experience whenever I eat chocolate.