Modelling moral propositions as facts that can be true or false is useful
Sure, sometimes it is, depending on your goals. For example, if you start a religion, modeling certain moral propositions as true is useful. If you run a country, proclaiming patriotic duty to be a moral truth is very useful.
In the sense that we might want to use it or not use it as the driving principle of a superpowerful genie or whatever.
I don’t see how this answers my question. And it certainly doesn’t answer the original question:
What experiences would you anticipate in a world where utilitarianism is true that you wouldn’t anticipate in a world where it is false?
Sure, sometimes it is, depending on your goals. For example, if you start a religion, modeling certain moral propositions as true is useful. If you run a country, proclaiming patriotic duty to be a moral truth is very useful.
I meant model::useful, not memetic::useful.
I don’t see how this answers my question. And it certainly doesn’t answer the original question.
It doesn’t answer the original question. You asked in what sense it could be true or false, and I answered that it being “true” corresponds to it being a good idea to hand it off to a powerful genie, as a proxy test for whether it is the preference structure we would want. I think that does answer your question, albeit with some clarification. Did I misunderstand you?
As for the original question: in a world where utilitarianism were “true”, I would expect moral philosophers to make judgments that agreed with it, my intuitions to find it appealing rather than stupid, and so on.
Naturally, this correspondence between “is” facts and “ought” facts is artificial and no more or less justified than, e.g., induction; we think it works.