Name a third alternative that is actually an answer
“Doesn’t matter”.
First of all you’re ignoring the existence of morally neutral questions. Should I scratch my butt? Lessee, would an omniscient perfectly moral being scratch his/her/its butt? Oh dear, I think we’re in trouble now… X-D
Second, you’re assuming atomicity of actions, and that’s a bad assumption. In your world actions are very limited—they can be done or not done, but they cannot be done partially, slightly modified, or just done in a few different ways.
Third, you’re assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences.
Fourth, for the great majority of dilemmas in life (e.g. “Should I take this job?”, “Should I marry him/her?”, “Should I buy a new phone?”) the answer “what an omniscient moral being would choose” is perfectly useless.
Which part do you object to?
The concept of CEV seems to me to be the direct equivalent of “God’s will”—handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the “coherent” part, though I also have great doubts about the “extrapolated” part.
would an omniscient perfectly moral being scratch his/her/its butt?
(Side note: this conversation is taking a rather strange turn, but whatever.)
If its butt feels itchy, and it would prefer for its butt to not feel itchy, and the best way to make its butt not feel itchy is to scratch it, and there are no external moral consequences to its decision (like, say, someone threatening to kill 3^^^3 people iff it scratches its butt)… well, it’s increasing its own utility by scratching its butt, isn’t it? If it increases its own utility by doing so and doesn’t decrease net utility elsewhere, then that’s a net increase in utility. Scratch away, I say.
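To spell out the arithmetic behind that last step, here is a toy sketch; it assumes net utility is just a sum of individual utilities, and the numbers are invented for illustration:

```python
# Toy sketch (invented numbers): net utility as a simple sum of individual utilities.
# If one term goes up and every other term stays the same, the sum goes up.
before = {"itchy_being": 5.0, "everyone_else": 100.0}
after = {"itchy_being": 6.0, "everyone_else": 100.0}  # scratched; no external consequences

assert sum(after.values()) > sum(before.values())  # net increase in utility
```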
Second, you’re assuming atomicity of actions, and that’s a bad assumption. In your world actions are very limited—they can be done or not done, but they cannot be done partially, slightly modified, or just done in a few different ways.
Sure. I agree I did just handwave a lot of stuff with respect to what an “action” is… but would you agree that, conditional on having a good definition of “action”, we can evaluate “actions” morally? (Moral by human standards, of course, not Pebblesorter standards.)
Third, you’re assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences.
Agreed, but if you come up with a way to make good/moral decisions in the idealized situation of omniscience, you can generalize to uncertain situations simply by applying probability theory.
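One way to cash out that generalization (not necessarily the only one) is ordinary expected-utility maximization: the omniscient rule “pick the action whose actual outcome is best” becomes “pick the action whose expected outcome is best.” A minimal sketch, with the actions, probabilities, and utilities all invented purely for illustration:

```python
# Minimal sketch (toy actions; all probabilities and utilities invented):
# under uncertainty, rank actions by expected utility instead of by the
# (unknown) actual outcome an omniscient agent would see.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "take_the_job": [(0.7, 10.0), (0.3, -5.0)],  # might work out, might not
    "stay_put": [(1.0, 2.0)],                    # known, modest payoff
}

best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best, expected_utility(actions[best]))  # take_the_job 5.5
```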
Fourth, for the great majority of dilemmas in life (e.g. “Should I take this job?”, “Should I marry him/her?”, “Should I buy a new phone?”) the answer “what an omniscient moral being would choose” is perfectly useless.
Again, I agree… but then, knowledge of the Banach-Tarski paradox isn’t of much use to most people.
The concept of CEV seems to me to be the direct equivalent of “God’s will”—handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the “coherent” part, though I also have great doubts about the “extrapolated” part.
Fair enough. I don’t have enough domain expertise to really analyze your position in depth, but at a glance, it seems reasonable.
The assumption that morality boils down to utility is a rather huge assumption :-)
would you agree that, conditional on having a good definition of “action”, we can evaluate “actions” morally?
Conditional on having a good definition of “action” and on having a good definition of “morally”.
you can generalize to uncertain situations simply by applying probability theory
I don’t think so, at least not “simply”. An omniscient being has no risk and no risk aversion, for example.
isn’t of much use to most people
Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries… :-)
The assumption that morality boils down to utility is a rather huge assumption :-)
It’s not an assumption; it’s a normative statement I choose to endorse. If you have some other system, feel free to endorse that… but then we’ll be discussing morality, and not meta-morality or whatever system originally produced your objection to Jiro’s distinction between good and bad.
on having a good definition of “morally”
Agree.
An omniscient being has no risk and no risk aversion, for example.
Well, it could have risk aversion. It’s just that risk aversion never comes into play during its decision-making process due to its omniscience. Strip away that omniscience, and risk aversion very well might rear its head.
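To illustrate (a toy sketch; the concave utility function and all the numbers are made up for the example): over certain outcomes a risk-averse agent ranks things exactly the way a risk-neutral one does, so omniscience makes the aversion invisible, but over gambles the two can disagree.

```python
from math import sqrt

def risk_neutral(x):
    return x

def risk_averse(x):
    return sqrt(x)  # concave, so it dislikes spread around the mean

def expected_utility(u, lottery):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in lottery)

gamble = [(0.5, 0.0), (0.5, 100.0)]  # fair coin: nothing or 100
certain = [(1.0, 49.0)]              # sure thing worth 49

# With certain outcomes (the omniscient case) the two agents always agree,
# because sqrt is monotone: a better outcome is still a better outcome.
assert (risk_averse(49.0) > risk_averse(36.0)) == (risk_neutral(49.0) > risk_neutral(36.0))

# Under uncertainty they can split: risk-neutral prefers the gamble (50 vs 49),
# risk-averse prefers the sure thing (expected sqrt payoff: 5 vs 7).
assert expected_utility(risk_neutral, gamble) > expected_utility(risk_neutral, certain)
assert expected_utility(risk_averse, gamble) < expected_utility(risk_averse, certain)
```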
Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries… :-)
I disagree. Take the following two statements:
Morality, properly formalized, would be useful for practical purposes.
Morality is not currently properly formalized.
There is no contradiction in these two statements.
I disagree. Take the following two statements:
Morality, properly formalized, would be useful for practical purposes.
Morality is not currently properly formalized.
There is no contradiction in these two statements.
But they have a consequence: Morality currently is not useful for practical purposes.
That’s… an interesting position. Are you willing to live with it? X-)
You can, of course, define morality in this particular way, but why would you do that?