You can call a subset of your preferences moral, that’s fine. Say, eating chocolate ice cream, or helping a starving child. Let’s take a randomly chosen “morally right action” A.
That, given your second paragraph, would have to be a preference which, what, maximizes Mark’s utility, regardless of what the rest of his utility function actually looks like?
It seems trivial to construct a utility function (given any such action A) such that doing A does not maximize said utility function. Give Mark such a utility function and you’ve got yourself a reductio ad absurdum.
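For concreteness, a minimal sketch of such a construction (the names U and B here are mine, purely illustrative): given the action A, pick any other available action B and define

    U(B) = 1, and U(x) = 0 for every other action x, including A.

Then A does not maximize U, so an agent whose utility function is U does not maximize its utility by doing A.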
So, if you define a subset of preferences named “morally right” such that any such action needs to maximize (edit: or even ‘not minimize’) an arbitrary utility function, then obviously that subset is empty.
That, given your second paragraph, would have to be a preference which, what, maximizes Mark’s utility, regardless of what the rest of his utility function actually looks like?
If Mark is capable of acting morally, he would have a preference for moral action which is strong enough to override other preferences. However, that is not really the point. Even if he is too weak-willed to do what the FAI says, he has no grounds to object to the FAI.
It seems trivial to construct a utility function (given any such action A) such that doing A does not maximize said utility function. Give Mark such a utility function and you’ve got yourself a reductio ad absurdum.
I can’t see how that amounts to more than the observation that not every agent is capable of acting morally. Ho hum.
So, if you define a subset of preferences named “morally right” such that any such action needs to maximize (edit: or even ‘not minimize’) an arbitrary utility function, then obviously that subset is empty.
I don’t see why. An agent should want to do what is morally right, but that doesn’t mean an agent would want to. Their utility function might not allow them to. But how could they object to being told what is right? The fault, surely, lies in themselves.
An agent should want to do what is morally right, but that doesn’t mean an agent would want to. Their utility function might not allow them to. But how could they object to being told what is right? The fault, surely, lies in themselves.
They can object because their preferences are defined by their utility function, full stop. That’s it. They are not “at fault”, or “in error”, for not adopting some other preferences that some other agents deem to be “morally correct”. They are following their programming, as you follow yours. Different groups of agents share different parts of their preferences, think Venn diagram.
If the oracle tells an agent “this action maximizes your own utility function, you cannot understand how”, then yes, the agent should follow the advice.
If the oracle told an agent “do this, it is morally right”, the non-confused agent would ask “do you mean it maximizes my own utility function?”. If yes, “thanks, I’ll do that”, if no “go eff yourself!”.
You can call an agent “incapable of acting morally” because you don’t like what it’s doing; it needn’t care. It might just as well call you “incapable of acting morally” if your circles of supposedly “morally correct actions” don’t intersect.