Suppose that you gained the power to both discern objective morality, and to alter your own source code. You use the former ability, and find that the basic morally correct principle is maximizing the suffering of sentient beings. Do you alter your source code to be in accordance with this?
I’ve never understood this argument.
It’s like a slaveowner having a conversation with a time-traveler, and declaring that they don’t want to be nice to slaves, so any proof the time-traveler could show them is by definition invalid.
If the slaveowner is an ordinary human being, they already have values about how to treat people in their in-group, which they navigate around with respect to slaves by not treating slaves as in-group members. If they could be induced to see slaves as in-group members, they would probably become nicer to slaves whether they intended to or not (although I don’t think it’s necessarily the case that everyone who’s sufficiently acculturated to slavery could be induced to see slaves as in-group members).
If the agent has no preexisting values which can be called into service of the ethics they’re being asked to adopt, I don’t think they could be induced to want to adopt them.
Sure, but if there’s an objective morality, it’s inherently valuable, right? So you already value it. You just haven’t realized it yet.
It gets even worse when people try to refute wireheading arguments with this, or with statements like “if it were moral to [bad thing], would you do it?”
What evidence would suggest that objective morality in such a sense could or does exist?
I’m not saying moral realism is coherent, merely that this objection isn’t.
I don’t think it’s true that if there’s an objective morality, agents necessarily value it whether they realize it or not though. Why couldn’t there be inherently immoral or amoral agents?
… because the whole point of an “objective” morality is that rational agents will update to believe they should follow it? Otherwise we might just as easily be such “inherently immoral or amoral agents”, and we wouldn’t want to discover such an objective “morality”.
Well, if it turned out that something like “maximize suffering of intelligent agents” were written into the fabric of the universe, I think we’d have to conclude that we were inherently immoral agents.
The same evidence that persuades you that we don’t want to maximize suffering in real life is evidence that it wouldn’t turn out to be written into the fabric of the universe, I guess.
Side note: I’ve never seen anyone try to defend the position that we should be maximizing suffering, whereas I’ve seen all sorts of eloquent and mutually contradictory defenses of more, um, traditional ethical frameworks.