I see no reason to think a paperclip maximizer would need to have any particular meta-ethics. There are possible paperclip maximizers that have one and ones that don't. As a rule of thumb, an agent's normative ethics, that is, what it cares about, be it human flourishing or paperclips, does not logically constrain its meta-ethical views.
That’s a nice and unexpected answer, so I’ll continue asking questions I have no clue about :-)
If metaethics doesn’t influence paperclip maximization, then why do I need metaethics? Can we point out the precise difference between humans and paperclippers that gives humans the need for metaethics? Is it the fact that we’re not logically omniscient about our own minds, or is it something deeper?
Perhaps I misunderstood. There are definitely possible scenarios in which metaethics could matter to a paperclip maximizer. It’s just that answering “what meta-ethics would the best paperclip maximizer have?” isn’t any easier than answering “what is the ideal metaethics?”. Varying an agent’s goal structure doesn’t change the question.
That said, if you think humans are just like paperclip maximizers except that they're trying to maximize something else, then you're already 8/10ths of the way to moral anti-realism (Come! Take those last two steps, the water is fine!).
Of course it's also the case that meta-ethics probably matters more to humans than to paperclip maximizers. In particular, metaethics matters for humans because of individual moral uncertainty, group and individual moral change, differences between individual moralities, and the overall complexity of our values. There are probably analogous issues for paperclip maximizers (such as how to resolve uncertainty over what counts as a paperclip, or how to deal with agents that are ignorant of the ultimate value of paperclips), and thinking about them pumps my anti-realist intuitions.
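To make that "uncertainty over what counts as a paperclip" point concrete, here is a toy sketch (my own illustration; the candidate definitions, credences, and counts are all made up) of one way a maximizer might handle it, by taking an expectation over candidate definitions rather than needing any meta-ethical stance:

```python
# Toy sketch: resolve definitional uncertainty by maximizing the expected
# paperclip count under a credence distribution over candidate definitions.

# Hypothetical candidate definitions of "paperclip" and the maximizer's credences in each.
definitions = {
    "bent_wire_any_size": 0.6,
    "standard_office_clip_only": 0.3,
    "anything_that_holds_paper": 0.1,
}

# Hypothetical counts of how many "paperclips" each action produces under each definition.
action_outcomes = {
    "make_tiny_wire_loops": {
        "bent_wire_any_size": 1000,
        "standard_office_clip_only": 0,
        "anything_that_holds_paper": 0,
    },
    "make_office_clips": {
        "bent_wire_any_size": 400,
        "standard_office_clip_only": 400,
        "anything_that_holds_paper": 400,
    },
}

def expected_clips(action: str) -> float:
    """Expected paperclip count for an action, averaged over definitional uncertainty."""
    return sum(p * action_outcomes[action][d] for d, p in definitions.items())

best = max(action_outcomes, key=expected_clips)
print(best, expected_clips(best))  # picks whichever action scores best in expectation
```

Nothing in that calculation requires the maximizer to have views about what grounds the value of paperclips, which is part of what pumps the intuition above.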
Is it the fact that we’re not logically omniscient about our own minds, or is it something deeper?
Well, there’s certainly that. Also, human algorithms for decision-making can feel different from simply looking up a utility—the algorithm can be something more like a “treasure map” for locating morality, looking out at the world in a way that can feel as if morality was a light shining from outside.
Consider dealings with agents that have morals that conflict with your own. Obviously, major value conflicts preclude co-existence. Let's assume it is a minor conflict: Bob believes consuming cow's milk and beef at the same meal is immoral.
It is possible to develop instrumental or terminal values that settle how much you tolerate Bob's differing value, without reference to any meta-ethical theory. But I think that meta-ethical considerations play a large role in how tolerance of value conflict is resolved, for some people at least.
Not obvious. (How does this “preclusion” work? Is it the best decision available to both agents?)
Well, if I don’t include that sentence, someone nitpicks by saying:
I was trying to preempt that by making it clear that McH gets imprisoned or killed, even by moral anti-realists (unless they are exceptionally stupid).