(That’s only a paperclipper with no discounting of the future, BTW.)
Paperclippers are not evolutionarily viable, nor is there any plausible evolutionary explanation for paperclippers to emerge.
You can posit a single artificial entity becoming a paperclipper via bad design. But in the present context of many agents trying to agree on ethics, that single entity has only a small voice.
It’s legit to talk about paperclippers in the context of the danger they pose if they become a singleton. It’s not legit to bring them up outside that context as a bogeyman to dismiss the idea of agreement on values.