I tend to model Paperclippers as conscious, simply because it’s easier to use bits of my own brain as a black box. So naturally my instinct is to value its existence the same as any other modified human mind’s (although not more than any lives it might endanger).
However, IIRC, the original “paperclip maximizer” was supposed to be nonsentient; probably still worth something in the absence of “life”, but tricky to assign a value to based on my intuitions (is it even possible for there to be a sufficiently smart being that I don’t value the same way I do “conscious” ones?).
In other words, I have managed to confuse my intuitions here.