This is a different and cleaner question, because it avoids issues with intelligent life evolving again, and with the paperclipper creating other kinds of life and intelligence for scientific or other reasons in the course of pursuing paperclip production.
And:
I can’t figure out how any of the welfare theories you specify could make paperclippers better than nothing?
Desires and preferences about paperclips can be satisfied. They can sense, learn, grow, reproduce, etc.
Do you personally take that seriously, or is it something someone else believes? Human experience with desire satisfaction and “learning” and “growth” isn’t going to transfer over to how it is for paperclip maximizers, and a generalization claiming this is still something that matters to us is unlikely to succeed. I predict an absence of any there there.
Yes, I believe that the existence of the thing itself, setting aside impacts on other life that it creates or interferes with, is better than nothing, although far short of the best thing that could be done with comparable resources.
Desires and preferences about paperclips can be satisfied.
But they can also be unsatisfied. Earlier you said “this can cut both ways” but only on the “hedonistic welfare theories” bullet point. Why doesn’t “can cut both ways” also apply for desire theories and objective list theories? For example, even if a paperclipper converts the entire accessible universe into paperclips, it might also want to convert other parts of the multiverse into paperclips but is powerless to do so. If we count unsatisfied desires as having negative value, then maybe a paperclipper has net negative value (i.e., is worse than nothing)?
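To make the bookkeeping behind this objection explicit, here is a minimal sketch of a sign-symmetric desire theory; the symbols $S$, $U$, $w$, and $\lambda$ are my own notation for illustration, not anything the commenters committed to:

$$W \;=\; \sum_{d \in S} w(d) \;-\; \lambda \sum_{d \in U} w(d), \qquad w(d) > 0,\ \lambda \ge 0,$$

where $S$ is the set of satisfied desires, $U$ the set of unsatisfied ones, $w$ a weighting, and $\lambda$ the penalty on frustration. With $\lambda = 0$ (unsatisfied desires count for nothing), a universe-spanning paperclipper comes out positive. With any $\lambda > 0$, a desire set that ranges over the whole multiverse while only the accessible universe gets converted can make the second sum dominate and drive $W$ below zero, which is exactly the “worse than nothing” reading.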
And:
This is far from obvious. There are definitely people who claim “morality” is satisfying the preferences of as many agents as you can.
If morality evolved for game-theoretic reasons, there might even be something to this, although I personally think it’s too neat to endorse.