Then on the other side of this question you could consider creating new sentiences who couldn’t suffer at all. But why would these take priority over those who already exist?
From the point of view of those who’ll actually create the minds, it’s not a choice between somebody who already exists and a new mind. It’s a choice between two kinds of new minds: one modeled after a mind that once existed, and one modeled after a better design.
I’m proposing to create these minds, if I survive. Many will want this. If we have FAI, it will help me, by its definition.
I would rather live in a future afterlife that has my grandparents in it than your ‘better designs’. Better by whose evaluation? I’d also say that my sense of ‘better’ outweighs any other sense of ‘better’ - my terminal values are my own.
One might also invoke Big Universe considerations to say that even the “new” kind of mind has already existed in some corner of the universe.
I couldn’t care less about some corner of the universe that is not causally connected to my corner. The big world stuff isn’t very relevant: this is a decision between two versions of our local future, one with people we love in it, and one without.