This is one of those areas where I think the AI alignment frame can do a lot to clear up underlying confusion, which I suspect stems from not taking the thought experiment far enough to reach the point where you're no longer willing to bite the bullet. An AI aligned this way is encouraged to either:
Care about itself more than all of humanity (if total pleasure/pain, rather than the number of minds, is what matters), since it can turn itself into a utility monster whose pleasure and pain dwarf humanity's.
Alternatively, if all minds get more equal consideration, it encourages the AI to care far more about the future minds it plans to create than about all current humans, since at a certain point it can build massive computers to run simulated minds faster without humans in the way. On a more fundamental level, the matter and energy needed to support a human can support a much larger number of digital minds, especially if those minds are kept in a mindless blissed-out state and are thus far less intensive to run than a simulated human mind.
It just seems like there's no way around the fact that sufficiently advanced technology takes the repugnant conclusion to even more extreme ends: you must be willing to wipe out humanity in exchange for creating some sufficiently large number of blissed-out zombies who only barely clear whatever threshold you set for moral relevance.
More broadly, I think this post takes for granted that morality is reducible to something simple enough to allow for this sort of marginal revolution. And without moral realism being true, this avenue doesn't make sense as presented either.