My feeling about this is that it’s okay to have some degree of arbitrariness in our preferences. Our preferences do not have a solid external foundation; they’re human things, and like basically all human things they will run into weird boundary cases when you let philosophers poke at them.
The good news is that I also think that hard-to-decide boundary cases are the ones that matter least, because I agree with others that moral uncertainty should behave a lot like regular uncertainty in this case (though I disagree with certain other applications of moral uncertainty).
The unfortunate thing about simulation as a ‘hard-to-decide boundary case’ is that, if we start doing it, we will probably do a LOT of it, which is a reason its moral implications are likely to matter.
If we start doing it, we’ll have an actual case to look at, instead of handwaving about the coulda/woulda/shoulda of entangling people and rocks.