I’m happy to accept the sadistic conclusion as normally stated, and in general I find “what would I prefer if I were behind the Rawlsian Veil and going to be assigned at random to one of the lives ever actually lived” an extremely compelling intuition pump. (Though there are other edge cases that I feel weirder about, e.g. is a universe where everyone has very negative utility really improved by adding lots of new people of only somewhat negative utility?)
As a practical matter, though, I’m most concerned that total utilitarianism could (not just theoretically but actually, through decisions that might be locked in within our lifetimes) turn a “good” post-singularity future into a Malthusian near-hell where everyone is significantly worse off than I am now, whereas the sadistic conclusion and other contrived counterintuitive edge cases are unlikely to resemble decisions humanity or an AGI we create will actually face. Preventing the lock-in of total utilitarian values therefore seems only a little less important to me than preventing extinction.
Another question. Imagine a universe containing either 5 or 10 people. If they’re all being tortured equally badly at a level of −100 utility, are you sure you’re indifferent about how many people exist? Isn’t fewer better here?
Yeah, that’s essentially the example I mentioned that seems weirder to me, but I’m not sure, and at any rate it seems much further from the sorts of decisions I actually expect humanity to have to make than the need to avoid Malthusian futures.