I was pretty freaked out about similar ideas in 2013, but I’m over it now. (Mostly. I’m not signed up for cryonics even though a lot of my friends are.)
If you can stop doing philosophy and futurism, I recommend that. But if you can’t … um, how deep into personal-identity reductionism are you? You say you’re “selfishly” worried about bad things “happening to you”. As is everyone (and for sound evolutionary reasons), but it doesn’t really make sense if you think sub specie æternitatis. If an atom-for-atom identical copy of you, is you, and an almost identical copy is almost you, then in a sufficiently large universe where all possible configurations of matter are realized, it makes more sense to think about the relative measure of different configurations rather than what happens to “you”. And from that perspective …
Well, there’s still an unimaginably large amount of suffering in the universe, which is unimaginably bad. However, there’s also an unimaginably large number of unimaginably great things, which are likely to vastly outnumber the bad things for very general reasons: lots of agents want to wirehead; almost no one wants to anti-wirehead. Some agents are altruists; almost no one is a general-purpose anti-altruist (as opposed to feeling spite towards some particular enemy). The only reason you would want to hurt other agents (rather than being indifferent to them except insofar as they are made out of atoms that can be used for other things) would be as part of a war—but superintelligences don’t have to fight wars, because it’s a Pareto improvement to compute what would have happened in a war and divide resources accordingly. And there are evolutionary reasons for a creature like you to be more unable to imagine the scope of the great things.
So, those are some reasons to guess that the universe isn’t as Bad as you fear. But more importantly—you’re not really in a position to know, let alone do anything about it. Even if the future is Bad, this-you locally being upset about it doesn’t make it any better. (If you’re freaked out thinking about this stuff, you’re not alignment researcher material anyway.) All you can do is work to optimize the world you see around you—the only world you can actually touch.
Thanks for your response; just a few of my thoughts on your points:
If you *can* stop doing philosophy and futurism
To be honest, I’ve never really *wanted* to be involved with this. I only made an account here *because* of my anxieties, and wanted to try to talk myself through them.
If an atom-for-atom identical copy of you, *is* you, and an *almost* identical copy is *almost* you, then in a sufficiently large universe where all possible configurations of matter are realized, it makes more sense to think about the relative measure of different configurations rather than what happens to “you”.
I personally don’t buy that theory of personal identity. It seems to me that if the biological me that’s sitting here right now isn’t *feeling* the pain, it’s not worth worrying about as much. Like, I can *imagine* that a version of me might be getting tortured horribly or experiencing endless bliss, but my consciousness doesn’t (as far as I can tell) “jump” over to those versions. Similarly, were *I* to get tortured, it’s unlikely I’d care about what’s happening to the “other” versions of me. The “continuity of consciousness” theory *seems* stronger to me, although admittedly it’s not something I’ve put a lot of thought into. I wouldn’t want to use a teleporter for the same reasons.
*And* there are evolutionary reasons for a creature like you to be *more* unable to imagine the scope of the great things.
Yes, I agree that it’s possible that the future could be just as good as an infinite-torture future would be bad, and that my intuitions are somewhat lopsided. But I do struggle to find that comforting. Were an infinite-torture future realised (whether it be a SignFlip error, an insane neuromorph, etc.), the fact that I could’ve ended up in a utopia wouldn’t console me one bit.