From the point of view of trying to reduce personal s-risk, trying to improve the world’s prospects seems like a way to convince yourself you’re doing something helpful without meaningfully reducing personal s-risk. I have significant uncertainty about even the order of magnitude by which I could reduce personal s-risk through activism, research, etc., but I’d imagine it would be less than 1%. To be clear, this does not mean I think doing these things is a waste of time; in fact, it’s probably among the highest expected-utility things anyone can do. It’s just not a particularly effective way to reduce personal s-risk. However, this plausibly changes if you factor in that being someone who helped make the singularity go well could put you in a favourable position post-singularity.
Regarding resurrection, do you know what the LessWrong consensus is on the position that continuity of consciousness is what makes someone the same person as five minutes ago? My impression is that this idea doesn’t really make sense, but it’s an intuitive one, and it’s a cause of some of my uncertainty about the feasibility of resurrection.
I’m surprised you think that a good singularity would let me stay dead if I had decided to commit suicide out of fear of s-risk. Presumably the benevolent AI/s would know that I would want to live, no?
Also, just a reminder that my post was about what to do conditional on the world starting to end (think nanofactories and geoengineering, with the AI/s being obviously not aligned). This means that the obvious paths to utopia are already ruled out by that point, although perhaps we could still get a slice of the lightcone for acausal-trade / decision-theoretic reasons.
Also yeah, whether suicide is rational or not in this situation obviously comes down to your personal probabilities of various things.
> Regarding resurrection, do you know what the LessWrong consensus is on the position that continuity of consciousness is what makes someone the same person as five minutes ago? My impression is that this idea doesn’t really make sense, but it’s an intuitive one, and it’s a cause of some of my uncertainty about the feasibility of resurrection.
Yes, we don’t really know how reality works, that’s one of the problems. We don’t even know if we are in a simulation. So, it’s difficult to be certain.
> I’m surprised you think that a good singularity would let me stay dead if I had decided to commit suicide out of fear of s-risk. Presumably the benevolent AI/s would know that I would want to live, no?
It did occur to me that they would try to “wake you up” once (if that’s feasible at all) and ask whether you really meant to stay dead (while respecting your free will and refraining from manipulation).
And it also occurred to me that it’s not clear whether resurrection is possible, or whether a bad singularity would bother to resurrect you even if it is.
So, in reality, one needs to have a better idea about all kinds of probabilities, because the actual “tree of possible scenarios” is really complicated (and we know next to nothing about those).
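The kind of weighing this implies can be sketched as a toy expected-utility calculation over a flattened “tree of possible scenarios”. To be clear, the scenario names, probabilities, and utilities below are made-up placeholders for illustration, not estimates anyone in this thread has endorsed:

```python
# Toy expected-utility sketch over a set of mutually exclusive scenarios.
# All numbers here are illustrative placeholders, not real estimates.

scenarios = {
    # name: (probability, utility)
    "positive singularity":      (0.50,  100.0),
    "extinction (no suffering)": (0.35,    0.0),
    "s-risk scenario":           (0.15, -500.0),
}

def expected_utility(tree):
    # Probabilities of mutually exclusive scenarios must sum to 1.
    assert abs(sum(p for p, _ in tree.values()) - 1.0) < 1e-9
    return sum(p * u for p, u in tree.values())

# Even a modest probability of a very bad outcome can dominate the sum.
print(expected_utility(scenarios))  # -25.0 with these placeholder numbers
```

The point of the sketch is just that the conclusion is extremely sensitive to the inputs: small shifts in the probability or disutility assigned to the bad branch flip the sign of the whole calculation, which is why one’s personal probabilities matter so much here.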
So, I ended up noting that my estimates of the chances of a positive singularity might be higher than yours, and this might color my judgement. This does reflect my uncertainty about all this...
> conditional on the world starting to end
Ah, I had not realized that you were talking not just about the transformation being sufficiently radical (“the end of the world as we know it”), but about it specifically being bad...
My typical approach to all that is to consider non-anthropocentric points of view (this allows one to take a step back and to think in a more “invariant way”). In this sense, I suspect that “universal X-risk” (that is, the X-risk which threatens to destroy everything including the AIs themselves) dominates (I am occasionally trying to scribble something about that: https://www.lesswrong.com/posts/WJuASYDnhZ8hs5CnD/exploring-non-anthropocentric-aspects-of-ai-existential).
In this sense, while it is possible to have scenarios where huge suffering is inflicted but the “universal X-risk” is somehow avoided, my (unsubstantiated) intuition is that this is not too likely. The need to control the “universal X-risk” and to protect the interests of individual members of the AI ecosystem requires a degree of “social harmony” of some sort within the AI ecosystem.
I think that if an AI ecosystem permits massive suffering to be inflicted within itself, this would increase the risks to all its members and to the ecosystem as a whole. It’s difficult to imagine something like this going on for a long time without a catastrophic blow-up. (Although, of course, what do I know...)
I doubt that anthropocentric approaches to AI alignment are likely to fare well, but I think that a harmonious AI ecosystem in which all individuals (including AIs, humans, and so on) are sufficiently respected and protected might be feasible; I tried to scribble something to that effect here: https://www.lesswrong.com/posts/5Dz3ZrwBzzMfaucrH/ai-57-all-the-ai-news-that-s-fit-to-print?commentId=ckYsqx2Kp6HTAR22b