“Back to the Future: Curing Past Suffering and S-Risks via Indexical Uncertainty”
I have uploaded a draft of my article about curing past suffering.
Abstract:
Long unbearable suffering in the past, and the agonies experienced in some future timelines in which a malevolent AI tortures people for idiosyncratic reasons (s-risks), are a significant moral problem. Such events either have already happened or will happen in causally disconnected regions of the multiverse, so it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past suffering exists. If we assume that there is no stable substrate of personal identity, and thus that a copy equals the original, then by creating many copies of the next observer-moment of a person in pain, in which she stops suffering, we could create indexical uncertainty about her future location and thus effectively steal her consciousness from her initial location and immediately relieve her suffering. However, to accomplish this for people who have already died, we need to perform this operation for all possible people, which requires an enormous amount of computation. Such computation could be performed by a future benevolent AI of galactic scale. Many such AIs could cooperate acausally by distributing parts of the work between them via quantum randomness. To ensure their success, they need to outnumber all possible evil AIs by orders of magnitude, and thus they need to convert most of the available matter into computronium in all universes where they exist and to cooperate acausally across the whole multiverse. Another option for curing past suffering is the use of wormhole time travel to send a nanobot into the past which will, after a period of secret replication, collect data about people and secretly upload them when their suffering becomes unbearable. https://philpapers.org/rec/TURBTT
I don’t see how this can be possible. One of the few things that I’m certain are impossible is eliminating past experiences. I’ve just finished eating strawberries, and I don’t see any possible way to eliminate the experience that I just had. You can delete my memory of it, or you can travel to the past and steal the strawberries from me, but then you’d just create an alternate timeline (if time travel to the past is possible, which I doubt). In neither case would you have eliminated my experience; at most you can make me forget it.
The proof that this is impossible is that people have suffered horribly many times before, and have survived to confirm that no one saved them.
We can dilute past experience and break chains of experience, so each painful moment becomes just a small speck in paradise.
The argument about people who survived and remember past suffering does not work here, because such a chain is only one of infinitely many chains of experience (in this model), and for any person it has a very small subjective probability.
In the same sense, everyone who became a billionaire has memories of always having been good at business. But if we take a random person from the past, his most probable future is to be poor, not a billionaire.
In the model discussed in the article, I suggest a way to change the expected future for any past person: by creating many simulations in which her life improves, starting from each painful moment of her real life.
Or are you telling me that person x remembers a very bad chain of experience, but might have indeed been saved by the Friendly AI, and the memory is now false? That’s interesting, but still impossible imo.
This is not what I meant.
Imagine a situation in which a person awaits execution in a remote fortress. If we use the self-sampling assumption (SSA), we could save him by creating 1000 exact copies of him in a safe location. SSA tells us that one should reason as if he were randomly selected from all of his copies. 1000 copies are in the safe location and 1 is in the fortress, so the person has 1000-to-1 odds of being outside the fortress, according to SSA. In that sense, he has been saved from the fortress. This situation is called indexical uncertainty.
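A minimal sketch of this arithmetic in Python, using the numbers from the example above:

```python
# SSA odds in the fortress example (numbers from the text above).
copies_safe = 1000    # exact copies created in the safe location
copies_fortress = 1   # the original, awaiting execution
total = copies_safe + copies_fortress

# Under SSA the observer reasons as if randomly sampled from all of his copies.
p_fortress = copies_fortress / total
print(f"P(I am the fortress copy) = {p_fortress:.4f}")      # ~0.001
print(f"Odds of being outside: {copies_safe} to {copies_fortress}")
```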
Now we apply this method of saving to past observer-moments in which people were suffering.
I see. Like I explain in the other comment that I just wrote, I don’t believe SSA works. You would just create 1000 new minds who would feel themselves saved and would kiss your feet (1000 clones), but the original person would still be executed with 100% chance.
It comes with a cost: you have to assume that SSA and the informational theory of identity are wrong, and therefore some other weird things could turn out to be true.
Indexical uncertainty implies that consciousness can travel through space and time between equal substrates (if such a thing even exists, considering chaos theory). I think that’s a lot weirder than simply assuming that consciousness is rooted in the brain, in a single brain, and that at best a clone will feel exactly the way you do, will even think he is you, but there’s no way you will be seeing through his eyes.
So yes, memory may not be everything. An amnesiac can still maintain a continuous personal identity, as long as he’s not an extreme case.
But I quite like your papers btw! Lots of interesting stuff.
Thanks!
Consciousness does not need to travel, as it is already there. Imagine two bottles of water. If one bottle is destroyed, the water remains in the other; it doesn’t need to travel.
Someone suggested calling this the “unification theory of identity”.
“The argument about people who survived and remember past suffering does not work here, because such a chain is only one of infinitely many chains of experience (in this model), and for any person it has a very small subjective probability.”
Then I think you would only be creating an enormous number of new minds. Among all those minds, indeed, very few would have gone through a very bad chain of experience. But that doesn’t mean that none would have. In fact, you haven’t reduced that number (the number of minds who have gone through a very bad chain of experience). You have only reduced their percentage among all existing minds, by creating a huge number of new minds without a very bad chain of experience. But that doesn’t in any way negate the existence of the minds who have gone through a very bad chain of experience.
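A toy illustration of this counting point (the numbers are hypothetical):

```python
# The absolute number of minds with a bad chain of experience is unchanged;
# only its share of all existing minds shrinks when new minds are added.
bad_chain_minds = 1        # minds that actually went through the bad chain
newly_created = 1000       # minds created without the bad chain
total_minds = bad_chain_minds + newly_created

print(bad_chain_minds)                 # 1 -> the count is not reduced
print(bad_chain_minds / total_minds)   # ~0.001 -> only the fraction is reduced
```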
I mean, you can’t undo chains of past experience; that’s just impossible. You can’t undo the past. You can go back in time and create new timelines, but that is just creating new minds. Nothing will ever undo the fact that person x experienced chain of experience y.
It depends on our assumption about the role of continuity in human identity. If we assume that continuity is based only on remembering the previous moment, then we can start new chains from any moment we choose.
An alternative view is that continuity of identity is based on a causal connection or a qualia connection. This view comes with ontological costs, close to the idea of the existence of an immaterial soul. Such a soul could be “saved” from the past using some technological tricks, and then we again have instruments to cure past suffering.
If I instantly cloned you right now, your clone would experience the continuity of your identity, but so would you. You can double the continuity (create new minds, which become independent from each other after doubling), but not translocate it.
If I clone myself and then kill myself, I would have created a new person with a copy of my identity, but the original copy, the original consciousness, still ceases to exist. Likewise, if you create 1000 paradises for each second of agony, you will create 1000 new minds which will feel themselves “saved”, but you won’t save the original copy. The original copy is still in hell.
Our best option is to do everything possible not to bring uncontrollable new technologies into existence until they are provably safe, and meanwhile we can eliminate all future suffering by eliminating all conscious beings’ ability to suffer, à la David Pearce (the abolitionist project).
[edited]
An extremely large number, if we do not use some simplification methods. I discuss these methods in the article; with them, the task becomes computable.
Without such tricks, it would be something like 100 life histories for every second of suffering. But since we care only about preventing very strong suffering, for normal people living normal lives there are not that many such seconds.
For example, if a person is dying in a fire, that is something like 10 minutes of agony, i.e. 600 seconds and 60,000 life histories which need to be simulated. That is a doable task for a future superintelligent AI.
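A back-of-the-envelope sketch of this estimate; the 100-histories-per-second figure is the model’s assumption:

```python
histories_per_second = 100          # assumed branching rate per second of strong suffering
agony_minutes = 10                  # e.g. a person dying in a fire
agony_seconds = agony_minutes * 60  # 600 seconds
histories_needed = agony_seconds * histories_per_second
print(histories_needed)             # 60000 simulated life histories
```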
[edited]
Why? If there are 60,000 futures in which I escaped the bad outcome, I can bet on it at odds of 60,000 to 1.
[edited]
I don’t get how you come to 10^51. If we want to save 10 billion people from the past, and for each we need to run 10^5 simulations, that is only 10^15, which a single Dyson sphere can do.
However, there is a way to acausally distribute the computations between many superintelligences in different universes, and in that case we can simulate all possible observers.
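A rough sketch of how such acausal work-splitting might look; the chunk count and the `simulate_observers` placeholder are illustrative assumptions, and an ordinary pseudo-random generator stands in for a genuine quantum source:

```python
import random

TOTAL_CHUNKS = 10**6   # hypothetical partition of the space of possible observers

def choose_chunk() -> int:
    """Pick which slice of observer-space this particular AI will simulate."""
    # Ideally a genuine quantum draw, so that different branches/universes
    # make uncorrelated choices and jointly cover all chunks.
    return random.randrange(TOTAL_CHUNKS)

my_chunk = choose_chunk()
# simulate_observers(my_chunk)   # placeholder for the actual simulation work
```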
[edited]
“The fact that you’re living a bearable life right now suggests that this is already the state.”
Interesting remark… Could you elaborate?
[edited]
I still don’t know what you meant by that other sentence. What’s being “the state”, and what does a bearable life have to do with it?
And what’s the “e” in (100/e)%?