This is a reasonable objection, and it may require Patch 4 for the whole method in order to escape "a billion stars for a billion years" (which is still a small cost for a universe-wide superintelligent AI that will control billions of billions of stars for tens of billions of years).
Fortunately, Patch 4 is simple: we model just one mind-history that complies with the known initial conditions, and use random variables for the unknown initial historical facts. In that case we get the correct distribution of random minds, but we spend computational resources simulating just one person. Some additional patches may be needed to escape intense suffering inside the simulation, such as using only one player character and turning off its subjective experience if pain rises above an unbearable threshold.
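As a toy sketch of this sampling idea (every fact name and prior below is hypothetical, invented only for illustration): the facts the historical record pins down are held fixed, while the unknown facts are drawn from prior distributions, so each run simulates only a single mind-history, yet repeated runs together follow the intended distribution.

```python
import random

# Toy sketch of "Patch 4" (all facts and priors here are made up).
# Known initial conditions stay fixed; everything the record does not
# pin down is drawn from a prior, so one run costs one simulated person.
KNOWN_FACTS = {"birth_year": 1850, "birthplace": "London"}

def sample_history(rng):
    """Generate one candidate mind-history consistent with the known facts."""
    history = dict(KNOWN_FACTS)                # fixed, from the record
    history["siblings"] = rng.randint(0, 8)    # unknown fact: uniform prior
    history["height_cm"] = rng.gauss(170, 7)   # unknown fact: Gaussian prior
    return history

one_history = sample_history(random.Random(42))  # a single cheap run
```

Re-running `sample_history` with fresh randomness yields the ensemble of possible histories, but any single resurrection only pays for one.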
To resurrect all the dead, we don't need to run many past histories; we need just one simulation of the entire human past, in which case all characters will be "player characters". Running one such simulation may be computationally intensive, but not a billion stars for a billion years.
The next step of such a game would be to resurrect "all possible people", which again could be done at small cost via multiverse-wide cooperation. In that case, creating new people, resurrecting past people, and giving life to all possible minds would be approximately the same action: running different simulations with different initial parameters.
Moreover, we may be morally obliged to resurrect all possible minds in order to save them from very improbable timelines in which an evil AI creates s-risks. I will address this type of multiverse-wide cooperation in the next post.
Just regarding: "which is still a small cost for a universe-wide superintelligent AI that will control billions of billions of stars for tens of billions of years"
All of the stars will be dead in 100 trillion years (although a capable organization will likely aestivate and continue most of its activities beyond that, which supposedly would give it much higher operating efficiency than anything imaginable now). There are only 50 billion stars in the local cluster, and as far as I know it's not physically possible to spread beyond it. All the rest is just a bunch of fading images that we'll never touch. (I tried to substantiate this, and the only simple account I could find was a YouTube video. Such is our internet: https://www.youtube.com/watch?v=ZL4yYHdDSWs is the best I could do.)
(And it doesn’t seem sound, to me, to guess that we’ll ever find a way around the laws of relativity just because we really want to.)
It still seems profoundly hard to tell how much of the distribution of a history generator is going to be fictional, and it wouldn't surprise me if the methods you have in mind generate mostly cosmically unlikely life-histories. You essentially have to get the measure of your results to match the measure of the people who really lived and died. We have access to a huge measure multiplier, but it's finite, and the error rate might be just as huge.
How many lives-worth of energy are you trading away for every resurrection?
Personally, I think it would not be computationally intense for an AI capable of creating past simulations (and it will create them anyway for instrumental reasons of its own), so it will more likely cost less than 1000 years and a small fraction of one star's energy. This is based on some ideas about the limits of computation and the power of the human brain; I think Bostrom had relevant calculations in his article about simulations.
However, I think that we are morally obliged to resurrect all the dead, as most people of the past dreamed about some form of life after death. They lived and died for us and for our capability to create advanced technology. We will pay that price back.
Shameless plug: you may enjoy my short fiction piece on a similar idea.