and if the future has enough negentropy to resimulate the past. (That last point is a new source of doubt for me; I kinda just assumed it was true until a friend told me it might not be.)
Yeah, I don’t know about this one either.
Even if it’s possible, it might be incredibly wasteful, in terms of how much negentropy (= future prosperity for new people) we’ll need to burn in order to rescue one person. And the more people we rescue, the less value we get out of each rescue, since the negentropy burned cuts into the rescuees’ own extended lifespans too. So we’d need to assign greater (dramatically greater?) value to extending the life of someone who’d previously existed, compared to letting a new person live for the same length of time.
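To put that condition in the bluntest toy terms (all symbols here are placeholders I’m making up, not anything principled): we’d need a weighting factor $k$ on previously-existing people satisfying

$$k \cdot Y_{\text{rescued}} > Y_{\text{new}} \quad\Longleftrightarrow\quad k > \frac{Y_{\text{new}}}{Y_{\text{rescued}}},$$

where $Y_{\text{rescued}}$ is the extra person-years the rescuee gets out of being brought back and $Y_{\text{new}}$ is the person-years of new lives the same negentropy budget could have funded instead. If resimulation is very expensive, that ratio is enormous, and so is the $k$ we’d have to endorse.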
“Lossy resurrection” seems like a more negentropy-efficient way of handling that, by the same token that acausal norms are likely a better way to handle acausal trade than low-level simulations, and that babble-and-prune isn’t the most efficient way of doing general-purpose search.
Like, the full-history resimulation will surely still not allow you to narrow things down to one branch. You’d get an equivalence class of them, each of them consistent with all available information. Which, in turn, would correspond to a probability distribution over the rescuee’s mind; not a unique pick.
Given that, it seems plausible that there’s some method by which we can get to the same end result – constrain the probability distribution over the rescuee’s mind as much as the available data allows – without actually running the full simulation.
Depends on what the space of human minds looks like, I suppose. Whether it’s actually much lower-dimensional than a naive analysis of possible brain-states suggests.
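For intuition only, here’s a toy version of what I mean by “constrain the distribution with the available data, skip the full simulation”: a plain importance-sampling posterior over a made-up low-dimensional “mind space”. Every name and number in it is invented for illustration; it’s a sketch of the shape of the idea, not a claim about how an actual rescue would work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "mind space": each candidate mind is a short parameter vector.
# (Purely illustrative -- real minds are obviously not 8-dimensional.)
DIM = 8
N_CANDIDATES = 100_000
candidates = rng.normal(size=(N_CANDIDATES, DIM))

# Surviving records of the person: noisy linear "readings" of the true
# parameters (writings, recordings, other people's memories, etc.).
true_mind = rng.normal(size=DIM)
N_RECORDS = 5
probes = rng.normal(size=(N_RECORDS, DIM))
NOISE_SD = 0.3
records = probes @ true_mind + rng.normal(scale=NOISE_SD, size=N_RECORDS)

# Weight each candidate by how well it explains the records:
# an importance-sampling posterior under a Gaussian noise model.
predicted = candidates @ probes.T  # shape (N_CANDIDATES, N_RECORDS)
log_lik = -0.5 * ((predicted - records) ** 2).sum(axis=1) / NOISE_SD**2
weights = np.exp(log_lik - log_lik.max())
weights /= weights.sum()

# The output is exactly the kind of object discussed above: not a unique
# pick, but a distribution over candidate minds that tightens as the
# amount of surviving data grows.
posterior_mean = weights @ candidates
ess = 1.0 / (weights**2).sum()  # effective number of candidates still "in play"
print("error of posterior mean:", np.linalg.norm(posterior_mean - true_mind))
print("effective number of candidate minds:", ess)
```

Whether a shortcut like this buys you anything over full resimulation then hinges entirely on that dimensionality question: the real mind-space has to be compressed enough for the surviving records to actually pin the distribution down.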
I’m pretty sure we just need one resimulation to save everyone; once we have located an exact copy of our history, it’s cheap to pluck out anyone (including people dead 100 or 1000 years ago). It’s a one-time cost.
Lossy resurrection is better than nothing, but it doesn’t feel as “real” to me. If you resurrect a dead me, I expect that she says “I’m glad I exist! But — at least as per my ontology and values — you shouldn’t quite think of me as the same person as the original. We’re probly quite different, internally, and thus behaviorally as well, when run over some time.”
Like, the full-history resimulation will surely still not allow you to narrow things down to one branch. You’d get an equivalence class of them, each of them consistent with all available information. Which, in turn, would correspond to a probability distribution over the rescuee’s mind; not a unique pick.
I feel like I’m not quite sure about this? It depends on what quantum mechanics entails, exactly, I think. For example: if BQP = P, then there’s “only a polynomial amount” of timeline-information (whatever that means!), and then my intuition tells me that the “our world serves as a checksum for the one true (macro-)timeline” idea is more likely to be a thing. But this reasoning is still quite heuristic. Plausibly, yeah, the best we get is a polynomially large or even exponentially large distribution.
That said, to get back to my original point, I feel like there are enough unknowns here making this scenario plausible that some people who really want to get reunited with their loved ones might totally pursue aligned superintelligence just for a potential shot at this, whether their idea of reuniting requires lossless resurrection or not.
I feel like there are enough unknowns here making this scenario plausible
No argument on that.
I don’t find it particularly surprising that {have lost a loved one they wanna resurrect} ∩ {take the singularity and the possibility of resurrection seriously} ∩ {would mention this} is empty, though:
“Resurrection is information-theoretically possible” is a longer leap than “believes an unconditional pro-humanity utopia is possible”, which is itself a bigger leap than just “takes the singularity seriously”. E. g., there’s a standard-ish counter-argument to “resurrection is possible” which naively assumes a combinatorial explosion of possible human minds consistent with a given behavior. Thinking past it requires some additional, less-common insights.
“Would mention this” is downgraded by it being an extremely weakness/vulnerability-revealing motivation. Much more so than just “I want an awesome future”.
“Would mention this” is downgraded by… You know how people who want immortality get bombarded with pop-culture platitudes about accepting death? Well, as per above, immortality is dramatically more plausible-sounding than resurrection, and it’s not as vulnerable-to-mention a motivation. Yet talking about it is still not a great idea in “respectable” company. Goes double for resurrection.