I’m not sure what criterion you’re intending with “feasible”, but I’d say FAI, since uploading/cryonics have a lot of failure modes, one of which is uFAI. Unless something weird happens (e.g. a currently-hidden AI keeps us from gobbling the stars), an FAI, once unleashed, should be able to revive every human who’s ever died, so even if you die before it’s developed you should still be okay. (If an FAI would want to do that, anyway.) Whereas most people would be skeptical that an AI could be powerful enough to resurrect every human ever, I’m actually more skeptical that we’re not currently at the mercy of an AI or an entire coalition of AIs. Fermi paradox and whatnot. I’d say that there’s a lot of structural uncertainty, though, and that it would be unwise to put much faith in any hypothesis that involves highly advanced technology/intelligences.
Unless something weird happens (e.g. a currently-hidden AI keeps us from gobbling the stars), an FAI, once unleashed, should be able to revive every human who’s ever died, so even if you die before it’s developed you should still be okay.
There seems to be rather a lot of information lost beyond the chance of recovery. The mapping of ‘current world as best as the FAI could plausibly deconstruct’ to ‘possible histories that would lead to this state’ is not 1:1.
The best I could expect of an FAI is the ability to construct a probability distribution over all the likely combinations of humans who could have lived and perhaps ‘resurrect’ rather a lot of people who never lived in the hope that it’d get most of the ones who did live in the process.
There seems to be rather a lot of information lost beyond the chance of recovery.
Not if there are AIs out there in the universe who can catch the information and run it back to your FAI at lightspeed. And since our FAI can catch information about the causal past of other AIs that they’d otherwise never have been able to get back, it’s even a clear-cut trade scenario. I see no reason not to expect this by default. (Steve’s idea; I think it’s pretty epic, especially if the part works where the AIs collectively catch all of each other’s quantum entanglements, which enables them to coordinate to reverse the past. That’d be freakin’ awesome. And either way, I think hearing this idea was the first time I thought, “wow, if a mere human can think of that, imagine what ideas a freaking superintelligence could come up with”.)
Not if there are AIs out there in the universe who can catch the information and run it back to your FAI at lightspeed. And since our FAI can catch information about the causal past of other AIs that they’d otherwise never have been able to get back, it’s even a clear-cut trade scenario. I see no reason not to expect this by default.
I’m not even confident there are other AIs out there in the universe. At least, not in our Everett-Branch-Future-And-Relevant-History-Light-Cone.
I’m not that confident, especially as I have a sneaking suspicion that something really weird is going on cosmologically speaking, but isn’t it the default assumption ’round these parts?
I’m not that confident, especially as I have a sneaking suspicion that something really weird is going on cosmologically speaking, but isn’t it the default assumption ’round these parts?
The default assumption is that there are (or will be) many other FAIs that light from our world-history will directly interact with? I didn’t know that. It’s certainly not mine. I thought it was more likely that we were for practical purposes alone. If you’ll pardon the shorthand reasoning:
Fermi-like considerations… epically unlikely that life emerges all the way to superintelligence takeoff.
Anthropics and self-indication and so forth… most EBs where one superintelligence emerges will not be branches in which more than one superintelligence emerges.
There are heaps of other superintelligences, with high probability all the other ones are in parts of the (broadly used) Universe that are causally inaccessible.
The physics is complicated, often speculative (by folks like Tegmark), and beyond me, but it all adds up to “we’re probably effectively alone in the universe as we see it”.
So you think there are other FAIs out there that our civilization would encounter if we got that far? How much does this depend on the probability that we are in a simulation or under the benevolent (or otherwise) control of a powerful agent, and how likely would you consider it to be conditional on us not being simulated/overseen?
how likely would you consider it to be conditional on us not being simulated/overseen?
So it’s possible that spacetime is infinitely dense and if you’re a superintelligence there’s no reason to expand. Dunno how likely that is, though black holes do creep me out. Abiogenesis really doesn’t seem all that impossible, and anyway I think anthropic explanations are fundamentally confused. If your AI never expands then it can’t get precise info about its past, but maybe there are non-physical computational ways to do that, so the costs might not be worth the benefits. It seems like I might’ve been wrong in that LessWrong folk might prefer anthropic solutions to the Fermi paradox, but I’m not sure how much evidence that is, especially as anthropics is confusing and possibly confused. So yeah… maybe 25% or so, but that’s only factoring in some structural uncertainty. Meh.
’Course, my primary hypothesis is that we are being overseen, and brains sometimes have trouble reasoning about hypothetical scenarios which aren’t already the default expectation. It’s at times like this when advanced rationality skills would be helpful.
Fermi-like considerations… epically unlikely that life emerges all the way to superintelligence takeoff.
I don’t follow. Do you think intelligences would loudly announce their existence over a long enough time period such that we would know about it? It always struck me as more likely that AGIs were quiet than that they didn’t exist. Remember, all those stars you see at night don’t necessarily exist; they could just as easily be an illusion. All it’d take is for one superintelligence to show up somewhere and decide that we weren’t worth killing but that we shouldn’t get to see what’s actually going on as it gobbles all the unoccupied planets. There are various reasons it would want to do this. [ETA: The alternative, that abiogenesis is really difficult, strikes me as unlikely, and I have a very strong skepticism of anthropic “explanations”.]
There are heaps of other superintelligences, with high probability all the other ones are in parts of the (broadly used) Universe that are causally inaccessible
Hm, this might be a difference of perspective; I’m not very confident in the simulation argument as it’s usually put forth. (I tried to explain some of my reasons elsewhere in this thread.)
So you think there are other FAIs out there that our civilization would encounter if we got that far? How much does this depend on the probability that we are in a simulation or under the benevolent (or otherwise) control of a powerful agent, and how likely would you consider it to be conditional on us not being simulated/overseen?
(They don’t have to be Friendly, they just have to be willing to trade.) I don’t have a strong opinion either way. If we’re being overseen then it seems true by definition that we’ll run into other AGIs if we build an FAI, so I was focusing on the scenario where we’re not being overseen/simulated/fucked-with. In such a scenario I don’t know what probability to put on it… I’ll think about it more.
If we’re being overseen then it seems true by definition that we’ll run into other AGIs if we build an FAI
This seems likely but is not true by definition. In fact if I were designing and overseer I can see reasons why I may prefer to design one that keeps itself hidden except where intervention is required. Such an overseer, upon detecting that the overseen have created an AI with an acceptable goal system, may actively destroy all evidence of its existence.
True, mea culpa. I swear, there’s something about the words “by definition” that makes you misuse them even if you’re already aware of how often they’re misused. I almost never say “by definition” and yet it still screwed me over.
The alternative, that abiogenesis is really difficult, strikes me as unlikely, and I have a very strong skepticism of anthropic “explanations”.
I keep running into people who think anthropic reasoning doesn’t explain anything, or who have it entirely backwards. One prominent physicist whose name eludes me commented in an editorial published in Physics Today that anthropic reasoning was worthless unless the life-compatible section of the probability distribution of universal laws was especially likely. This so utterly misses the point that he clearly didn’t understand the basic argument.
I’ve never encountered anyone who’s willing to admit to buying anything stronger than the weak anthropic principle, which seems utterly obviously true:
1) If the universe didn’t enable the formation of sapient life, it wouldn’t exist (edited to clarify: sapient life, not the universe). If the universe made the formation of such life fantastically unlikely in any one location, but the extent of the universe is larger than the reciprocal of that probability density, such life would still likely exist somewhere. (See the toy numbers after this list.)
2) Our existence thus doesn’t indicate much about the general hospitality of the rules of the universe to the formation of sapient life, because the universe is awfully large, possibly infinite.
3) If the rules of the universe that we observe are consequences of more fundamental laws, and those fundamental laws are quantum mechanical in nature so that multiple variants get a nonzero component, then the probability of life forming in this universe is the OR taken across all of those variants.
That’s really all there is to it...
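(For concreteness, here’s a toy version of the arithmetic behind point 1. Both numbers are invented purely for illustration: however tiny the per-site chance of sapient life, a large enough universe makes “life somewhere” all but certain.)

```python
import math

# Both numbers are invented purely for illustration.
p_per_site = 1e-30   # assumed chance of sapient life arising at any one site
n_sites = 1e32       # assumed number of candidate sites in a very large universe

# P(at least one site develops life) = 1 - (1 - p)^N, done in log space to avoid underflow.
log_p_none = n_sites * math.log1p(-p_per_site)
p_somewhere = 1.0 - math.exp(log_p_none)

print(f"P(no life anywhere) ~ {math.exp(log_p_none):.2e}")  # ~ e^-100
print(f"P(life somewhere)   ~ {p_somewhere}")               # effectively 1
```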
There are heaps of other superintelligences, with high probability all the other ones are in parts of the (broadly used) Universe that are causally inaccessible
Hm, this might be a difference of perspective; I’m not very confident in the simulation argument as it’s usually put forth. (I tried to explain some of my reasons elsewhere in this thread.)
Is there a typo in there? The simulation argument doesn’t seem to fit.
Oh, I was assuming… never mind, it’s probably not worth untangling.
Not as far as I can tell. What do you mean?
Sorry, my sentence was unclear; “it” was referencing the belief that at least one intelligence besides us has already shown up or will show up somewhere in the universe at some point. It seems to me that most people, including most people on LessWrong, think this is likely.
Is there any reason to think that such detailed information as would be needed to recreate people wouldn’t get lost in noise?
An expanding superintelligence sphere acts as a light-years-wide optical lens, providing extremely redundant observations of far-off objects. This can be combined with superintelligent error-correction and image reconstruction. If you have multiple such superintelligences then you get even more angles. But yeah, I haven’t done the actual calculations; it’d be super cool if someone else did them.
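(For a rough sense of scale, here’s a sketch using the standard diffraction-limit formula. The baseline, wavelength, and target distance are just assumed numbers, and this says nothing about photon counts or the inversion problem raised below.)

```python
# Rayleigh criterion: theta ~ 1.22 * wavelength / aperture.  All numbers below
# are assumptions for illustration, not claims about any actual system.
LIGHT_YEAR_M = 9.4607e15

wavelength = 500e-9              # visible light, metres
aperture = 1.0 * LIGHT_YEAR_M    # assumed effective baseline of the "lens"
distance = 100.0 * LIGHT_YEAR_M  # assumed distance to the target

theta = 1.22 * wavelength / aperture  # diffraction-limited angular resolution, radians
feature = theta * distance            # smallest resolvable feature at that distance, metres

print(f"angular resolution      ~ {theta:.2e} rad")
print(f"resolvable feature size ~ {feature:.2e} m")  # on the order of 0.1 mm
```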
On another note, about six months ago I spent a few days looking at the quantum information theory literature trying to figure out if AIs could coordinate to reverse the past; I think I have enough knowledge to pose it as a coherent question to someone with a lot of knowledge of reversible computing and QIT. I’d like to do that someday.
But the “error correction and image reconstruction” itself is not lossless. There is inevitable distortion caused by scattering off the unknown distribution of interstellar dust particles and from gravitational lensing between the AI and its target. Not to mention all the truly random crap happening in the interstellar void as the photons interact with the quantum foam. The inversion methods you suggest do not yield a true image, merely a consistent one.
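(A minimal sketch of the “consistent but not true” point, under toy assumptions that have nothing to do with interstellar optics: in an underdetermined linear inverse problem, the standard minimum-norm reconstruction matches the observations exactly while still differing badly from the model that actually produced them.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward problem: 5 measurements of a 20-parameter "scene" -- badly underdetermined.
n_obs, n_params = 5, 20
A = rng.normal(size=(n_obs, n_params))   # forward/observation operator
x_true = rng.normal(size=n_params)       # the configuration that actually existed
y = A @ x_true                           # everything the observer gets to see

# Minimum-norm least-squares reconstruction: reproduces the data exactly.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

print("data misfit :", np.linalg.norm(A @ x_hat - y))    # ~1e-15: fully consistent
print("model error :", np.linalg.norm(x_hat - x_true))   # large: not the true scene
```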
It could still be enough to resurrect a person, if the difference from truth is on the order of the difference between me right now and me after sleeping for a few years. (Hint: the two me’s are very different, but they’re still recognizable as me.)
I think I have enough knowledge to pose it as a coherent question to someone with a lot of knowledge of reversible computing and QIT. I’d like to do that someday.
I’m not a total expert, but try me.
I’ll have to spend a few hours reloading the concepts into my brain. When I do that I’ll post it to Discussion.
Does anyone have an estimate of how many actually different humans there can be (i.e., the size of brain-space measured in units such that someone about one unit away from me would seem like the same person to someone who knows me)?
It might be possible to simply create all humans that could have existed; those who actually did would be a subset, we just couldn’t tell which ones.
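(A back-of-the-envelope illustration of why that set is so huge, with the key number pulled out of thin air: if “recognizably the same person” resolution could be encoded in, say, a billion bits, the space of possible people dwarfs the roughly 10^11 humans who have actually lived.)

```python
import math

# Wild assumption, purely to get a feel for the magnitudes: suppose identity at
# "recognizably the same person" resolution could be coarse-grained to this many bits.
identity_bits = 1e9

humans_ever_lived = 1e11   # rough standard estimate

log10_possible_people = identity_bits * math.log10(2)  # log10 of 2**identity_bits
log10_fraction_real = math.log10(humans_ever_lived) - log10_possible_people

print(f"distinct possible people    ~ 10^{log10_possible_people:.3e}")
print(f"fraction who actually lived ~ 10^{log10_fraction_real:.3e}")
```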
What is the recommended literature related to the ideas both you and wedrifid have been discussing in this thread? I googled but I figure it wouldn’t hurt to ask either. Thanks.
Which aspects? I think the only field relevant to what we were discussing is information theory. The stuff about superintelligences coordinating doesn’t have any existing literature, but similar ideas are discussed on LessWrong in the context of decision theory.
Your comment seems far-fetched. For one, an AI with such awesome powers could also choose to run a copy of you starting from any moment when you feel unhappy, not just the moment of your death. Since the universe around me stubbornly keeps on looking normal, something will probably stop “rescue sims” from happening.
I’m trying to avoid assuming a metaphysic where simulations are assumed to be possible, because I’m not sure such metaphysics ultimately make sense. (Maybe you can guess my rationale: I think “measure” and “existence” and so on are very fuzzy, and I think if we reason in terms of decision theoretic significant-ness then it might turn out that running a simulation of something doesn’t double its “measure”, and what matters is what already “actually existed/exists”, i.e. what’s already “actually significant”.) If you don’t assume a simulationist metaphysic then “rescue sims” are dubious, whereas reviving people who are known to have already existed seems more like a straightforward application of technology. If you take a sort of common-sense layman’s perspective, reviving the dead sounds a lot less speculative than running an exact simulation of a mind on a computer in a way that will actually change the past. …No?
The layman’s perspective sounds reasonable enough, but seems to fall apart on closer inspection. What makes a human brain different from a simulation? Why would the AI have an easier time reconstructing the mind of someone who died on March 20 than reconstructing a copy of you on March 21? Why are future simulations of you necessarily less “significant” than current you? This looks suspiciously like a theory constructed specifically to be testable only by death, i.e. not testable to the rest of us.
(The following probably won’t be understandable / won’t appear motivated. Sorry.)
Why would the AI have an easier time reconstructing the mind of someone who died on March 20 than reconstructing a copy of you on March 21?
You can make a copy, but as soon as you simulate it diverging from the original then you’re imagining someone that never existed in a timeline that didn’t actually happen. Otherwise you’re just fooling yourself about what actually happened, you’re not causing something else to happen. Whereas if you revive a mind that died and have it have new experiences then you’re not deluding yourself about what actually happened, you’re just continuing the story.
Why are future simulations of you necessarily less “significant” than current you?
Because the simulator would just be deluding themselves about what actually happened, like minting counterfeit currency; the important aspect of me is that I’m here embedded in this particular decision policy computation with these and such constraints. Take me out of my contexts and you don’t have me anymore. If you make a thousand crackfics involving Romeo running away to a Chinese brothel then nobody’s going to listen to your stories unless they have tremendous artistic merit. And if a thousand Romeo & Juliet crackfics are shouted out in the middle of a forest but nobody hears them, do they have any decision theoretic significance?
But I haven’t actually worked out the math, so it’s possible things don’t work like I think they do.
This looks suspiciously like a theory constructed specifically to be testable only by death, i.e. not testable to the rest of us.
Well, it’s a theory about anthropics… quantum immortality is also a theory that is only testable by death, but I don’t think that’s suspicious as such. (In fact I don’t actually think quantum immortality is only testable by death, which you might be able to ascertain from what I wrote above, but um, I strongly suspect that I’m not understandable. Anyway, death is the simplest example.)
You might be on to something, but I can’t understand it properly until I figure out what “decision-theoretic significance” really means, and why it seems to play so nicely with both classical and quantum coinflips. Until then, “measure” seems to be a more promising explanation, though it has lots of difficulties too.
I don’t think this argument makes causal sense. If you’d been uplifted into a rescue sim, you certainly wouldn’t have made this post. The universe looks, from all of our perspectives, exactly like it would if rescue sims were possible, since that doesn’t currently make any testable predictions. You (sub-future) might see some differing effects, but that version of you isn’t around right now, and can’t provide evidence on the subject.
You’re right that my words don’t provide new evidence to you, but if you anticipate becoming a rescue sim at some point and that doesn’t happen, that’s evidence against rescue sims for you.
Even internally, no, that still doesn’t work. The evidence that your current continuity has observed is not influenced by whether or not rescue sims exist. That’s the same thing as saying that you have seen no evidence one way or the other. Even if multiple other versions of you are instantiated in the future, what the continuity of yourself that is typing this observes doesn’t change.
I don’t think that’s the right way to do Bayesian updating in the presence of observer-splitting.
Imagine I sell you a device that I claim to be a fair quantum coin. The first run of the device gives you 1000 heads in a row. You try again, and get another 1000 heads. You come back to my store to demand a refund, and I reply that my fair coin gives rise to many branches including this one, so you have nothing to complain about. Do you buy my explanation, or insist that the coin is defective?
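(The ordinary Bayesian arithmetic behind demanding the refund, with an arbitrary prior chosen just for illustration: even a strong prior that the coin is fair is overwhelmed by 2000 straight heads, branches or no branches.)

```python
import math

n_heads = 2000   # two runs of 1000 heads each

# Log-likelihood of the observed run under each hypothesis.
loglik_fair = n_heads * math.log(0.5)    # genuinely fair quantum coin
loglik_rigged = 0.0                      # "always heads" device: log(1) per flip

# Arbitrary prior, heavily favouring the shopkeeper: 99.9% that the coin is fair.
log_prior_fair = math.log(0.999)
log_prior_rigged = math.log(0.001)

# Posterior odds (rigged : fair), reported in powers of ten.
log10_odds = ((log_prior_rigged + loglik_rigged)
              - (log_prior_fair + loglik_fair)) / math.log(10)
print(f"posterior odds rigged:fair ~ 10^{log10_odds:.0f}")  # astronomically pro-refund
```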
I started to write a rebuttal, but it’s quickly becoming clear to me that I don’t have a systematic way of reasoning about this topic. I don’t necessarily agree with you, but I need to give the matter a lot more thought. Thank you for giving me something to think about.
My concern is basically that I’m profoundly uncomfortable with the idea of evidence flowing backwards in time. I mean, you’re updating your beliefs about the future based on what you haven’t seen happen in the future.
Wait a second, your objection doesn’t really strongly counter my point, right? ’Cuz the author of the post wanted to maximize immortality, so saying that the FAI would have better things to do with its time would imply that the FAI wasn’t applying the reversal test when it comes to keeping current humans alive. It seems that the FAI should either kill those living and replace them with something better, or revive the dead, otherwise it’s being inconsistent. (I mean not necessarily, but still.) Also, if it doesn’t resurrect those in graves or urns then it’s not gonna resurrect cryonauts either, so cryonics is out. And your “rescue sim” argument doesn’t seem strong; rescue sims might not be considered as good as running simulations of people who had died; high opportunity cost. So not being in a rescue sim could just mean that the FAI had better things to do, e.g. running simulations of previously-dead people in heaven or whatever. Am I missing something?
Also, if it doesn’t resurrect those in graves or urns then it’s not gonna resurrect cryonauts either, so cryonics is out.
Why? If FAI is weak enough, it might be unable to resurrect non-cryonauts. Also maybe there will be no AIs and an asteroid will kill us all in 200 years, but we’ll figure out how to thaw cryonauts in 100, so they get some bonus years.
I don’t think it’s a matter of an intelligence being strong or weak. I’m relatively confident that the inverse problem of computing the structure of a human brain given a rough history of the activities of the human as input is so woefully underconstrained and nonunique as to be impossible. If you’re familiar with inversion in general, you can look at countless examples where robust Bayesian models fail to yield anything but the grossest approximations even with rich multivariate data to match.
Unless you’re conjecturing FAI powers so advanced that the modern understanding of information theory doesn’t apply, or unless I’m missing the point entirely.
I think those possibilities are unlikely. /shrugs