Your comment seems far-fetched. For one, an AI with such awesome powers could also choose to run a copy of you starting from any moment when you feel unhappy, not just the moment of your death. Since the universe around me stubbornly keeps on looking normal, something will probably stop “rescue sims” from happening.
I’m trying to avoid assuming a metaphysic in which simulations are taken for granted as possible, because I’m not sure such metaphysics ultimately make sense. (Maybe you can guess my rationale: I think “measure” and “existence” and so on are very fuzzy, and I think if we reason in terms of decision-theoretic significance then it might turn out that running a simulation of something doesn’t double its “measure”, and what matters is what already “actually existed/exists”, i.e. what’s already “actually significant”.) If you don’t assume a simulationist metaphysic then “rescue sims” are dubious, whereas reviving people who are known to have already existed seems more like a straightforward application of technology. If you take a sort of common-sense layman’s perspective, reviving the dead sounds a lot less speculative than running an exact simulation of a mind on a computer in a way that will actually change the past. …No?
The layman’s perspective sounds reasonable enough, but seems to fall apart on closer inspection. What makes a human brain different from a simulation? Why would the AI have an easier time reconstructing the mind of someone who died on March 20 than reconstructing a copy of you on March 21? Why are future simulations of you necessarily less “significant” than current you? This looks suspiciously like a theory constructed specifically to be testable only by death, i.e. not testable to the rest of us.
(The following probably won’t be understandable / won’t appear motivated. Sorry.)
Why would the AI have an easier time reconstructing the mind of someone who died on March 20 than reconstructing a copy of you on March 21?
You can make a copy, but as soon as you simulate it diverging from the original, you’re imagining someone who never existed in a timeline that didn’t actually happen. You’re just fooling yourself about what actually happened; you’re not causing something else to happen. Whereas if you revive a mind that died and give it new experiences, you’re not deluding yourself about what actually happened, you’re just continuing the story.
Why are future simulations of you necessarily less “significant” than current you?
Because the simulator would just be deluding themselves about what actually happened, like minting counterfeit currency; the important aspect of me is that I’m here, embedded in this particular decision-policy computation with such-and-such constraints. Take me out of my context and you don’t have me anymore. If you make a thousand crackfics involving Romeo running away to a Chinese brothel, nobody’s going to listen to your stories unless they have tremendous artistic merit. And if a thousand Romeo & Juliet crackfics are shouted out in the middle of a forest but nobody hears them, do they have any decision-theoretic significance?
But I haven’t actually worked out the math, so it’s possible things don’t work like I think they do.
This looks suspiciously like a theory constructed specifically to be testable only by death, i.e. not testable to the rest of us.
Well, it’s a theory about anthropics… quantum immortality is also a theory that is only testable by death, but I don’t think that’s suspicious as such. (In fact I don’t actually think quantum immortality is only testable by death, which you might be able to ascertain from what I wrote above, but um, I strongly suspect that I’m not understandable. Anyway, death is the simplest example.)
You might be on to something, but I can’t understand it properly until I figure out what “decision-theoretic significance” really means, and why it seems to play so nicely with both classical and quantum coinflips. Until then, “measure” seems to be a more promising explanation, though it has lots of difficulties too.
I don’t think this argument makes causal sense. If you’d been uplifted into a rescue sim, you certainly wouldn’t have made this post. The universe looks, from all of our perspectives, exactly like it would if rescue sims were possible, since that hypothesis doesn’t currently make any testable predictions. Future versions of you might see some differing effects, but those versions of you aren’t around right now, and can’t provide evidence on the subject.
You’re right that my words don’t provide new evidence to you, but if you anticipate becoming a rescue sim at some point and that doesn’t happen, that’s evidence against rescue sims for you.
Even internally, no, that still doesn’t work. The evidence that your current continuity has observed is not influenced by whether or not rescue sims exist. That’s the same thing as saying that you have seen no evidence one way or the other. Even if multiple other versions of you are instantiated in the future, what the continuity of yourself that is typing this observes doesn’t change.
I don’t think that’s the right way to do Bayesian updating in the presence of observer-splitting.
Imagine I sell you a device that I claim to be a fair quantum coin. The first run of the device gives you 1000 heads in a row. You try again, and get another 1000 heads. You come back to my store to demand a refund, and I reply that my fair coin gives rise to many branches including this one, so you have nothing to complain about. Do you buy my explanation, or insist that the coin is defective?
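(To make the intuition in the coin example concrete: here is a sketch of the Bayes-factor arithmetic, with an assumed "defective, heads-biased" alternative hypothesis of my own choosing; the specific bias of 0.99 is illustrative, not from the thread.)

```python
from math import log2

def log2_bayes_factor(n_heads, p_heads_defective=0.99):
    """log2 of P(data | defective) / P(data | fair),
    given n_heads straight heads and no tails."""
    log_p_fair = n_heads * log2(0.5)               # fair coin: each head has prob 1/2
    log_p_defective = n_heads * log2(p_heads_defective)
    return log_p_defective - log_p_fair

# After 2000 heads in a row, the odds swing by roughly 1971 bits toward
# "defective". Branching doesn't rescue the fair-coin hypothesis: the
# observations in *this* branch are still evidence, and you update on them.
print(log2_bayes_factor(2000))
```

The point of the arithmetic is that "other branches saw tails" doesn't change the likelihood ratio computed from what you actually observed.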
I started to write a rebuttal, but it’s quickly becoming clear to me that I don’t have a systematic way of reasoning about this topic. I don’t necessarily agree with you, but I need to give the matter a lot more thought. Thank you for giving me something to think about.
My concern is basically that I’m profoundly uncomfortable with the idea of evidence flowing backwards in time. I mean, you’re updating your beliefs about the future based on what you haven’t seen happen in the future.
Wait a second, your objection doesn’t really strongly counter my point, right? ’Cuz the author of the post wanted to maximize immortality, so saying that the FAI would have better things to do with its time would imply that the FAI wasn’t applying the reversal test when it comes to keeping current humans alive. It seems that the FAI should either kill the living and replace them with something better, or revive the dead; otherwise it’s being inconsistent. (I mean, not necessarily, but still.) Also, if it doesn’t resurrect those in graves or urns, then it’s not gonna resurrect cryonauts either, so cryonics is out. And your “rescue sim” argument doesn’t seem strong; rescue sims might be judged less valuable than resurrecting people who had already died, i.e. they’d carry a high opportunity cost. So not being in a rescue sim could just mean that the FAI had better things to do, e.g. running simulations of previously-dead people in heaven or whatever. Am I missing something?
Also, if it doesn’t resurrect those in graves or urns then it’s not gonna resurrect cryonauts either, so cryonics is out.
Why? If FAI is weak enough, it might be unable to resurrect non-cryonauts. Also maybe there will be no AIs and an asteroid will kill us all in 200 years, but we’ll figure out how to thaw cryonauts in 100, so they get some bonus years.
I don’t think it’s a matter of an intelligence being strong or weak. I’m relatively confident that the inverse problem of computing the structure of a human brain given a rough history of the activities of the human as input is so woefully underconstrained and nonunique as to be impossible. If you’re familiar with inversion in general, you can look at countless examples where robust Bayesian models fail to yield anything but the grossest approximations even with rich multivariate data to match.
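(A toy illustration of what "woefully underconstrained" means here; the forward model and dimensions are made up by me for the example, not anything from the thread. The idea: when you observe far fewer quantities than there are unknowns, infinitely many internal states reproduce the data exactly, so the data alone cannot pick out the true one.)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 5))    # forward model: 2 observations of a 5-dim "state"
x_true = rng.normal(size=5)    # the real state we would like to recover
y = A @ x_true                 # the observed behaviour

# Minimum-norm solution consistent with the data...
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# ...plus any component from A's null space fits the data equally well.
null_basis = np.linalg.svd(A)[2][2:].T          # basis of A's 3-dim null space
x_alt = x_ls + null_basis @ np.array([1.0, -2.0, 0.5])

print(np.allclose(A @ x_ls, y))    # fits the observations
print(np.allclose(A @ x_alt, y))   # a very different state fits just as well
print(np.allclose(x_ls, x_true))   # and neither is guaranteed to be the truth
```

Scaling the unknowns up to the degrees of freedom of a brain, with only a coarse behavioural history as data, is the intuition behind the nonuniqueness claim.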
Unless you’re conjecturing FAI powers so advanced that the modern understanding of information theory doesn’t apply, or unless I’m missing the point entirely.
I think those possibilities are unlikely. /shrugs