The layman’s perspective sounds reasonable enough, but seems to fall apart on closer inspection. What makes a human brain different from a simulation? Why would the AI have an easier time reconstructing the mind of someone who died on March 20 than reconstructing a copy of you on March 21? Why are future simulations of you necessarily less “significant” than current you? This looks suspiciously like a theory constructed specifically to be testable only by death, i.e. not testable by the rest of us.
(The following probably won’t be understandable, and probably won’t seem well-motivated. Sorry.)
Why would the AI have an easier time reconstructing the mind of someone who died on March 20 than reconstructing a copy of you on March 21?
You can make a copy, but as soon as you simulate it diverging from the original, you’re imagining someone who never existed in a timeline that never actually happened. In other words, you’re just fooling yourself about what actually happened; you’re not causing something else to happen. Whereas if you revive a mind that died and give it new experiences, you’re not deluding yourself about what actually happened; you’re just continuing the story.
Why are future simulations of you necessarily less “significant” than current you?
Because the simulator would just be deluding themselves about what actually happened, like minting counterfeit currency. The important aspect of me is that I’m here, embedded in this particular decision-policy computation with such-and-such constraints; take me out of that context and you don’t have me anymore. If you write a thousand crackfics in which Romeo runs away to a Chinese brothel, nobody is going to listen to your stories unless they have tremendous artistic merit. And if a thousand Romeo & Juliet crackfics are shouted out in the middle of a forest but nobody hears them, do they have any decision-theoretic significance?
But I haven’t actually worked out the math, so it’s possible things don’t work like I think they do.
This looks suspiciously like a theory constructed specifically to be testable only by death, i.e. not testable by the rest of us.
Well, it’s a theory about anthropics… quantum immortality is also a theory that is only testable by death, but I don’t think that’s suspicious as such. (In fact, I don’t actually think quantum immortality is only testable by death, which you might be able to ascertain from what I wrote above, but, um, I strongly suspect I’m not making myself understood. Anyway, death is the simplest example.)
You might be on to something, but I can’t understand it properly until I figure out what “decision-theoretic significance” really means, and why it seems to play so nicely with both classical and quantum coinflips. Until then, “measure” seems to be a more promising explanation, though it has lots of difficulties too.
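(To make the “measure” option concrete: here is the usual toy calculation, assuming Born weights as the measure over branches; the symbols $w_i$, $A$, and $O$ below are mine for illustration, not anything established in this exchange. Let branch $i$ carry weight $w_i$ with $\sum_i w_i = 1$, let $A$ be the set of branches in which you survive, and let $O$ be the set in which some observation obtains. Then

$$\mu(A \cap O) = \sum_{i \in A \cap O} w_i, \qquad P(O \mid A) = \frac{\sum_{i \in A \cap O} w_i}{\sum_{i \in A} w_i}.$$

For a quantum coinflip that kills you on tails, $w_{\text{heads}} = w_{\text{tails}} = 1/2$ and $A = \{\text{heads}\}$, so $P(\text{heads} \mid A) = 1$ even though the surviving measure $\sum_{i \in A} w_i$ is only $1/2$. A classical coinflip has a single actual outcome, so there is no tails-world survivor to condition on at all. Any account of “decision-theoretic significance” would have to reproduce that same asymmetry, which is part of why I can’t yet tell the two apart.)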