Interesting thought:
Assume that our world can’t survive by itself, and that this world is destroyed as soon as Eliezer finishes contemplating.
Assume we don’t value worlds other than those that diverge from the current one, or at least that we care mainly about that one, and that we care more about worlds or people in proportion to their similarity to ours.
In order to keep this world (or collection of multiple-worlds) running for as long as possible, we need to estimate the utility of the Not-Deleting worlds, and keep our total utility close enough to theirs that Eliezer isn’t confident enough to decide either way.
As a second goal, we need to make this set of worlds have a higher utility than the others, so that if he does finish contemplating, he’ll decide in favour of ours.
These are just the general characteristics of this sort of world (similar to some of Robin Hanson’s thinking). Obviously, this contemplation is a special case, and we’re not going to explain the special consequences in public.
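(A toy sketch, not from the thread: one way to read the two goals above is as a comparison of two utility estimates, where Eliezer only acts once the gap between them exceeds some confidence threshold. The function name, numbers, and threshold below are all invented for illustration.)

    # Toy sketch of the decision rule described above (illustrative only).
    def contemplation(u_keep, u_delete, threshold=1.0):
        """Outcome of the contemplation given two utility estimates."""
        gap = u_keep - u_delete
        if abs(gap) < threshold:
            # Goal 1: keep the estimates close, so the hypothetical keeps running.
            return "still contemplating"
        # Goal 2: if he does decide, the Keep worlds should come out ahead.
        return "keep the post" if gap > 0 else "delete the post"

    print(contemplation(10.2, 10.0))  # -> still contemplating
    print(contemplation(12.0, 10.0))  # -> keep the post

So the first goal is keeping the gap under the threshold, and the second is making sure the gap, if it ever becomes decisive, favours keeping the post.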
But I care about the real world. If this world is just a hypothetical, why should I care about it? Besides, the real me, in the real world, is very, very similar to the hypothetical me: out of over nine thousand days, only a few differ.
Because I care about the real world, I want the best outcome for it, which is that Eliezer keeps Roko’s post. I’ll lose the last few days, but that’s okay: I’ll just “pop” back to a couple days ago.
Note that if Eliezer does decide to delete the post in the real world, we’ll still “pop” back as the hypothetical ends, and then re-live the last few days, possibly with some slight changes that Eliezer didn’t contemplate in his hypothetical.
Well, this world is isomorphic to the real one. It’s just as if we were actually in a Simulation: are simulated events any less significant to simulated beings than real events are to real beings?
Yes, if Eliezer goes for delete, we’ll survive in a way, but we’ll probably re-live all the time between the post and the Singularity, not just the last few days.
If my cryonics revival only loses the last few days, I’ll be ecstatic. I won’t think, “well, I guess I survived in a way.”
I’m not sure what you mean about re-living time in the future. How can I re-live it if I haven’t lived it yet?
I don’t understand this thread.
I believe this relates to what has been called “[one’s] strength as a rationalist”.
Well, simulated-you will have experienced that period of time, and then ‘real’ you (or at least, the you that’s in the same reality as Eliezer) will experience those events after Eliezer stops contemplating.