So it is mostly used as a universal objection to any strange things.
Well, for the avoidance of doubt, I do not endorse any such use and I hope I haven’t fallen into such sloppiness myself.
Your interpretation of Egan’s law is that everything useful should already be used by evolution.
No, I didn’t intend to say or imply that at all. I do, however, say that if evolution has found some particular mode of thinking or feeling or acting useful (for evolution’s goals, which of course need not be ours) then that isn’t generally invalidated by new discoveries about why the world is the way that’s made those things evolutionarily fruitful.
(Of course it could be, given the “right” discoveries. Suppose it turns out that something about humans having sex accelerates some currently unknown process that will in a few hundred years make the earth explode. Then the urge to have sex that evolution has implanted in most people would be evolutionarily suboptimal in the long run and we might do better to use artificial insemination until we figure out how to stop the earth-exploding process.)
In the case of QI, it has some similarities to the anthropic principle, by the way
You could have deduced that I’d noticed that, from the fact that I wrote
what I’m claiming is that those things aren’t invalidated by saying words like “anthropic” or “quantum”.
but no matter.
You also suggest using Egan’s law normatively: don’t do strange risky things.
I didn’t intend to say or imply that, either, and this one I don’t see how you got out of what I wrote. I apologize if I was very unclear. But I might endorse as a version of Egan’s law something like “If something is a terrible risk, discovering new scientific underpinnings for things doesn’t stop it being a terrible risk unless the new discoveries actually change either the probabilities or the consequences”. Whether that applies in the present case is, I take it, one of the points under dispute.
so my best strategy should not be normal
I take it you mean might not be; it could turn out that even in this rather unusual situation “normal” is the best you can do.
even if QI doesn’t work
I have never been able to understand what different predictions about the world anyone expects if “QI works” versus if “QI doesn’t work”, beyond the predictions already made by physics. (QI seems to me to mean: standard physics, plus a decision to condition probabilities on future rather than present epistemic state. The first bit is unproblematic; the second bit—which is what you need to say e.g. “I will survive”—seems to me like a decision rather than a proposition, and I don’t know what it would mean to say that it does or doesn’t work.)
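To make that contrast concrete, here is a minimal sketch with a made-up survival probability; the first number is the ordinary forecast, and the second is what you get once you decide to condition on there being a future experience at all:

```python
# Minimal sketch of the two conditioning choices, with a made-up number.
p = 0.05  # ordinary (present-epistemic-state) probability of surviving some event

# Conditioning on the present epistemic state: the usual forecast.
p_survival = p                                  # 0.05

# Conditioning on a future epistemic state in which I am still having
# experiences: the only such futures are survival futures, so
# P(survive | some future experience of mine) = P(survive) / P(survive) = 1.
p_survival_given_future_experience = p / p      # 1.0

print(p_survival, p_survival_given_future_experience)
```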
cryonics
I’m not really seeing any connection to speak of between cryonics and QI. (Except for this. Suppose you reckon that cryonics has a 5% chance of working for other people, but QI considerations lead you to say that for you it will almost certainly work. No, sorry, I see you give QI a 10% chance of working, so I mean that for you it will work with probability more like 10%. Does that mean you’d be prepared to pay about twice as much for cryonics as you would be without bringing QI into it, given the presumably regrettable costs to whatever influence you might have hoped to have post mortem using the money: children, charities, etc.?)
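For concreteness, here is the back-of-the-envelope arithmetic behind “about twice as much”, on the purely illustrative assumption that willingness to pay scales linearly with the probability of cryonics working:

```python
# Purely illustrative: assume willingness to pay scales linearly with the
# probability that cryonics works; the dollar figure is made up.
value_of_revival = 1_000_000     # hypothetical value placed on being revived
p_without_qi = 0.05              # chance of cryonics working, QI ignored
p_with_qi = 0.10                 # chance as adjusted by the QI reasoning above

print(value_of_revival * p_without_qi)   # 50000.0
print(value_of_revival * p_with_qi)      # 100000.0 -- about twice as much
```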
I have never been able to understand what different predictions about the world anyone expects if “QI works” versus if “QI doesn’t work”, beyond the predictions already made by physics.
Turchin may have something else in mind, but personally (since I’ve also used this expression several times on LW) I mean something like this: usually people think that when they die, their experience will be irreversibly lost (unless extra measures like cryonics are taken, or they are religious), meaning that the experiences they have just prior to death will be their final ones (and death will inevitably come). If “QI works”, this will not be true: there will never be final experiences, but instead there will be an eternal (or perhaps almost eternal) chain of experiences and thus no final death, from a first-person point of view.
Of course, it could be that if you’ve accepted MWI and the basic idea of multiple future selves implied by it then this is not very radical, but it sounds like a pretty radical departure from our usual way of thinking to me.
I think your last paragraph is the key point here. Forget about QI; MWI says some small fraction of your future measure will be alive very far into the future (for ever? depends on difficult cosmological questions); even objective-collapse theories say that this holds with nonzero but very small probability (which I suggest you should feel exactly the same way about); every theory, quantum or otherwise, says that at no point will you experience being dead-and-unable-to-experience things; all QI seems to me to add to this is a certain attitude.
Another interpretation is that it is a name for an implication of MWI that even many people who fully accept MWI seem to somehow miss (or deny, for some reason; just have a look at discussions in relevant Reddit subs, for example).
Objective-collapse theories in a spatially or temporally infinite universe, or with eternal inflation etc., actually say that it holds with nonzero but very small probability, but they essentially give it an infinite number of chances to happen, meaning that this scenario is for all practical purposes identical to MWI. But I take what you are saying to mean something like “if the world were the way the normal intuitions of most people say it is”, in which case I still think there’s a world of difference between very small probability and very small measure.
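The “infinite number of chances” point is just the usual arithmetic that many independent tries at a tiny probability add up; a rough sketch with made-up numbers:

```python
# With probability p per independent chance and n chances, the probability of
# at least one success is 1 - (1 - p)**n, which tends to 1 as n grows.
p = 1e-12                        # made-up, astronomically small per-chance probability
for n in (1, 10**12, 10**14):
    print(n, 1 - (1 - p) ** n)
# n = 1:     ~1e-12
# n = 1e12:  ~0.63
# n = 1e14:  ~1.0 -- given enough chances, "very small probability" stops mattering
```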
I’m not entirely convinced by the usual EY/LW argument that utilitarianism can be salvaged in an MWI setting by caring about measure, but I can understand it and find it reasonable. But when this is translated to a first-person view, I find it difficult. The reason I believe that the Sun will rise tomorrow morning is not because my past observations indicate that it will happen in a majority of “branches” (“branches” or “worlds” of course not being a real thing, but a convenient shorthand), but because it seems like the most likely thing for me to experience, given past experiences. But if I’m in a submarine with turchin and x-risk is about to be realized, I don’t get how I could “expect” that I will most likely blow up or be turned into a pile of paperclips like everyone else, while I will certainly (and only) experience it not happening. If QI is an attitude, and a bad one too, I don’t understand how to adopt any other attitude.
Actually, I think there are at least a couple of variations of this attitude: the first one that people take upon first hearing of the idea and giving it some credibility is basically “so I’m immortal, yay; now I could play quantum Russian roulette and make myself rich”; the second one, after thinking about it a bit more, is much more pessimistic; there are probably others, but I suppose you could say that underneath there is this core idea that somehow it makes sense to say “I’m alive” if even a very small fraction of my original measure still exists.
QI predicts not different variants of the world, but different variants of my future experiences. It says that I will not experience “non-existence”, but will instead experience the most probable way in which I survive. If I have a 1 in 1000 chance of surviving some situation, QI shifts the probability that I will experience survival up to 1.
But it could fail in unpredictable ways: if we are in a simulation and my plane crashes, my next experience will probably be a screen with the title “game over”, not the experience of being alive on the ground.
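A rough simulation of the 1 in 1000 case, with purely illustrative numbers, shows the shift:

```python
import random

# Rough simulation of the 1 in 1000 case; numbers are illustrative only.
random.seed(0)
trials = 1_000_000
survivals = sum(random.random() < 0.001 for _ in range(trials))

# Third-person / unconditional probability of survival: about 0.001.
print(survivals / trials)

# Probability of survival among the runs that contain any post-event
# experience at all (i.e. the survivals themselves): trivially 1.
print(survivals / survivals if survivals else 0.0)
```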
I agree with what you said in brackets about cryonics. I also think that investing in cryonics will help to promote it and all the other good things, so it doesn’t conflict with the regrettable costs you mention. I think that one rational course of action is to make a will in which one gives all one’s money to a cryonics company. (It also depends on the existence and well-being of children, and on other useful charities that could prevent x-risks, so it may need more complex consideration.)