Interesting. The general consensus in that thread seems to have been that the user in question was missing the point somehow, and −3 isn’t really such a terribly low score for something generally thought to have been missing the point. (I guess it was actually +6 −9.)
I don’t think the poor reception of “Adding up to normality” is why the user in question left LW. E.g., this post was made by the same user about 6 months later, so clearly s/he wasn’t immediately driven off by the downvotes on “Adding up to normality”.
Anyway. I think I agree with the general consensus in that thread (though I didn’t downvote the post and still wouldn’t) that the author missed the point a bit. I think Egan’s law is a variant on a witticism attributed to Wittgenstein. Supposedly, he and a colleague had a conversation like this. W: Why did anyone think the sun went round the earth? C: Because it looks as if it does. W: What would it have looked like, if it had looked as if the earth went round the sun? The answer, of course, being that it would have looked just the way it actually does, because the earth does go round the sun and things look the way they do.
Similarly (and I think this is Egan’s point), if you have (or the whole species has) developed some attitude to life, or some expectation about what will happen in ordinary circumstances, based on how the world looks, and if some new scientific theory comes along that predicts that the world will look that way, then either you shouldn’t change that attitude or it was actually inappropriate all along.
Now, you can always take the second branch and say things like this: “This theory shows that we should all shoot ourselves, so plainly if we’d been clever enough we’d already have deduced from everyday observation that we should all shoot ourselves. But we weren’t, and it took the discovery of this theory to show us that. But now, we should all shoot ourselves.” So far as I can tell, appealing to Egan’s law doesn’t do anything to refute that. It just says that if something is known to work well in the real world, then ipso facto our best scientific theories tell us it should work well in the world they describe, even if the way they describe that world feels weird to us.
I agree with the author when s/he writes that correct versions of Egan’s law don’t at all rule out the possibility that some proposition we feel attached to might in fact be ruled out by our best scientific theories, provided that proposition goes beyond merely-observational statements along the lines of “it looks as if X”.
So, what about the example we’re actually discussing? Your proposal, AIUI, is as follows: rig things up so that in the event of the human race getting wiped out you almost certainly get instantly annihilated before you have a chance to learn what’s happening; then you will almost certainly never experience the wiping-out of the human race. You describe this by saying that you “probably survive any x-risk”.
This seems all wrong to me, and I can see the appeal of expressing its wrongness in terms of “Egan’s law”, but I don’t think that’s necessary. I would just say: Are you quite sure that what this buys you is really what you care about? If so, then e.g. it seems you should be indifferent to the installation of a device at your house that at 4am every day, with probability 1⁄2, blows up the house in a massive explosion with you in it. After all, you will almost certainly never experience being killed by the device (the explosion is big and quick enough for that, and in any case it usually happens when you’re asleep). Personally, I would very much not want such a device in my house, because I value not dying as well as not experiencing death, and also because there are other people who would be (consciously) harmed if this happened. And I think it much better terminology to describe the situation as “the device will almost certainly kill me” than as “the device will almost certainly not kill me”, because when computing probabilities now I want to condition on my knowledge, existence, etc., now, not after the relevant events happen.
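A rough numerical sketch of that thought experiment may make the distinction vivid (the once-a-night, probability-1⁄2 figure is from the example above; the rest is just bookkeeping, done in Python purely for illustration):

```python
# Illustrative sketch of the 4am-device thought experiment above: the device
# fires with probability 1/2 each night, and death is assumed to be
# instantaneous, so the occupant never experiences it.

p_fire = 0.5  # nightly probability the device goes off (from the example)

for nights in (7, 30, 365):
    p_still_alive = (1 - p_fire) ** nights
    print(f"after {nights:3d} nights: P(killed by now) = {1 - p_still_alive:.10f}, "
          f"P(still alive) = {p_still_alive:.2e}")

# Conditioning on the present epistemic state, P(the device kills me)
# approaches 1 as the nights go by -- hence "the device will almost certainly
# kill me". Conditioning on the future epistemic state ("given that I am
# around to experience anything"), P(I experience being killed) is 0 by
# construction -- the only sense in which "it will almost certainly not kill
# me" comes out true.
```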
Am I applying “Egan’s law” here? Kinda. I care about not dying because that’s how my brain’s built, and it was built that way by an evolutionary process formed in the actual world where a lineage isn’t any better off for having its siblings in other wavefunction-branches survive; and when describing probabilities I prefer to condition only on my present epistemic state because in most contexts that leads to neater formulas and fewer mistakes; and what I’m claiming is that those things aren’t invalidated by saying words like “anthropic” or “quantum”. But an explicit appeal to Egan seems unnecessary. I’m just reasoning in the usual way, and waiting to be shown a specific reason why I’m wrong.
I meant that not only his post but most of his comments were downvoted, and in my personal experience, if I get a lot of downvotes, I find it difficult to continue rational discussion of the topic.
Egan’s law is very vague in its short formulation. It is not clear what “all” refers to, what kind of law it is (epistemic, natural, legal), or what “normality” is (physics, experience, our expectations, our social agreements). So it is mostly used as a universal objection to any strange thing.
But there are lots of strange things. Nukes were not normal before they were created, and if one had applied Egan’s law before their creation, one might have claimed they were impossible. Strongly self-improving AI is also something new on Earth, but we don’t use Egan’s law to disprove its possibility.
Your interpretation of Egan’s law is that everything useful should already be used by evolution. In the case of QI it has some similarities to the anthropic principle, by the way, so there is nothing new here from an evolutionary point of view.
You also suggest using Egan’s law normatively: don’t do strange risky things.
I could suggest a more correct formulation of Egan’s law: it all adds up to normality in local surroundings (and in normal circumstances).
And from this it follows that when the surroundings become large enough, everything is not normal (think of black holes, the Sun becoming a red giant, or strange quantum effects at small scales).
In local surroundings, Newtonian, relativistic and quantum mechanics produce the same observations and the same visible world. Also, in normal circumstances I will not put a bomb in my house.
But, as the OP suggested, I know that soon 1 of 16 outcomes will happen, of which 15 will kill the Earth and me, so my best strategy should not be normal. In this case, going into a submarine with a diverse group of people capable of restoring civilization may be the best strategy. And here I get benefits even if QI doesn’t work, so it is a positive-sum game.
I put only a 10 per cent probability on QI working as intended, so I will try any other strategy which has a higher payoff (if I have one). That is why I will not put a bomb under my house in normal situations.
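For what it’s worth, a toy expected-payoff comparison along these lines might look as follows; the 15⁄16 extinction chance and the 10 per cent credence in QI are the figures from this thread, while the payoff values and the submarine’s survival chance are made-up numbers purely for illustration:

```python
# Toy comparison of two strategies using the numbers quoted in this thread:
# a 15/16 chance that the catastrophe happens and a 10% credence that QI
# "works as intended". All payoff values below are invented for illustration.

p_catastrophe = 15 / 16   # from the OP's scenario
p_qi = 0.10               # this commenter's credence in QI

ALIVE, DEAD = 1.0, 0.0    # arbitrary payoff units

# Strategy A: do nothing special and rely on QI alone.
ev_rely_on_qi = (1 - p_catastrophe) * ALIVE + p_catastrophe * (
    p_qi * ALIVE + (1 - p_qi) * DEAD)

# Strategy B: go into the submarine; assume (purely hypothetically) that it
# gives a 50% chance of physically surviving even if QI does not work.
p_sub = 0.5
ev_submarine = (1 - p_catastrophe) * ALIVE + p_catastrophe * (
    p_qi * ALIVE + (1 - p_qi) * p_sub * ALIVE)

print(f"relying on QI alone: expected payoff = {ev_rely_on_qi:.3f}")
print(f"submarine strategy : expected payoff = {ev_submarine:.3f}")
# The submarine strategy does at least as well whether or not QI works,
# which is the "positive-sum" point: it costs nothing in the QI-works case
# and gains a lot in the QI-fails case.
```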
But there are situations where I don’t risk anything if I use QI, but benefit if it works. One of them is cryonics, which I have signed up for.
So it is mostly used as a universal objection to any strange thing.
Well, for the avoidance of doubt, I do not endorse any such use and I hope I haven’t fallen into such sloppiness myself.
Your interpretation of Egan’s law is that everything useful should already be used by evolution.
No, I didn’t intend to say or imply that at all. I do, however, say that if evolution has found some particular mode of thinking or feeling or acting useful (for evolution’s goals, which of course need not be ours) then that isn’t generally invalidated by new discoveries about why the world is the way that’s made those things evolutionarily fruitful.
(Of course it could be, given the “right” discoveries. Suppose it turns out that something about humans having sex accelerates some currently unknown process that will in a few hundred years make the earth explode. Then the urge to have sex that evolution has implanted in most people would be evolutionarily suboptimal in the long run and we might do better to use artificial insemination until we figure out how to stop the earth-exploding process.)
In the case of QI it has some similarities to the anthropic principle, by the way
You could have deduced that I’d noticed that, from the fact that I wrote
what I’m claiming is that those things aren’t invalidated by saying words like “anthropic” or “quantum”.
but no matter.
You also suggest using Egan’s law normatively: don’t do strange risky things.
I didn’t intend to say or imply that, either, and this one I don’t see how you got out of what I wrote. I apologize if I was very unclear. But I might endorse as a version of Egan’s law something like “If something is a terrible risk, discovering new scientific underpinnings for things doesn’t stop it being a terrible risk unless the new discoveries actually change either the probabilities or the consequences”. Whether that applies in the present case is, I take it, one of the points under dispute.
so my best strategy should not be normal
I take it you mean might not be; it could turn out that even in this rather unusual situation “normal” is the best you can do.
even if QI doesn’t work
I have never been able to understand what different predictions about the world anyone expects if “QI works” versus if “QI doesn’t work”, beyond the predictions already made by physics. (QI seems to me to mean: standard physics, plus a decision to condition probabilities on future rather than present epistemic state. The first bit is unproblematic; the second bit—which is what you need to say e.g. “I will survive”—seems to me like a decision rather than a proposition, and I don’t know what it would mean to say that it does or doesn’t work.)
cryonics
I’m not really seeing any connection to speak of between cryonics and QI. (Except for this. Suppose you reckon that cryonics has a 5% chance of working on other people, but QI considerations lead you to say that for you it will almost certainly work. No, sorry, I see you give QI a 10% chance of working. So I mean that for you it will work with probability more like 10%. Does that mean that you’d be prepared to pay about twice as much for cryonics as you would be without bringing QI into it? (Given the presumably regrettable costs for whatever influence you might have hoped to have post mortem using the money: children, charities, etc.))
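To make the arithmetic in that parenthesis concrete (the 5% and 10% figures are the ones quoted; how they combine is only my guess at the intended reasoning):

```python
# Back-of-the-envelope version of the parenthetical above. The 5% baseline
# chance for cryonics and the 10% credence in QI are from the comments; the
# two ways of combining them below are just plausible readings.

p_cryo = 0.05   # chance cryonics works, viewed from the third person
p_qi = 0.10     # credence that QI "works as intended"

# Simple reading (the one in the text): for *you* it works roughly whenever
# QI works, i.e. ~10% instead of ~5% -- hence "about twice as much".
p_simple = p_qi

# Fuller mixture: if QI works you subjectively end up revived (~certainly);
# if it fails you are back to the ordinary 5%.
p_mixture = p_qi * 1.0 + (1 - p_qi) * p_cryo

print(f"baseline (ignoring QI): {p_cryo:.3f}")
print(f"simple reading        : {p_simple:.3f}  (about 2x the baseline)")
print(f"mixture reading       : {p_mixture:.3f}  (about 3x the baseline)")
```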
I have never been able to understand what different predictions about the world anyone expects if “QI works” versus if “QI doesn’t work”, beyond the predictions already made by physics.
Turchin may have something else in mind, but personally (since I’ve also used this expression several times on LW) I mean something like this: usually people think that when they die, their experience will be irreversibly lost (unless extra measures like cryonics are taken, or they are religious), meaning that the experiences they have just prior to death will be their final ones (and death will inevitably come). If “QI works”, this will not be true: there will never be final experiences, but instead there will be an eternal (or perhaps almost eternal) chain of experiences and thus no final death, from a first-person point of view.
Of course, it could be that if you’ve accepted MWI and the basic idea of multiple future selves implied by it then this is not very radical, but it sounds like a pretty radical departure from our usual way of thinking to me.
I think your last paragraph is the key point here. Forget about QI; MWI says some small fraction of your future measure will be alive very far into the future (for ever? depends on difficult cosmological questions); even objective-collapse theories say that this holds with nonzero but very small probability (which I suggest you should feel exactly the same way about); every theory, quantum or otherwise, says that at no point will you experience being dead-and-unable-to-experience things; all QI seems to me to add to this is a certain attitude.
Another interpretation is that it is a name for an implication of MWI that even many people who fully accept MWI seem to somehow miss (or deny, for some reason; just have a look at discussions in relevant Reddit subs, for example).
Objective-collapse theories in a spatially or temporally infinite universe, or with eternal inflation etc., actually say that it holds with nonzero but very small probability, but essentially give it an infinite number of chances to happen, meaning that this scenario is for all practical purposes identical to MWI. But I think what you are saying can be taken to mean something like “if the world were the way the normal intuitions of most people say it is”, in which case I still think there’s a world of difference between very small probability and very small measure.
I’m not entirely convinced by the usual EY/LW argument that utilitarianism can be salvaged in an MWI setting by caring about measure, but I can understand it and find it reasonable. But when this is translated to a first-person view, I find it difficult. The reason I believe that the Sun will rise tomorrow morning is not because my past observations indicate that it will happen in a majority of “branches” (“branches” or “worlds” of course not being a real thing, but a convenient shorthand), but because it seems like the most likely thing for me to experience, given past experiences. But if I’m in a submarine with turchin and x-risk is about to be realized, I don’t get how I could “expect” that I will most likely blow up or be turned into a pile of paperclips like everyone else, while I will certainly (and only) experience it not happening. If QI is an attitude, and a bad one too, I don’t understand how to adopt any other attitude.
Actually, I think there are at least a couple of variations of this attitude: the first one that people take upon first hearing of the idea and giving it some credibility is basically “so I’m immortal, yay; now I could play quantum russian roulette and make myself rich”; the second one, after thinking about it a bit more, is much more pessimistic; there are probably others, but I suppose you could say that underneath there is this core idea that somehow it makes sense to say “I’m alive” if even a very small fraction of my original measure still exists.
QI predicts not different variants of the world, but different variants of my future experiences. It says that I will not experience “non-existence”, but will instead experience my most probable way of surviving. If I have a 1-in-1000 chance of surviving some situation, QI shifts the probability that I will experience survival up to 1.
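A minimal way to put numbers on that claim (the 1-in-1000 figure is from the sentence above; the assumption that non-survival involves no experience is what does all the work):

```python
# Minimal illustration of the 1-in-1000 example above, under the assumption
# that the non-survival outcomes involve no experience at all.

p_survive = 1 / 1000   # ordinary, third-person survival probability

# Conditioning on the present epistemic state (the usual way):
print(f"P(I survive)                                = {p_survive}")

# Conditioning on there being any future experience of mine (the QI move):
# every branch containing such an experience is a survival branch, so the
# conditional probability is 1 by construction.
p_given_experience = p_survive / p_survive
print(f"P(I survive | I experience anything at all) = {p_given_experience}")

# The dispute in this thread is not over this arithmetic but over which of
# the two conditioning choices is the one worth caring about.
```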
But it could fail in unpredictable ways: if we are in a simulation and my plane crashes, my next experience will probably be a screen with the title “game over”, not the experience of being alive on the ground.
I agree with what you said in brackets about cryonics. I also think that investing in cryonics will help to promote it and other good things, so it doesn’t conflict with the “regrettable costs” you mention. I think that one rational course of action is to make a will in which one gives all one’s money to a cryocompany. (It also depends on the existence and well-being of children, and on other useful charities which could prevent x-risks, so it may need more complex consideration.)