P(cold war safe | survival) = P(cold war safe)*P(survival | cold war safe)/P(survival). [...]
That’s it. No extra complications, or worries about what to do with multiple copies of yourself (dealing with multiple copies comes under the purview of decision theory, rather than probability).
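A minimal numeric sketch of this identity, with made-up probabilities purely for illustration:

```python
# Ordinary Bayes update on having survived the cold war.
# All numbers are assumed for illustration only.
p_cws = 0.5                  # prior: the cold war was "safe"
p_s_given_cws = 0.99         # chance of surviving a safe cold war
p_s_given_not_cws = 0.50     # chance of surviving a dangerous one

p_s = p_s_given_cws * p_cws + p_s_given_not_cws * (1 - p_cws)
p_cws_given_s = p_s_given_cws * p_cws / p_s
print(p_cws_given_s)         # ~0.66: surviving is evidence that it was safe
```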
I don’t think you’re correct.
Copies seem to me to matter. In a quantum universe where you share consciousness with all of your copies, you have P(survival | ∙) = 1, and therefore P(cws | s) = P(s | cws)·P(cws)/P(s) = 1·P(cws)/1 = P(cws).
You might not have P(survival | ∙) = 1 for your copy’s survival, but you have it for the survival of any of your copies, and that’s the probability which matters, because that’s the observation you make.
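A matching sketch of the variant being argued for here: under the assumption that some copy of you survives in every branch, both P(s | cws) and P(s) are 1, so the posterior is just the prior.

```python
# If the survival of *some* copy is guaranteed either way, the likelihood
# ratio is 1 and the posterior collapses back to the prior.
p_cws = 0.5                          # same assumed prior as above
p_cws_given_s = 1.0 * p_cws / 1.0    # P(s | cws) = P(s) = 1
print(p_cws_given_s)                 # 0.5: no update from observing survival
```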
I’m still uncertain about what happens in the many world scenarios, see https://www.lesswrong.com/posts/NiA59mFjFGx9h5eB6/duplication-versus-probability
I think this is outside the scope of what is being argued here. This seems to be about subjective probability within the observable universe. What you are considering seems to require knowledge about the world we don’t have access to. That doesn’t make you wrong or Stuart wrong, but I do think you’re talking about different things.
(FWIW I brought up a similar concern to Stuart in person.)
But it is relevant for whether the leading proposition is true or false. So if the intention is to be agnostic about quantum copies, then it is wrong to assert that the proposition is true, as the post does.
I’m sympathetic to your line of reasoning (I’m not even sure what a counterfactual would really mean without using something like MWI), but I would suppose you could imagine a subjective, self-world-bounded interpretation where “less dangerous” means “less dangerous in expectation from back when I would have assessed the probability prior to knowing the outcome of the event”.
I don’t get that interpretation. I don’t know how you can bound it to one world when other worlds matter.
Let’s simplify and consider the cold war as a one-time event which either kills you or doesn’t. I’m claiming the observation that you survived tells you literally nothing about the probability that it kills you, except that it’s not 1. It could be that 99% of the mass of copies that originated from you at the point where you assessed the probability (prior to knowing the outcome) are now dead. This seems to be the expectation you’ve described, and your observations do nothing to update it in either direction. Our survival does in fact have exactly no implications about the danger.
If you have a probability distribution over whether quantum worlds (and if needed, shared consciousness) exist, then in that sense the probability has changed, since the probability conditioned on a single world has changed. But that seems like a cheap way out, and not what anyone intended to say. Or if you consider that a super dangerous cold war would also increase the probability of outcomes that look different from reality right now but in which you’re not dead, then that could be – and I think is – a legitimate reason to update. But that’s also not what’s been argued here.
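A toy simulation of this point (the model, the uniform prior over danger, and the two weightings are all assumed for illustration):

```python
# Toy model (assumed): each possible world has a "danger" level d, the
# probability that a given branch-copy of you is killed.  We compare two
# ways a surviving observer might weight those worlds.
import numpy as np

rng = np.random.default_rng(0)
danger = rng.uniform(0.0, 1.0, size=100_000)  # uniform prior over d

# Weighting A: you are a randomly sampled surviving branch-copy, so worlds
# where more copies die contribute proportionally fewer observers.
w_sampled_copy = 1.0 - danger

# Weighting B: you share consciousness with all copies, so you observe
# "survival" as long as any branch survives (here: whenever d < 1).
w_any_copy = (danger < 1.0).astype(float)

print(np.average(danger, weights=w_sampled_copy))  # ~0.33: survival lowers the estimate
print(np.average(danger, weights=w_any_copy))      # ~0.50: posterior equals the prior
```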
A problem with this line of reasoning is that it would apply to many other matters too. It’s thought that various planet-sized objects are wandering in interstellar space, but I think no one has a clear idea how many there are. One of them could zip into the solar system, hit earth, and destroy all life on earth. Do you think that the fact that this hasn’t happened for a few billion years is NO EVIDENCE AT ALL that the probability of it happening in any given year is low? The same question could be asked about many other possible catastrophic events, for some of which there might be some action we could take to mitigate the problem (for instance, a mutation making some species of insect become highly aggressive, and highly prolific, killing off all mammals, for which stockpiling DDT might be prudent). Do you think we should devote large amounts of resources to preventing such eventualities, even though ordinary reasoning would seem to indicate that they are very unlikely?
I happen to believe that there is a reason why my argument does not have the uncomfortable implications you’re sketching out here. But before I present it, I want to point out that this has no bearing on whether the argument is true or false. If it had as an implication that we have a 30% chance of going extinct on any given day, that wouldn’t make it any less plausible.
Do you think that the fact that this hasn’t happened for a few billion years is NO EVIDENCE AT ALL that the probability of it happening in any given year is low?
Well, it is some evidence because the quantum assumptions may not hold.
But if we assume the necessary things, then yes; I think the fact that we haven’t been killed by an asteroid in a few billion years is no evidence at all that the probability of it happening is low. However! The fact that we have also had very few large but non-fatal asteroid impacts across those few billion years is very strong evidence that it is unlikely (see the sketch below). And the same argument works for most stuff.
We only have to worry about the things for which we have a special reason to believe that they won’t come in less-than-lethal forms. Three candidates for such things are, I think, nuclear weapons, aliens, and superintelligence. And I am indeed totally biting the bullet on the implications there. I made a post about the first two here. And as for superintelligence, I think there will be some versions of us still around after a singularity, and it will indeed be incorrect for our future selves to conclude that, since we survived it, it wasn’t that dangerous after all.
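A toy sketch of the “non-fatal near misses carry the evidence” point above, under an assumed model in which fatal and non-fatal impacts share a common rate scale (all numbers invented):

```python
# Toy model (assumed): "huge but non-fatal" impacts occur at an unknown rate,
# and fatal impacts are assumed to be a fixed small fraction of that rate.
# Updating the rate on the observed count of non-fatal impacts then
# constrains the fatal rate, without ever conditioning on our survival.
import numpy as np

rng = np.random.default_rng(0)
T = 4e9                                        # years of record, roughly
rate = 10.0 ** rng.uniform(-12, -6, 200_000)   # log-uniform prior, impacts/year

observed_nonfatal = 3                          # invented count of huge non-fatal impacts
lam = rate * T
likelihood = np.exp(-lam) * lam ** observed_nonfatal   # Poisson likelihood (unnormalized)

fatal_fraction = 0.01                          # assumed: fatal impacts 1% as frequent
posterior_fatal_rate = np.average(fatal_fraction * rate, weights=likelihood)
print(posterior_fatal_rate)                    # ends up near observed_nonfatal / T * 1%
```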
I agree that the possibility of serious but less than catastrophic effects renders the issue here moot for many problems (which I think includes nuclear war). I tried to make the interstellar planet example one where the issue is real—the number of such planets seems to me to be unrelated to how many asteroids are in the solar system, which might collide with earth with less-catastrophic effects (or at least we could suppose so), whereas even a glancing collision with a planet-sized object would wipe out humanity. However, I may have failed with the mutated insect example, since one can easily imagine less catastrophic mutations.
I’m unclear on what your position is regarding such catastrophes, though. Something that quickly kills me seems like the most plausible situation where an argument regarding selection effects might be valid. But you seem to have in mind things that kill me more slowly as well, taking long enough for me to have lots of thoughts after realizing that I’m doomed. And you also seem to have in mind things that would have wiped out humanity before I was born, which seems like a different sort of thing altogether to me.
I tried to make the interstellar planet example one where the issue is real—the number of such planets seems to me to be unrelated to how many asteroids are in the solar system
Mh. I see. Well, my position on that isn’t complicated, it’s whatever the argument implies. If it is indeed true that we have no evidence on the probability of this even now, then I think it is possible that it happens quite frequently. (I’m ignorant on this, so I just have to take your word.) In regard to things that kill you “slowly,” I think time just matters proportionately. If an event sets your expected lifespan to one year, then it would have to happen with the frequency of once per year for you to have even odds of finding yourself in that world, which would then be moderate evidence. (I might have made a mistake there, but it seems to me like that’s how it works.) I think we can conclude that nukes probably don’t go off once per month, but not that they go off less than once per lifetime.
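A rough arithmetic sketch of that proportionality claim, under assumed simplifications (exponential waiting time for the event, a fixed remaining lifespan afterwards, and “you” as a randomly sampled observer-moment):

```python
# Assumed model: the event arrives as a Poisson process with some yearly rate,
# you live a fixed number of years after it happens, and "finding yourself in
# that world" means a randomly sampled observer-moment falls after the event.
def odds_of_being_doomed(rate_per_year: float, years_left_after_event: float) -> float:
    """Fraction of expected observer-moments spent after the event, before death."""
    expected_years_until_event = 1.0 / rate_per_year
    return years_left_after_event / (years_left_after_event + expected_years_until_event)

print(odds_of_being_doomed(1.0, 1.0))      # 0.5  -> once per year, one year left: even odds
print(odds_of_being_doomed(1 / 80, 1.0))   # ~0.01 -> once per lifetime: you'd almost never notice
```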