Doomsday predictions have never come true in the past, no matter how much confidence the futurist had. Why should we believe this particular futurist?
And why would that be?...
I salute your sense of humor here, but I suspect that it needs spelling out…
Anthropic issues are relevant here.
It is not possible for humans to observe the end of the human race, so the lack of that observation is not evidence.
Global catastrophic risks that stopped short of extinguishing the race have happened. At one point, it is theorized, there were just 500 reproducing human females left. That counts as a close shave.
Also, Homo floresiensis and Homo neanderthalensis did, in fact, get wiped out.
I don’t think pre-modern catastrophes are relevant to this discussion.
The point about the anthropic issues is well taken, but I still contend that we should be skeptical of over-hyped predictions by supposed experts, especially when they propose solutions that (apparently, to me) reduce ‘freedoms’.
There is a grand tradition of them failing.
And, if we do have the anthropic explanation to ‘protect us’ from doomsday-like outcomes, why should we worry about them?
Can you explain how it is not hypocritical to consider anthropic explanations relevant to previous experiences but not to future ones?
Observing that you currently exist trivially implies that you haven’t been destroyed, but it doesn’t imply that you won’t be destroyed. As simple as that.
I can’t observe myself getting destroyed either, however.
When you close your eyes, the world doesn’t go dark.
The world probably doesn’t go dark. We can’t know for sure without using sense data.
http://lesswrong.com/lw/pb/belief_in_the_implied_invisible/
Anthropics will prevent us from being able, after the event, to observe that the human race has ended. Dead people don’t make observations. However, it will have ended, which many consider to be a bad thing. I suspect that you’re confused about what it is that anthropics says: consider reading the LW wiki or Wikipedia on it.
Of course, if you bring Many Worlds QM into this mix, then you have the quantum immortality hypothesis, stating that nothing can kill you. However, I am still a little uncertain of what to make of QI.
I think I was equating quantum immortality with anthropic explanations, in general. My mistake.
No problem. QI still confuses me somewhat. If my reading of the situation is correct, then properly implemented quantum suicide really would win you the lottery, without you especially losing anything. (Yes, in the branches where you lose, you no longer exist; but since I am branching at a rate of 10^(10^2) or so splits per second, who cares about a factor of 10^6 here or there? Survival for just one extra second would make up for it: the number of “me”s is increasing so quickly that losing 99.999999% of them is negated by waiting a fraction of a second.)
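A back-of-the-envelope sketch of that arithmetic (a toy Python calculation; the 10^(10^2) branching rate is taken on faith from the comment above, and the lottery odds are the Mega Millions figure used later in the thread):

```python
import math

# Assumed, from the comment above (not from physics): the number of
# branches multiplies by ~10^100 every second.
log10_branches_per_second = 100

# Quantum-suicide lottery: you survive only in the ~1 in 1.76e8
# branches where you win the jackpot, i.e. you lose a factor of
# ~10^8.2 of your branches.
log10_factor_lost = math.log10(175_711_536)

# Time for raw branch count to "replace" the lost branches:
recovery_seconds = log10_factor_lost / log10_branches_per_second
print(f"Branch count recovered after ~{recovery_seconds:.3f} seconds")
# -> ~0.082 seconds: this is the sense in which "waiting a fraction
#    of a second" makes up for the lost branches, IF branch count is
#    what you care about.
```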
You’re talking about the number of branches, but perhaps the important thing is not that but measure, i.e., squared amplitude. Branching preserves measure, while quantum suicide doesn’t, so you can’t make up for it by branching more times if what you care about is measure.
It seems clear that on a revealed preference level, people do care about measure, and not the number of branches, since nobody actually attempts quantum suicide, nor do they try to do anything to increase the branching rate.
If you go further and ask why we do, or should, care about measure instead of the number of branches, I have to answer that I don’t know. But I think one clue is that those who care about the number of branches but not about measure will end up in a large number of branches but with small measure, and they will have high algorithmic complexity/low algorithmic probability as a result.
(I may have written more about this in an OB comment, and I’ll try to look it up. ETA: Nope, can’t find it now.)
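To make the branch-count-versus-measure distinction concrete, here is a minimal toy model (my sketch, not anything from the thread; real decoherence is nothing this clean, and “suicide” here just means discarding branches):

```python
import cmath

def measure(branches):
    """Total measure = sum of squared amplitude magnitudes."""
    return sum(abs(a) ** 2 for a in branches)

def branch(branches):
    """Each branch splits into two equal-weight children; amplitude is
    divided so that squared magnitudes still sum to the same total."""
    return [a / cmath.sqrt(2) for a in branches for _ in range(2)]

# One world, amplitude 1 (measure 1).
world = [1 + 0j]

for _ in range(10):
    world = branch(world)
print(len(world), measure(world))
# -> 1024 branches, measure still ~1.0: branching conserves measure.

# Quantum suicide: keep only the branches where you won (say 1 in 8).
survivors = world[:len(world) // 8]
print(len(survivors), measure(survivors))
# -> 128 branches, measure ~0.125: the branch count can be regrown by
#    further branching, but the lost measure cannot, since branching
#    only ever redistributes it.
```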
Do you think that the thing that, as a historical fact, causes people not to try quantum suicide is the argument that it decreases measure? I doubt this a lot. Do you think that if people were told that it preserved measure, they would be popping off to do it all the time?
I don’t think that people are revealing a preference for measure here. I think that they’re revealing that they trust their instinct to not do weird things that look like suicide to their subconscious.
No, I’m not claiming that. I think people avoid quantum suicide because they fear death. Perhaps we can interpret that as caring about measure, or maybe not. In either case there is still the question of why we fear death, and of whether it makes sense to care about measure. As I said, I don’t know the answers, but I think I do have a clue that others don’t seem to have noticed yet.
ETA: Or perhaps we should take the fear of death as a hint that we should care about measure, much like how Eliezer considers his altruistic feelings to be a good reason for adopting utilitarianism.
If quantum suicide works, then there’s little hurry to use it, since it’s not possible to die before getting the chance. Anyone who does have quantum immortality should expect to have it proven to them, by going far enough over the record age if nothing else. So attempting quantum suicide without such proof would be wrong.
Um, what? Why did we evolve to fear death? I suspect I’m missing something here.
You’re converting an “is” to an “ought” there with no explanation, or else I don’t know in what sense you’re using “should”.
That the way we fear death has the effect of maximizing our measure, but not the number of branches we are in, is perhaps a puzzle. See also http://lesswrong.com/lw/19d/the_anthropic_trilemma/14r8 starting at “But a problem with that”.
I’m pointing out a possible position one might take, not one that I agree with myself. See http://lesswrong.com/lw/196/boredom_vs_scope_insensitivity/14jn
Yes, but you didn’t explain why anyone would want to take that position, and I didn’t manage to infer why. One obvious reason, that the fear of death (the fear of a decrease in measure) is some sort of legitimate signal about what matters to many people, prompts the question of why I should care about what evolution has programmed into me. Or, more subtly, the question of why my morality function should (logically) weight similarly two quite different things, a huge extrinsic decrease in my measure (involuntary death) versus a self-imposed selective decrease in measure, which were not at all separate as far as evolution is concerned, since only the former was possible in the EEA, and perhaps where upon reflection only the reasons for the former seem intuitively clear.
ETA: Also, I totally don’t understand why you think that it’s a puzzle that evolution optimized us solely for the branches of reality with the greatest measure.
Have you looked at Jacques Mallah’s papers?
Yes, and I had a discussion with him last year at http://old.nabble.com/language%2C-cloning-and-thought-experiments-tt22185985.html#a22189232 (Thanks for the reminder.)
If you follow the above link, you’ll see that I actually took a position that’s opposite of my position here: I said that people mostly don’t care about measure. I think the lesson here is that A) I have a very bad memory :-) and B) I don’t know how to formalize human preferences.
Well, Wei, I certainly agree that formalizing human preferences is tough!
Preserves measure of what, exactly? The integral of squared amplitude over all arrangements of particles that we classify into the “Roko ALIVE” category?
I.e. it preserves the measure of the set of all arrangements of particles that we classify into the “Roko ALIVE” category.
Yes, something like that.
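In symbols (my notation, nothing the thread itself commits to), the quantity being agreed on is roughly

$$\mu(\mathrm{ALIVE}) \;=\; \int_{x \,\in\, \mathrm{ALIVE}} |\psi(x)|^2 \, dx$$

Unitary branching only redistributes this squared-amplitude weight, while quantum suicide moves weight out of the ALIVE region.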
But suppose that what you really care about is what you’re about to experience next, rather than measure, i.e., the sum of the squared absolute values of the complex amplitudes multiplying all of your branches?
I think this is a more reasonable alternative to “caring about measure” (as opposed to “caring about the number of branches” which is mainly what I was arguing against in my first reply to you in this thread). I’m not sure what I can say about this that might be new to you. I guess I can point out that this is not something that “evolution would do” if mind copying technology were available, but that’s another “clue” that I’m not sure what to make of.
OK, I’ll appease the part of me that cares about what my genes want by donating to every sperm bank in the country (an exploit that very few people use), then I’ll use the money from that to buy 1000 lottery tickets determined by random qubits, and on with the QS moneymaker ;-)
Source? I’m curious how that’s calculated.
Well, if you have anyone who cares deeply about your continued living, then doing so would hurt them deeply in 99.999999% of universes. But if you’re completely alone in the world or a sociopath, then go for it! (Actually, I calculated the percentage for the Mega Millions jackpot, whose odds are 1 in C(56,5)×46 = 175,711,536, giving 1 - 1/1.76e8 ≈ 99.9999994%. Doesn’t affect your argument, of course.)
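A quick sanity check of those odds (standard jackpot combinatorics for the 5-of-56-plus-1-of-46 Mega Millions format referenced above):

```python
from math import comb

# 5 numbers chosen from 56 (unordered, no repeats), plus 1 Mega Ball from 46:
jackpot_odds = comb(56, 5) * 46          # 175,711,536
losing_fraction = 1 - 1 / jackpot_odds

print(f"1 in {jackpot_odds:,}")          # 1 in 175,711,536
print(f"{losing_fraction:.10%}")         # 99.9999994309%

# Note: 56**5 * 46 (~2.5e10) overcounts, because it treats the five
# numbers as ordered draws with replacement; the unordered count is
# C(56,5) = 3,819,816.
```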
Don’t trust this; it’s just my guess. This is roughly the number of photons that interact with you per second.
This is a legitimate heuristic, but how familiar are you with the object-level reasoning in this case, which IMO is much stronger?
Not very. Thanks for the link.
So I assume you’re not afraid of AI?