For example, I can easily imagine that humans will never be able to colonize even this one galaxy, or even any solar system other than this one.
Counts as Doomsday, also doesn’t work because this solar system could support vast numbers of uploads for vast amounts of time (by comparison to previous population).
Or that they will artificially limit the number of individuals.
This is a potential reply to both Doomsday and SA but only if you think that ‘random individual’ has more force than a similar argument from ‘random observer-moment’, i.e. to the second you reply, “What do you mean, why am I near the beginning of a billion-year life rather than the middle? Anyone would think that near the beginning!” (And then you have to not translate that argument back into a beginning-civilization saying the same thing.)
Or maybe the only consistent CEV is that of a single superintelligence of which human minds will be tiny parts.
...whereupon we wonder something about total ‘experience mass’, and, if that argument doesn’t go through, why the original Doomsday Argument / SH should either.
Thanks, I’ll chew on that a bit. I don’t understand the argument in the second and third paragraphs. Also, it’s not clear to me whether by “counts as doomsday” you mean the standard doomsday with the probability estimates attached, or some generalized doomsday, with no clear timeline or total number of people estimated.
Anyway, the feeling I get from your reply is that I’m missing some basic background stuff here I need to go through first, not the usual “this guy is talking out of his ass” impression when someone invokes anthropics in an argument.
No, this is talking-out-of-our-ass anthropics, it’s just that the anthropic part comes in when you start arguing “No, you can’t really be in a position of that much influence”, not when you’re shrugging “Sure, why shouldn’t you have that much influence?” Like, if you’re not arriving at your probability estimate for “Humans will never leave the solar system” just by looking at the costs of interstellar travel, and are factoring in how unique we’d have to be, this is where the talking-out-of-our-ass anthropics comes in.
Though it should be clearly stated that, as always, “We don’t need to talk out of our ass!” is also talking out of your ass, and not necessarily a nicer ass.
it’s just that the anthropic part comes in when you start arguing “No, you can’t really be in a position of that much influence”, not when you’re shrugging “Sure, why shouldn’t you have that much influence?”
Or when you (the generic you) start arguing “Yes, I am indeed in a position of that much influence”, as opposed to “There is an unknown chance of me being in such a position, which I cannot give a ballpark estimate for without talking out of my ass, so I won’t”?
When you try to say that there’s something particularly unknown about having lots of influence, you’re using anthropics.
Huh. I don’t understand how refusing to speculate about anthropics counts as anthropics. I guess that’s what you meant by
Though it should be clearly stated that, as always, “We don’t need to talk out of our ass!” is also talking out of your ass, and not necessarily a nicer ass.
I wonder if your definition of anthropics matches mine. I assume that any statement of the sort
All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present, and future) in their reference class.
is anthropics. I do not see how refusing to reason based on some arbitrary set of observers counts as anthropics.
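For concreteness, here is the textbook calculation this sort of sampling assumption licenses, i.e. the Bayesian shift behind the Doomsday Argument; the two hypotheses, the equal priors, and the birth rank of roughly 6×10^10 used below are illustrative assumptions, not figures from this exchange.
\[
P(N \mid r) \propto P(r \mid N)\,P(N), \qquad P(r \mid N) = \frac{1}{N} \ \text{for } r \le N,
\]
\[
\frac{P(N_{\text{soon}} = 2\times 10^{11} \mid r \approx 6\times 10^{10})}{P(N_{\text{late}} = 2\times 10^{14} \mid r \approx 6\times 10^{10})}
= \frac{1/(2\times 10^{11})}{1/(2\times 10^{14})} \cdot \frac{P(N_{\text{soon}})}{P(N_{\text{late}})}
= 1000 \cdot 1.
\]
With equal priors, the birth rank alone shifts the odds a thousandfold toward the smaller total population; whether that move is legitimate is what the rest of the exchange is disputing.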
Right. So if you just take everything at face value—the observed laws of physics, the situation we seem to find ourselves in, our default causal model of civilization—and say, “Hm, looks like we’re collectively in a position to influence the future of the galaxy,” that’s non-anthropics. If you reply “But that’s super improbable a priori!” that’s anthropics. If you counter-reply “I don’t believe in all this anthropic stuff!” that’s also an implicit theory of anthropics. If you treat the possibility as more “unknown” than it would be otherwise, that’s anthropics.
OK, I think I understand your point now. I still feel uneasy about a projection like you influencing 10^80 people in some far future, mainly because I think it does not account for unknown unknowns and so is lost in the noise and ought to be ignored, but I don’t have a calculation to back up this uneasiness at the moment.
if you think that ‘random individual’ has more force than a similar argument from ‘random observer-moment’
I’ve had a vague idea as to why the random observer-moment argument might not be as strong as the random individual one, though I’m not very confident it makes much sense. (But neither argument sounds anywhere near obviously wrong to me.)
Does he?
Does he what?