If there was ever a reliable indicator that you’re wrong about something, it is the belief that you are special to the order of 1 in 10^70.
So do you believe in the Simulation Hypothesis or the Doomsday Argument, then? All attempts to cash out that refusal-to-believe end in one or the other, inevitably.
From where I stand, it’s more like arcane meta-arguments about probability are motivating a refusal-to-doubt the assumptions of a prized scenario.
Yes, I am a priori skeptical of anything which says I am that special. I know there are weird counterarguments (SIA) and I never got to the bottom of that debate. But meta issues aside, why should the “10^80 scenario” be the rational default estimate of Earth’s significance in the universe?
The 10^80 scenario assumes both that it’s physically possible to conquer the universe and that nothing would try to stop such a conquest: two enormous assumptions, astronomically naive and optimistic about the cosmic prospects that await an Earth which doesn’t destroy itself.
Okay, so that’s the Doomsday Argument then: Since being able to conquer the universe implies we’re special to the order of 1 in 10^70, we must not be able to conquer the universe.
Calling the converse of this an arcane meta-argument about probability hardly seems fair. You can make a case for Doomsday but it’s not non-arcane.
Perhaps this is hairsplitting but the principle I am employing is not arcane: it is that I should doubt theories which imply astronomically improbable things. The only unusual step is to realize that theories with vast future populations have such an implication.
I am unable to state what the SIA counterargument is.
In the theory that there are astronomically large numbers of people, it is a certainty that some of them came first. The probability that YOU are one of those people is equal to the probability that YOU are any one of those other people. However, being among the first does define a certain narrow equivalence class that you happen to be a member of.
It’s a bit like the difference between theorizing A) that, given that you bought a ticket, you’ll win the lottery, and B) that, given that the lottery folks gave you a large sum, you had the winning ticket.
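To make the bookkeeping concrete, here is a minimal numerical sketch of the kind of update under dispute, using invented hypothesis sizes, a 50/50 prior, and the simplest “uniform birth rank” sampling model; it illustrates the standard SSA-style shift and the standard SIA-style weighting that cancels it, not any particular position taken in this thread.

```python
# Illustration only: invented hypothesis sizes, a 50/50 prior, and the
# simplest "uniform birth rank" model of the reasoning sketched above.

MY_BIRTH_RANK = 1e11   # roughly how many humans have been born before now

hypotheses = {
    "small world (10^12 people ever)": 1e12,
    "huge world (10^80 people ever)": 1e80,
}

def posterior(weight_prior_by_population: bool) -> dict:
    """Update a 50/50 prior on the evidence of my birth rank.

    False = SSA-style bookkeeping; True = SIA-style weighting (bigger
    worlds contain more candidates for being "me"), which cancels the
    1/total likelihood term.
    """
    unnormalized = {}
    for name, total_people in hypotheses.items():
        prior = 0.5
        if weight_prior_by_population:
            prior *= total_people
        likelihood = 1.0 / total_people if MY_BIRTH_RANK <= total_people else 0.0
        unnormalized[name] = prior * likelihood
    z = sum(unnormalized.values())
    return {name: w / z for name, w in unnormalized.items()}

print(posterior(weight_prior_by_population=False))  # Doomsday-style shift toward the small world
print(posterior(weight_prior_by_population=True))   # back to 50/50
```

On the SSA-style bookkeeping the small world takes essentially all of the posterior, which is the “astronomically improbable” implication being pointed at above; the SIA-style weighting restores the 50/50 prior, which is roughly the cancellation being asked about, stated at the level of bookkeeping rather than principles.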
That’s not the “SIA counterargument”, which is what I want to hear (in a compact form that makes it sound straightforward). You’re just saying “accept the evidence that something ultra-improbable happened to you, because it had to happen to someone”.
I was only replying to the first paragraph, really. Even under the SSA there’s no real problem here. I don’t see how the SIA makes matters worse.
Right. That’s arcane. Mundane theories have no need to measure the population of the universe.
But it’s still a simple idea once you grasp it. I was hoping you could state the counterargument with comparable simplicity. What is the counterargument, at the level of principles, that neutralizes this one?
I largely agree with your skepticism. I would go even further and say that even if the 10^80 scenario happens, what we do now can only influence it by random chance, because the uncertainty in the calculations of the consequences of our near-term actions on the far future overwhelms the calculations themselves. That said, we should still do what we think is best in the near term (defined by our estimates of the uncertainty being reasonably small), just not invoke the 10^80 leverage argument. This can probably be formalized by assuming that the prediction error grows exponentially with some relevant parameter, like time or the number of choices investigated, and calculating the exponent from historical data.
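A rough sketch of the proposed formalization, under the stated assumption that prediction error grows exponentially with the forecasting horizon; the historical error figures below are placeholders, not real data.

```python
import math

# Placeholder (horizon in years, observed relative prediction error) pairs;
# real historical calibration data would go here.
historical = [(1, 0.05), (5, 0.20), (10, 0.55)]

# Fit log(error) = log(a) + k * t by least squares, per the assumption that
# error grows like a * exp(k * t).
n = len(historical)
sum_t = sum(t for t, _ in historical)
sum_y = sum(math.log(e) for _, e in historical)
sum_tt = sum(t * t for t, _ in historical)
sum_ty = sum(t * math.log(e) for t, e in historical)
k = (n * sum_ty - sum_t * sum_y) / (n * sum_tt - sum_t ** 2)
log_a = (sum_y - k * sum_t) / n

def relative_error(t_years: float) -> float:
    """Predicted relative error at a given forecasting horizon."""
    return math.exp(log_a + k * t_years)

# Horizon beyond which the predicted error exceeds the effect being estimated
# (relative_error(t) >= 1), so long-range "leverage" claims wash out:
t_star = -log_a / k
print(f"fitted exponent k = {k:.2f} per year")
print(f"predictions lose to noise beyond roughly {t_star:.0f} years")
```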
Doomsday for me, I think. Especially when you consider that it doesn’t mean doomsday is literally imminent, just “imminent” relative to the kind of timescale that would be expected to create populations on the order of 10^80.
In other words, it fits with the default human assumption that civilization will basically continue as it is for another few centuries or millennia before being wiped out by some great catastrophe.
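Some rough arithmetic on why “imminent” stays relative here: even many millennia of business-as-usual population growth gets nowhere near 10^80 people (all figures are order-of-magnitude guesses, not from the thread).

```python
# Order-of-magnitude guesses only.
humans_so_far = 1e11        # total humans ever born, roughly
births_per_year = 1.3e8     # current global birth rate, roughly

# Another hundred thousand years of Earth-bound civilization at today's rate:
future_total = humans_so_far + births_per_year * 1e5
print(f"total after 100,000 more years: ~{future_total:.0e} people")  # ~1e13
print(f"still short of 10^80 by a factor of ~10^{80 - 13}")
```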
Do you mind elaborating on this inevitability? It seems like there ought to be other assumptions involved. For example, I can easily imagine that humans will never be able to colonize even this one galaxy, or even any solar system other than this one. Or that they will artificially limit the number of individuals. Or maybe the only consistent CEV is that of a single superintelligence of which human minds will be tiny parts. All of these result in a rather small total number of individuals existing at any point in time.
For example, I can easily imagine that humans will never be able to colonize even this one galaxy, or even any solar system other than this one.
Counts as Doomsday; it also doesn’t work, because this solar system could support vast numbers of uploads for vast amounts of time (by comparison to the previous population).
Or that they will artificially limit the number of individuals.
This is a potential reply to both Doomsday and SA, but only if you think that ‘random individual’ has more force than a similar argument from ‘random observer-moment’, i.e. to the second you reply, “What do you mean, why am I near the beginning of a billion-year life rather than the middle? Anyone would think that near the beginning!” (And then you have to not translate that argument back into a beginning-civilization saying the same thing.)
Or maybe the only consistent CEV is that of a single superintelligence of which human minds will be tiny parts.
...whereupon we wonder something about total ‘experience mass’, and, if that argument doesn’t go through, why the original Doomsday Argument / SH should either.
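A toy way to see the ‘random individual’ versus ‘random observer-moment’ contrast being drawn here, with invented numbers; these are just the likelihoods each sampling rule assigns, not anyone’s actual estimates.

```python
# Invented numbers: two worlds with the SAME number of individuals ever,
# differing only in how long each individual lives.
worlds = {
    "short doom":         {"people": 2e11, "years_each": 50},
    "capped, long-lived": {"people": 2e11, "years_each": 1e9},
}

MY_BIRTH_RANK = 1e11    # roughly which person I am
MY_MOMENT_RANK = 5e12   # roughly which person-year of experience this is

for name, w in worlds.items():
    total_moments = w["people"] * w["years_each"]
    p_rank = 1.0 / w["people"]       # P(my birth rank | world), sampling individuals
    p_moment = 1.0 / total_moments   # P(my moment rank | world), sampling observer-moments
    print(f"{name:20s} P(birth rank) = {p_rank:.0e}   P(moment rank) = {p_moment:.0e}")

# Sampling random individuals, the two worlds are indistinguishable: capping
# the population has answered the birth-rank argument. Sampling random
# observer-moments, "short doom" is favored by a factor of ~2e7, and the
# Doomsday-style shift comes back.
```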
Thanks, I’ll chew on that a bit. I don’t understand the argument in the second and third paragraphs. Also, it’s not clear to me whether by “counts as doomsday” you mean the standard doomsday with the probability estimates attached, or some generalized doomsday, with no clear timeline or total number of people estimated.
Anyway, the feeling I get from your reply is that I’m missing some basic background stuff here I need to go through first, not the usual “this guy is talking out of his ass” impression when someone invokes anthropics in an argument.
No, this is talking-out-of-our-ass anthropics, it’s just that the anthropic part comes in when you start arguing “No, you can’t really be in a position of that much influence”, not when you’re shrugging “Sure, why shouldn’t you have that much influence?” Like, if you’re not arriving at your probability estimate for “Humans will never leave the solar system” just by looking at the costs of interstellar travel, and are factoring in how unique we’d have to be, this is where the talking-out-of-our-ass anthropics comes in.
Though it should be clearly stated that, as always, “We don’t need to talk out of our ass!” is also talking out of your ass, and not necessarily a nicer ass.
it’s just that the anthropic part comes in when you start arguing “No, you can’t really be in a position of that much influence”, not when you’re shrugging “Sure, why shouldn’t you have that much influence?”
Or when you (the generic you) start arguing “Yes, I am indeed in a position of that much influence”, as opposed to “There is an unknown chance of me being in such a position, which I cannot give a ballpark estimate for without talking out of my ass, so I won’t”?
When you try to say that there’s something particularly unknown about having lots of influence, you’re using anthropics.
Huh. I don’t understand how refusing to speculate about anthropics counts as anthropics. I guess that’s what you meant by
Though it should be clearly stated that, as always, “We don’t need to talk out of our ass!” is also talking out of your ass, and not necessarily a nicer ass.
I wonder if your definition of anthropics matches mine. I assume that any statement of the sort
All other things equal, an observer should reason as if they are randomly selected from the set of …
is anthropics. I do not see how refusing to reason based on some arbitrary set of observers counts as anthropics.
Right. So if you just take everything at face value—the observed laws of physics, the situation we seem to find ourselves in, our default causal model of civilization—and say, “Hm, looks like we’re collectively in a position to influence the future of the galaxy,” that’s non-anthropics. If you reply “But that’s super improbable a priori!” that’s anthropics. If you counter-reply “I don’t believe in all this anthropic stuff!” that’s also an implicit theory of anthropics. If you treat the possibility as more “unknown” than it would be otherwise, that’s anthropics.
OK, I think I understand your point now. I still feel uneasy about projections like your influencing 10^80 people in some far future, mainly because I think they do not account for the unknown unknowns, and so are lost in the noise and ought to be ignored, but I don’t have a calculation to back up this uneasiness at the moment.
Does he?
Does he what?
if you think that ‘random individual’ has more force than a similar argument from ‘random observer-moment’
I’ve had a vague idea as to why the random observer-moment argument might not be as strong as the random individual one, though I’m not very confident it makes much sense. (But neither argument sounds anywhere near obviously wrong to me.)