From where I stand, it’s more like arcane meta-arguments about probability are motivating a refusal to doubt the assumptions of a prized scenario.
Yes, I am a priori skeptical of anything which says I am that special. I know there are weird counterarguments (SIA), and I never got to the bottom of that debate. But meta issues aside, why should the “10^80 scenario” be the rational default estimate of Earth’s significance in the universe?
The 10^80 scenario assumes both that it’s physically possible to conquer the universe and that nothing would try to stop such a conquest: two enormous assumptions, astronomically naive and optimistic about the cosmic prospects that await an Earth which doesn’t destroy itself.
Okay, so that’s the Doomsday Argument then: Since being able to conquer the universe implies we’re 10^70 special, we must not be able to conquer the universe.
Calling the converse of this an arcane meta-argument about probability hardly seems fair. You can make a case for Doomsday, but it’s not a non-arcane one.
Perhaps this is hairsplitting, but the principle I am employing is not arcane: it is that I should doubt theories which imply astronomically improbable things. The only unusual step is to realize that theories with vast future populations have such an implication.
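To spell out that unusual step (my reconstruction of the standard Doomsday arithmetic, with illustrative magnitudes; the commenter may have a different derivation in mind): treat your birth rank as a uniform draw from whichever total population actually exists.

```latex
% Doomsday update under self-sampling; magnitudes illustrative only.
% Given total population N, your birth rank r is uniform: P(r | N) = 1/N.
\[
\frac{P(N_{\mathrm{big}} \mid r)}{P(N_{\mathrm{small}} \mid r)}
  = \frac{P(r \mid N_{\mathrm{big}})}{P(r \mid N_{\mathrm{small}})}
    \cdot \frac{P(N_{\mathrm{big}})}{P(N_{\mathrm{small}})}
  = \frac{N_{\mathrm{small}}}{N_{\mathrm{big}}}
    \cdot \frac{P(N_{\mathrm{big}})}{P(N_{\mathrm{small}})}
  \approx 10^{-70} \cdot \frac{P(N_{\mathrm{big}})}{P(N_{\mathrm{small}})}
\]
% With N_small ~ 10^10 observers so far and N_big ~ 10^80, the vast-future
% theory makes your observed rank astronomically improbable -- that is the
% "10^70 special" factor above, and the thing the stated principle doubts.
```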
I am unable to state what the SIA counterargument is.
In the theory that there are astronomically large numbers of people, it is a certainty that some of them came first. The probability that YOU are one of those people is equal to the probability that YOU are any one of those other people. However, it does define a certain narrow equivalence class that you happen to be a member of.
It’s a bit like the difference between theorizing that: A) given that you bought a ticket, you’ll win the lottery, and B) given that the lottery folks gave you a large sum, you had the winning ticket.
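A toy simulation of the equivalence-class point (scaled-down, invented numbers): under a big-population theory, “coming first” is just membership in one narrow class, no likelier for you than any other class of the same size.

```python
import random

# Toy self-sampling model with scaled-down, invented numbers.
N_TOTAL = 10**6       # stand-in for an astronomically large total population
FIRST = 10**2         # stand-in for "the people who came first"
TRIALS = 10**6

# Draw a random birth rank; count how often it lands in the "first" class.
hits = sum(random.randrange(N_TOTAL) < FIRST for _ in range(TRIALS))
print(hits / TRIALS)  # ~ FIRST / N_TOTAL = 1e-4

# The same frequency holds for ANY class of 100 ranks; "first" is special
# only in the way the winning ticket is special after the payout arrives.
```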
That’s not the “SIA counterargument”, which is what I want to hear (in a compact form that makes it sound straightforward). You’re just saying “accept the evidence that something ultra-improbable happened to you, because it had to happen to someone”.
I was only replying to the first paragraph, really. Even under the SSA there’s no real problem here. I don’t see how the SIA makes matters worse.
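For reference, the compact form of the SIA counterargument alluded to here (a standard gloss, not the commenter’s wording): SIA weights each hypothesis by how many observers it contains, and that weight exactly cancels the rank likelihood.

```latex
% SIA weighting versus the rank likelihood (schematic).
\[
P(N \mid \text{you exist},\ r)
  \;\propto\;
  \underbrace{N}_{\text{SIA: more candidate observers}}
  \times
  \underbrace{1/N}_{\text{rank likelihood } P(r \mid N)}
  \times P(N)
  \;=\; P(N)
\]
% The factors of N cancel, so the Doomsday shift disappears -- at the price
% of a prior that scales with the population of each candidate universe.
```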
Right. That’s arcane. Mundane theories have no need to measure the population of the universe.
But it’s still a simple idea once you grasp it. I was hoping you could state the counterargument with comparable simplicity. What is the counterargument at the level of principles, which neutralizes this one?
I largely agree with your skepticism. I would go even further and say that even if the 10^80 scenario happens, what we do now can only influence it by random chance, because the uncertainty in calculating the far-future consequences of our near-term actions overwhelms the calculations themselves. That said, we should still do what we think is best in the near term (defined by our estimates of the uncertainty being reasonably small), just not invoke the 10^80 leverage argument. This can probably be formalized by assuming that the prediction error grows exponentially with some relevant parameter, like time or the number of choices investigated, and calculating the exponent from historical data.
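One way that formalization might look (a sketch with invented data; the functional form and the numbers are assumptions, not the commenter’s): fit the exponential error-growth rate from historical forecasts, then discount far-future leverage by the chance a deliberately aimed effect survives at all.

```python
import math

# Sketch of the proposed formalization, with invented data: assume the
# relative prediction error grows as A * exp(lam * t) and fit lam by
# least squares on log(error) versus forecast horizon t (years).
history = [(1, 0.10), (5, 0.35), (10, 0.90), (20, 2.50)]  # hypothetical

n = len(history)
sum_t = sum(t for t, _ in history)
sum_y = sum(math.log(e) for _, e in history)
sum_tt = sum(t * t for t, _ in history)
sum_ty = sum(t * math.log(e) for t, e in history)
lam = (n * sum_ty - sum_t * sum_y) / (n * sum_tt - sum_t ** 2)

# Rough survival probability of a deliberately aimed effect after t years:
# exp(-lam * t). This factor collapses long before astronomical horizons,
# which is the sense in which it swamps any 10^80 leverage multiplier.
for t in (10, 100, 1000):
    print(f"t={t:>5} years  survival ~ {math.exp(-lam * t):.3e}")
```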