LessWrong contains a large intersection of people who are interested in x-risk reduction and people who are aware of the Doomsday Argument. Yet the two seem incompatible with each other, so I’m going to ask about the elephant in the room:
What are your stances on the Doomsday Argument? Does it encourage or discourage you from working on x-risks? Is it a significant concern for you at all?
Do most people working on x-risks believe the Doomsday Argument to be flawed?
If not, it seems to me that avoiding astronomical waste is also astronomically unlikely, which would reduce x-risk reduction to a moderately important issue for humanity at best. From an individual perspective (or an altruistic perspective with future discounting), should we perhaps focus on having a good time before the inevitable doom? What am I missing?
Suppose we ignore the simulation argument and take the evidence of history and astronomy at face value. The doomsday argument provides a good prior. However, the evidence showing that we are on an early Earth is really strong, and the prior gets updated away. If we take the simulation hypothesis into account, then there could be one version of us in reality and many in simulations. The relative balance of preventing x-risk vs. having a good time shifts, but it still strongly favors caring about x-risk. Actually, the doomsday argument assigns probability 0 to the hypothesis that infinitely many people will exist while only finitely many have existed so far, so I don’t think I believe it.
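To make that last point concrete, here is a minimal sketch of the update the doomsday argument relies on, assuming the usual self-sampling-style likelihood P(birth rank = n | N people ever) = 1/N; the numbers are purely illustrative:

```python
# Illustrative only: under a self-sampling likelihood
# P(birth rank = n | N people ever) = 1/N, hypotheses with ever more
# total people get ever less likelihood, and "infinitely many people"
# gets likelihood 0 for any finite birth rank n.
n = 1e11  # assumed birth rank: rough order of magnitude of humans born so far

for N in [1e11, 1e12, 1e15, float("inf")]:  # candidate totals of humans ever
    likelihood = 1.0 / N if N >= n else 0.0
    print(f"N = {N:.0e}: P(rank = n | N) = {likelihood:.2e}")
```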
People are bad at interpreting the Doomsday Argument because people are bad at treating evidence as Bayesian evidence rather than as a direct statement of the correct belief.
The Doomsday Argument is evidence that we should update on. But it is not a direct statement of the correct belief.
A parable:
On a parallel earth, humanity is on the decline. Some disaster has struck, and the once-billions of proud humanity have been reduced to a few scattered thousands. Now the last exiles of civilization hide in sealed habitats that they no longer have the supply chains to repair, and they know that soon enough the end will come for them too. But on the other hand, the philosophers among them remark, at least there’s the Doomsday Argument, which says that on average we should expect to be in the middle of humanity. So if the DA is right, the current crisis is merely a bottleneck in the middle of humanity’s time, and everything will probably work itself out any day now. The last philosopher dies after breathing in contaminated air, with the last words “No! The position I occupy is… very unlikely!”
Moral:
Your eyes and ears also provide you evidence about the expected span of humanity.
But isn’t the point of the Doomsday Argument that we’ll need very very VERY strong evidence to the contrary to have any confidence that we’re not doomed? Perhaps we should focus on drastically controlling future population growth to better our chances of prolonged survival?
To believe that you’re a one in a million case (e.g. in the first or last millionth of all humans), you need 20 bits of information (because 2^20 is about 1000000).
So on the one hand, 20 bits can be hard to get if the topic is hard to get reliable information about. But we regularly get more than 20 bits of information about all sorts of questions (reading this comment has probably given you more than 20 bits of information). So how hard this should “feel” depends heavily on how well we can translate our observational data into information about the future of humanity.
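For reference, the arithmetic behind the “20 bits” figure, as a quick sanity check:

```python
import math

# 2**20 = 1,048,576, so distinguishing a one-in-a-million position
# takes about log2(1,000,000) ≈ 19.93 bits of evidence.
print(2**20)                 # 1048576
print(math.log2(1_000_000))  # 19.93...
```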
Extra note: In the case where there are infinitely many humans, this uniform prior actually breaks down (or else, naively, you’d think you have a 0.0% chance of being anyone at all), so there can be a finite contribution from the possibility that there are infinitely many people.
I’d like to see someone explore the apparent contradiction in more detail. Even if I were convinced that we will almost certainly fail, I might still prioritize x-risk reduction, since the stakes are so high.
Anyhow, my guess is that most people think the doomsday argument probably doesn’t work. I am not sure myself. If it does work though, its conclusion is not that we will all go extinct soon, but rather that ancestor simulations are one of the main uses of cosmic resources.
If ancestor simulations are one of the main uses of cosmic resources, we probably will go extinct soon (somewhat depending on how you define extinction), because we’re probably in an ancestor simulation that will be turned off. If the simulators were to keep us alive for billions of years, it would be pretty unlikely that we didn’t find ourselves living in those billions of years, by the same logic as the doomsday argument.
Yeah. It depends on how you define extinction. I agree that most simulations don’t last very long. (You don’t even need the doomsday argument to get that conclusion, I think)
In this case, it isn’t so much that “stakes are high and chances are low, so they might cancel out”; rather, there is an exact inverse proportionality between the stakes and the chances, because the Doomsday Argument operates directly through the number of observers.
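A toy model of that cancellation, assuming the doomsday-style 1/N likelihood and stakes proportional to N (the prior and the numbers are made up for illustration):

```python
# Toy model: the doomsday update multiplies P(N people ever) by 1/N, while
# the value of averting extinction scales roughly with N, so each hypothesis's
# contribution to the expected stakes, posterior(N) * N, ends up proportional
# to its prior probability rather than to N.
prior = {1e11: 0.5, 1e14: 0.5}  # hypothetical prior over total population N

unnormalized = {N: p / N for N, p in prior.items()}  # doomsday-style update
Z = sum(unnormalized.values())
posterior = {N: w / Z for N, w in unnormalized.items()}

contributions = {N: posterior[N] * N for N in posterior}
print(posterior)      # almost all posterior mass lands on the small-N world...
print(contributions)  # ...but each world's contribution to the stakes is the same
```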
I feel like being in a simulation is just as terrible a predicament as doom soon; given all the horrible things that happen in our world, the simulators are clearly UnFriendly, and they could easily turn off the simulation or thwart our efforts at creating an AI. Basically, we’re already living in a post-Singularity dystopia, so it’s too late to work on it.
I have a much harder time accepting the Simulation Hypothesis, though, because there are so many alternative philosophical considerations that could be pursued. Maybe we are (I am) Boltzmann brains. Maybe we live in an inflationary universe that expands 10^37-fold every second. Maybe minds do not need instantiation, or maybe anything, even a rock, could count as an instantiation. Etc.
Going one meta level up, I can’t help but feel like a hypocrite for lamenting the general public’s lack of attention to the intelligence explosion and x-risks while failing to seriously consider all these other big, weird philosophical ideas. Are we (the rationalist community) doing the same as people outside it, just with a slightly shifted Overton window? When is it OK to sweep ideas under the rug and throw our hands up in the air?
re: inverse proportionality: Good point, I’ll have to think about that more. Maybe it does neatly cancel out; or even worse, since my utility function isn’t linear in happy lives lived, maybe it more than cancels out.
I for one have seriously investigated all those weird philosophical ideas you mentioned. ;) And I think our community has been pretty good about taking these ideas seriously, especially compared to, well, literally every other community, including academic philosophy. Our Overton window definitely includes all these ideas, I’d say.
But I agree with your general point that there is a tension we should explore. Even if we are OK seriously discussing these ideas, we often don’t actually live by them. Our Overton window includes them, but our median opinion doesn’t. Why not?
I think there is a good answer, and it has to do with humility/caution. Philosophy is weird. If you follow every argument where it leads you, you very quickly find that your beliefs don’t add up to normality, or anything close. Faith that beliefs will (approximately) add up to normality seems to be important for staying sane and productive, and moreover, seems to have been vindicated often in the past: crazy-sounding arguments turn out to have flaws in them, or maybe they work but there is an additional argument we hadn’t considered that combines with them to add up to normality.
I agree that LessWrong is probably the place where crazy philosophical ideas are given the most serious consideration; elsewhere they’re usually just mentioned as mind-blowing trivia at dinner parties, if at all. I think there are two reasons why these ideas are so troubling:
1. They are big: failing to take account of even one of them will derail one’s worldview completely.
2. Being humble and not taking an explicit position still effectively amounts to taking the default position.
But alas, I guess that’s just the epistemological reality we live in. We’ll just have to make working assumptions and carry on.
Agreed. If you want to talk more about these ideas sometime, I’d be happy to video chat!
Thank you for the offer; however, I’m currently reluctant to interact in this way with people I’ve met on the internet. But know that your openness and willingness to engage are greatly appreciated :)
There is uncertainty about whether the DA is valid or not. Around 40 percent of the scientists who have analysed it think that some version of the DA is true, and if we treat this as a prediction market, it is a 40 percent bet. So there is a 60 percent chance that the DA is not valid, and thus we should continue to work on x-risk prevention.
Also, it is possible to cheat the DA if we precommit to forgetting our position number in the future (maybe by creating enough simulations of the early past).
The doomsday argument strikes me as complete and utter misguided bullshit, notwithstanding the fact that smart and careful physicists have worked on it, including J. Richard Gott and Brandon Carter, whose work in actual physics I had used extensively in my research. There are plenty of good reasons for x-risk work, no need to invoke lousy ones. The main issue with the argument is the misuse of probability.
First, the argument assumes a specific distribution (usually uniform) a priori, without any justification. Indeed, one needs a probability distribution to meaningfully talk about probabilities, but there is no reason to pick one specific distribution over another until you have a useful reference class.
Second, the potentially infinite expectation value makes any conclusions from the argument moot.
Basically, the Doomsday argument has zero predictive power. Consider a set of civilizations with a fixed number of humans at any given time, each existing for a finite time T, randomly distributed with a distribution function f(T), which does not necessarily have a finite expectation value, standard deviation, or any other moments. Now, given a random person from a random civilization at time t, the Doomsday argument tells them that their civilization will exist for about as long as it has so far. It gives you no clue at all about the shape of f(t) beyond it being non-zero (though maybe measure zero) at t.
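A quick Monte Carlo sketch of that point; the choice of f(T) below is arbitrary, which is exactly the problem, since the prediction comes out the same no matter what you put there:

```python
import random

# Sketch: sample civilization lifetimes T from some arbitrary f(T), then place
# an observer uniformly at random within each lifetime. The Gott/Doomsday-style
# prediction "remaining time is comparable to elapsed time" then holds by
# construction of the uniform sampling, regardless of the shape of f(T).
random.seed(0)

def sample_lifetime():
    # Arbitrary heavy-tailed choice for f(T); swap in anything you like.
    return random.lognormvariate(mu=8.0, sigma=2.0)

trials = 100_000
hits = 0
for _ in range(trials):
    T = sample_lifetime()
    t = random.uniform(0.0, T)  # the observer's position within the lifetime
    remaining = T - t
    # Gott's 50% interval: remaining time between 1/3 and 3 times the elapsed time.
    if t / 3 <= remaining <= 3 * t:
        hits += 1

print(hits / trials)  # ≈ 0.5 no matter which f(T) was used above
```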
Now, shall we lay this nonsense to rest and focus on something productive?
Nitpick: I was arguing that the Doomsday Argument would actually discourage x-risks related work because “we’re doomed anyway”.
Right. Either way, it’s not a good argument to base one’s decisions on.
I confidently reject the Doomsday argument, so it doesn’t have any implications.