I’d like to see someone explore the apparent contradiction in more detail. Even if I were convinced that we will almost certainly fail, I might still prioritize x-risk reduction, since the stakes are so high.
Anyhow, my guess is that most people think the doomsday argument probably doesn’t work. I am not sure myself. If it does work though, its conclusion is not that we will all go extinct soon, but rather that ancestor simulations are one of the main uses of cosmic resources.
If ancestor simulations are one of the main uses of cosmic resources, we probably will go extinct soon (somewhat depending on how you define extinction), because we’re probably in an ancestor simulation that will be turned off. If the simulators were to keep us alive for billions of years, it would be pretty unlikely, by the same logic as the doomsday argument, that we would find ourselves this early rather than somewhere in those billions of years.
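As a toy illustration of that anthropic point (made-up numbers, and assuming observers are sampled uniformly over the simulation’s runtime, which is a strong simplifying assumption):

```python
# Toy anthropic update: if the simulation runs for a billion years,
# how likely is a randomly sampled observer to find themselves this early?
# (Uniform sampling over the runtime is a strong simplifying assumption.)

sim_years = 1e9   # hypothetical total runtime of the simulation
our_era = 1e4     # rough span of human history observed so far

p_this_early = our_era / sim_years
print(p_this_early)  # 1e-05
```

So conditional on a long-running simulation, finding ourselves in this era would be a one-in-a-hundred-thousand coincidence, which is the doomsday-style evidence against long runtimes.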
Yeah. It depends on how you define extinction. I agree that most simulations don’t last very long. (You don’t even need the doomsday argument to get that conclusion, I think.)
Even if I were convinced that we will almost certainly fail, I might still prioritize x-risk reduction, since the stakes are so high.
In this case, it isn’t so much that “stakes are high and chances are low so they might cancel out”; rather, there is an exact inverse proportionality between the stakes and the chances, because the Doomsday Argument operates directly through the number of observers.
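The inverse proportionality can be sketched with toy numbers, assuming the Self-Sampling Assumption and treating the stakes as proportional to the number of future observers (all figures are illustrative):

```python
# Under the Self-Sampling Assumption, the likelihood of observing our
# birth rank, given N total observers ever, is proportional to 1/N.
# The stakes of preventing extinction scale with the observers yet to come,
# so likelihood x stakes stays bounded rather than growing with N.

rank = 1e11  # roughly 100 billion humans born so far (illustrative)

products = []
for n_total in [2e11, 1e12, 1e14, 1e18]:
    likelihood = 1.0 / n_total   # SSA likelihood of our birth rank
    stakes = n_total - rank      # future observers at stake
    products.append(likelihood * stakes)

print(products)  # approaches 1: huge stakes are offset by the 1/N update
```

However big N gets, the product never exceeds 1 in these units: the astronomical stakes of the long-future worlds are exactly eaten by the anthropic update against them.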
If it does work though, its conclusion is not that we will all go extinct soon, but rather that ancestor simulations are one of the main uses of cosmic resources.
I feel like being in a simulation is just as terrible a predicament as doom soon; given all the horrible things that happen in our world, the simulators are clearly unFriendly. They could easily turn off the simulation or thwart our efforts at creating an AI. Basically, we’re already living in a post-Singularity dystopia, so it’s too late to work on the problem.
I have a much harder time accepting the Simulation Hypothesis, though, because there are so many alternative philosophical considerations that could be pursued. Maybe we are (I am) Boltzmann brains. Maybe we live in an inflationary universe that expands 10^37-fold every second. Maybe minds do not need instantiation, or almost anything, like a rock, could count as an instantiation. Etc.
Going one meta level up, I can’t help but feel like a hypocrite for lamenting the lack of attention given to intelligence explosion and x-risks by the general public while failing to seriously consider all these other big, weird philosophical ideas. Are we (the rationalist community) doing the same as people outside it, just with a slightly shifted Overton window? When is it OK to sweep ideas under the rug and throw our hands up in the air?
re: inverse proportionality: Good point, I’ll have to think about that more. Maybe it does neatly cancel out, or even worse: since my utility function isn’t linear in happy lives lived, maybe it more than cancels out.
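One way to see how a non-linear utility could make it more than cancel: with a concave (say, logarithmic) utility in lives lived, the 1/N anthropic update shrinks faster than the utility grows (toy numbers again):

```python
import math

# With utility log(N) instead of N, likelihood (1/N) times utility gives
# log(N)/N, which decreases as N grows, so the Doomsday update more than
# offsets the larger stakes of long-future worlds.
values = [math.log(n) / n for n in (1e12, 1e14, 1e18)]
print(values)  # strictly decreasing in N
```

Under this (illustrative) utility, the biggest futures contribute the least expected value, not merely a bounded amount.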
I for one have seriously investigated all those weird philosophical ideas you mentioned. ;) And I think our community has been pretty good about taking these ideas seriously, especially compared to, well, literally every other community, including academic philosophy. Our Overton window definitely includes all these ideas, I’d say.
But I agree with your general point that there is a tension we should explore. Even if we are OK seriously discussing these ideas, we often don’t actually live by them. Our Overton window includes them, but our median opinion doesn’t. Why not?
I think there is a good answer, and it has to do with humility/caution. Philosophy is weird. If you follow every argument where it leads, you very quickly find that your beliefs don’t add up to normality, or anything close. Faith that beliefs will (approximately) add up to normality seems to be important for staying sane and productive, and moreover it seems to have been vindicated often in the past: crazy-sounding arguments turn out to have flaws in them, or they work but there is an additional argument we hadn’t considered that combines with them to add up to normality.
I agree that LessWrong is probably the place where crazy philosophical ideas are given the most serious consideration; elsewhere, they’re usually mentioned only as mind-blowing trivia at dinner parties, if at all. I think there are two reasons why these ideas are so troubling:
They are big. Failing to account for even one of them could derail one’s worldview completely.
Being humble and not taking an explicit position is still, in effect, just taking the default position.
But alas, I guess that’s just the epistemological reality we live in. We’ll just have to make working assumptions and carry on.
Agreed. If you want to talk more about these ideas sometime, I’d be happy to video chat!
Thank you for the offer; however, I’m currently reluctant to interact in this way with people I’ve met on the internet. But know that your openness and candor are greatly appreciated :)