Simulation Argument errors
I was reading the simulation argument again for kicks and a few errors tripped me up for a bit. I figured I’d point them out here in case anyone’s interested or noticed the same problems. If I made a mistake in my analysis please let me know so I can promptly put a bolded warning saying so at the top of this article. (I do not endorse the method of thinking that involves naive anthropics or probabilities of simulation but nonetheless I am willing to play with that framework sometimes for communication’s sake.)
We can reasonably argue that all three of the propositions at the end of section 4 of the paper, titled “The core of the simulation argument”, are false. Most human-level technological civilizations could survive to reach a posthuman stage (f_P = 0.9), want to run lots of ancestor-simulations (f_I = 0.9), and be able to do so; and yet there could conceivably be no observers with human-type experiences living in simulations (f_sim = 0). Why? Because not all human-level technological civilizations are human technological civilizations; it could easily be argued that most aren’t. Human technological civilizations could be part of the fraction of human-level technological civilizations that do not survive to reach a posthuman stage, or that survive but do not want to run lots of ancestor-simulations. Thus there would be no human ancestor-simulations even if there were many, many alien ancestor-simulations whose inhabitants do not share an observer-moment reference class with humans.
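The objection can be made concrete with a quick sketch of the paper’s section 4 formula for the fraction of simulated observers, f_sim = f_P·f_I·N̄ / (f_P·f_I·N̄ + 1), where N̄ is the average number of ancestor-simulations run by an interested civilization. The particular value of N̄ below is purely illustrative:

```python
def f_sim(f_p, f_i, n_bar):
    """Fraction of observers with human-type experiences that live in
    simulations, per the section 4 formula of the paper."""
    return (f_p * f_i * n_bar) / (f_p * f_i * n_bar + 1)

# Toy numbers from the objection above: most human-level civilizations
# reach posthumanity (f_p = 0.9), most of those want to run ancestor-
# simulations (f_i = 0.9), and each runs many (n_bar = 1e6 simulated
# populations per real one -- a hypothetical figure).
print(f_sim(0.9, 0.9, 1e6))  # very close to 1

# But if humans share no observer-moment reference class with alien
# ancestor-simulations, the only simulations that matter for *us* are
# human-run ones; if human civilization never runs any, the relevant
# n_bar is 0 and the fraction collapses:
print(f_sim(0.9, 0.9, 0))  # 0.0
```

So high values of f_P and f_I across human-level civilizations in general are compatible with f_sim = 0 for humans in particular.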
Nitpicking? Not quite. This forces us to change “fraction of all human-level technological civilizations that survive to reach a posthuman stage” to “probability of human civilization reaching a posthuman stage”, but then some of the discussion in the paper’s Interpretation section (section 6) sounds pretty weird, because it compares human civilization to other human-level civilizations. The equivocation on “posthuman” makes various other statements and passages in the original article false-ish or ambiguous, and these would need to be changed. f_sim should be changed to f_ancestor-sim as well; we might be in non-ancestor simulations. The fraction of us in ancestor-simulations is just one possible lower bound on the fraction of us in simulations generally. Luckily, besides that error, I think the paper mostly avoids insinuating that we are unlikely to be in a simulation if we are not in an ancestor-simulation.
The abstract of the paper differs from section 4 in using “human” instead of “human-level”: “This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.” Using the “descended from humans” definition of “posthuman”, the argument works; however, it is then not supported by section 4, which fails to restrict itself to human civilizations. Using the “very technologically advanced” definition of “posthuman”, the argument fails for the reasons given above. Either way the wording should be made clearer, especially considering that section 6 talks about posthuman civilizations that aren’t posthuman. The abstract also doesn’t match the conclusion, despite the similar structure.
The conclusion is worded more like section 4, and thus fails in the same way. Not only that, the conclusion says something really weird: “In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3).” I hope this isn’t implying that the credences should sum to 1, which would be absurd: the three propositions are not mutually exclusive. After making the corrections suggested above, it is easy to see that 90% confidence in each of (1), (2), and (3) is justifiable. (A vaguely plausible scenario to go with that: human civilization gets uFAIed; alien civilizations that don’t get uFAIed don’t waste time simulating their ancestors but instead simulate millions of possible sibling civilizations that got uFAIed, for reasons of acausal trade plus diminishing marginal utility functions or summat; and thus we’re in one of those sibling simulations while aliens try to compute our values, our game-theoretic trustworthiness, et cetera.)
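The point about credences not summing to 1 is easy to check: since (1), (2), and (3) are not mutually exclusive, a coherent agent can give high credence to all three at once. A toy joint distribution over possible worlds (the numbers are illustrative only, loosely matching the uFAI scenario above where all three hold together):

```python
# Hypothetical probabilities over which of propositions (1), (2), (3)
# hold in a world; worlds where all three hold carry most of the mass.
worlds = {
    (True, True, True): 0.8,    # e.g. the uFAI / sibling-simulation scenario
    (True, True, False): 0.05,
    (True, False, True): 0.05,
    (False, True, True): 0.05,
    (False, False, False): 0.05,
}

assert abs(sum(worlds.values()) - 1) < 1e-9  # world-probabilities sum to 1

# Marginal credence in each proposition:
for i, name in enumerate(["(1)", "(2)", "(3)"]):
    credence = sum(p for w, p in worlds.items() if w[i])
    print(name, credence)  # each marginal comes out to 0.9
```

The three marginals sum to 2.7 with no incoherence, since the propositions overlap; “apportion one’s credence roughly evenly” only makes sense as a statement about a partition, which (1), (2), and (3) are not.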
All that said, despite the current problems with the structure of its less important supporting arguments, the final sentence remains true: “Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation.”