The difficulty here is that if the ASI/AGI assigns a tiny probability to being in a simulation, that probability can be outweighed by other tiny probabilities. For instance, the tiny probability that humanity, if spared, successfully fights back (say, by creating another ASI/AGI), or the tiny increase in other risks from leaving the resources humans need for survival untouched during the takeover. If sparing us means it takes a little longer to build a Dyson sphere, there’s an increased chance of being destroyed in the meantime by e.g. aliens, or even natural disasters like a nearby supernova. These counterarguments don’t apply if you expect the AGI/ASI to be capable of rapidly taking total control of our solar system’s resources regardless.
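To make the structure of the tradeoff concrete, here is a toy expected-value sketch. Every number in it is a hypothetical placeholder chosen only to show that the competing terms are tiny probabilities of the same rough order, not an estimate of any real quantity:

```python
# Toy expected-cost comparison of "kill humans" vs. "spare humans"
# from the ASI's perspective. All numbers are hypothetical placeholders.

p_simulation = 1e-6   # chance this is a simulation that punishes killing
sim_penalty = 1.0     # utility lost if punished for killing in a simulation

p_fight_back = 1e-6   # chance spared humans build a rival ASI
p_delay_risk = 1e-6   # extra risk (aliens, supernovae) from slower expansion
spare_cost = 1.0      # utility lost in each of those bad outcomes

# Expected cost of killing humans: possible simulation punishment.
ev_kill = p_simulation * sim_penalty

# Expected cost of sparing humans: rival-ASI risk plus delay risk.
ev_spare = (p_fight_back + p_delay_risk) * spare_cost

# The point of the passage: both sides are sums of tiny terms, so the
# simulation argument does not automatically dominate the calculation.
print(ev_kill, ev_spare)
```

The conclusion flips with trivial changes to the placeholder numbers, which is exactly why the passage argues the simulation consideration gives no robust protection.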
Interestingly, that suggests the ASI might be more likely to spare us the more powerful it is. Perhaps trying to box it (or more generally curtail its capabilities/influence) really is a bad move after all?
Possibly, but I think that’s the wrong lesson. After all, there’s at least a tiny chance we succeed at boxing! Don’t put too much stock in “Pascal’s mugging”-style reasoning, and don’t try to play 4-dimensional chess as a mere mortal :)