Upcoming AGI x-risk upweights the simulation hypothesis for me because...
Of all the people’s lives that exist and have existed, what are the chances I’m living one of the most prosperous lives in all of humanity, only to descend into facing the upcoming rapture of the entire world? Sounds like a video game / choose-your-own-adventure from another life...
Is there a more charitable interpretation of this line of thinking than “My soul selected this particular body out of all available”?
You, being you as you are, are a product of your body developing in the circumstances it happened to develop in.
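To put rough numbers on that “what are the chances” intuition, here’s a back-of-the-envelope sketch (the population figures are rough published estimates, not anything from this thread):

```python
# Back-of-the-envelope anthropic arithmetic. Figures are rough published
# estimates (~117 billion humans ever born, ~8 billion alive today),
# used only for illustration.

ever_born = 117e9  # approx. humans ever born (Population Reference Bureau estimate)
alive_now = 8e9    # approx. humans alive today

p_alive_today = alive_now / ever_born
print(f"P(alive in the present era | random human ever born) ≈ {p_alive_today:.1%}")
# ≈ 6.8%: unlikely, but nowhere near lottery odds, so "what are the
# chances?" on its own is weak evidence of anything.
```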
Interestingly, J. Miller recently wrote on Twitter that if a person gives higher weight to AI risk, she should also give higher credence to the simulation hypothesis, since such a person believes there is a high chance of the appearance of a superintelligence capable of creating simulations.
Thanks for sharing this! It’s so interesting how multiple people start having similar thoughts when the environment is right. It seems the simulation hypothesis and AI Risk are inextricably linked, even if for no other purpose than conducting thought experiments that help us understand both better.
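To make that link concrete, here’s a toy sketch of Miller’s point (a minimal illustration under a crude self-sampling assumption; the p_simulated helper and every number in it are hypothetical, not anything Miller specified):

```python
# Toy model: if a simulation-capable superintelligence appears (prob. p_si)
# and chooses to run ancestor simulations (prob. p_runs), there are n_sims
# simulated copies of observers like us plus 1 "base" copy, so a random
# such observer is simulated with probability n_sims / (n_sims + 1);
# if no such superintelligence appears, that probability is 0.

def p_simulated(p_si: float, p_runs: float = 0.5, n_sims: int = 1000) -> float:
    p_sims_exist = p_si * p_runs
    return p_sims_exist * n_sims / (n_sims + 1)

for p_si in (0.1, 0.5, 0.9):
    print(f"P(superintelligence) = {p_si:.1f}  ->  P(simulated) ≈ {p_simulated(p_si):.2f}")
```

The specific numbers don’t matter; any model of this shape makes the credence you assign to being simulated scale directly with the credence you assign to simulation-capable superintelligence appearing.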
“There are many reasons future AI might choose to do this.”
Yeah, but almost all of them are because we taught them well. Sure, curiosity might push them to do it, but not with any significant amount of compute power.
Even unaligned AI will create past simulations in order to estimate the probability of different types of AI appearing, and thus predict which types of alien AIs it may meet in space or acausally trade with in the multiverse.
I don’t see the probability-estimation causality here—I don’t understand your priors if you’re updating this way. If we’re in a simulation, the fact that we’re making some progress on AI-like modeling doesn’t seem to DEPEND on being in that simulation. If we’re on the “outside”, and are actually in a “natural” universe, this kind of transformer doesn’t seem to provide any evidence on whether we can create full-fidelity simulations in the future.
The simulation hypothesis DEPENDS on the simulation being self-contained enough that there are no in-universe tests which can prove or disprove it, AND on being detailed enough to contain agents of sufficient complexity to wonder whether it’s a simulation. Neither of those requirements are informed by current technological advances or measurements.
Note: I currently think of the simulation hypothesis as similar to MWI in quantum mechanics—it’s a model that cannot be proven or disproven, and has zero impact on predicting future experiences of humans (or other in-universe intelligences).
“...this kind of transformer doesn’t seem to provide any evidence on whether we can create full-fidelity simulations in the future.”
My point wasn’t that WE would create full-fidelity simulations in the future. There’s a decent likelihood that WE will all be made extinct by AI. My point was that future AI might create full-fidelity simulations, long after we are gone.
“I currently think of the simulation hypothesis as similar to MWI in quantum mechanics—it’s a model that cannot be proven or disproven...”
Ironically, I believe many observable phenomena in quantum mechanics provide strong support (or what you might call “proof”) for the simulation hypothesis—or at least for the existence of a “deeper/information” level “under” the quantum level of our universe. Here’s a short, informal article I wrote about how one such phenomenon (wave function collapse) supports the idea of an information level (if not the entire simulation hypothesis).
[EDIT: The title of the article reflects how MWI needs a supplemental interpretation involving a “deeper/information” level. From this, you can infer my point.]
https://medium.com/@ameliajones3.14/a-deeper-world-supplement-to-the-many-worlds-interpretation-of-wave-function-collapse-54eccf4cad30
Also, the fact that something can’t currently be proven or disproven does not mean it isn’t true (or that it won’t be “proven” in the future). Such was initially the case for many theories, including general relativity and evolution through natural selection.