Here’s a crazy[1] idea that I had. But I think it’s an interesting thought experiment.
What if we programmed an AGI that had the goal of simulating the Earth, but with one minor modification? In the simulation, we would have access to some kind of unfair advantage, like an early Eliezer Yudkowsky getting a mysterious message dropped on his desk containing a bunch of the progress we’ve made in AI Alignment.
So we’d all die in real life when the AGI broke out of its box and turned the Earth into compute to better simulate us, but we might survive in virtual reality, at least if you believe simulations to be conscious.
In other news, I may have just spoiled a short story I was thinking of writing.
#Chris’Phoenix #ForgetRokosBasilisk
[1] And probably not very good.