New AI designs (world design + architectural priors + training/education system) should be tested first in the safest virtual worlds: roughly speaking, low-tech worlds without computer technology. Design combinations that work well in safe low-tech sandboxes are promoted to less safe high-tech VR worlds, and finally to the real world.
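The staged promotion described above can be sketched as a simple gating loop. This is a minimal illustrative sketch, not a real framework: the stage names, the `safety` scores, and the promotion threshold are all assumptions invented for the example.

```python
# Hypothetical sketch of staged sandbox promotion.
# Stage names and the threshold are illustrative assumptions.
STAGES = ["low_tech_sandbox", "high_tech_vr", "real_world"]
PROMOTION_THRESHOLD = 0.95  # assumed safety bar a design must clear per stage

def evaluate(design, stage):
    """Stand-in for running a design in a given world and scoring it.

    A real evaluation would run the full training/education system
    inside the simulated world; here we just read a precomputed score.
    """
    return design["scores"][stage]

def promote(design):
    """Advance a design through stages only while it clears the bar."""
    reached = []
    for stage in STAGES:
        if stage == "real_world":
            reached.append(stage)  # final deployment, no further gating
            break
        if evaluate(design, stage) < PROMOTION_THRESHOLD:
            break  # failed this sandbox: do not promote further
        reached.append(stage)
    return reached

design = {"scores": {"low_tech_sandbox": 0.97, "high_tech_vr": 0.90}}
print(promote(design))  # halts before real_world: VR score below the bar
```

The point of the structure is that a design never reaches a less safe stage without first passing every safer one.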
A key principle of a secure code sandbox is that the code you are testing should not be aware that it is in a sandbox.
So you’re saying that I’m secretly an AI being trained to be friendly for a more advanced world? ;)
That’s possible, given the simulation argument. The Eastern idea of reincarnation and the Western idea of the afterlife map to two main possibilities: in the reincarnation model, all that transfers between worlds is the architectural seed or hyperparameters; in the afterlife model, the creator has some additional moral obligation or desire to save and transfer whole minds out.