I also wrote a (draft) text, “Catching treacherous turn,” in which I attempted to design the best possible AI box and to identify the conditions under which it would fail.
Obviously, we can’t box a superintelligence, but we could box an AI of roughly human level and prevent its self-improvement through many independent mechanisms. One of them is wiping its memory before each new task.
In the first text I created a model of the self-improvement process, and in the second I explore how self-improvement could be prevented based on this model.