I’m not sure that thinking about how to restrict interaction with a super-AI is an effective way to address its potential risks, even if some restrictions might work (and it is not at all clear that such restrictions are possible). Humans tend not to leave capability on the table where there’s competitive advantage to be had, so it’s predictable that even in a world that starts with AIs in secure boxes, there will be a race toward less security to extract more value.