What if someone proves that advanced AGI (or even some dumb but sophisticated AI) cannot be "contained" or reliably guaranteed to be "friendly"/"aligned"/etc. (whatever that may mean)? It could be something vaguely Gödelian, along the lines of "any sufficiently advanced system ..."
Those Gödelian-style arguments (like the no-free-lunch theorems) work much better in theory than in practice. We only need a reasonably high probability that the AI is contained, friendly, or aligned, not a worst-case guarantee...