What happens at that point depends a lot on the details of the lawbreaker’s creation. [ . . . ] It seems unlikely to me that the probability is zero for the sorts of qualities that would make such an AI agent dangerous.
Is your question directed at me, or the person I was replying to? I agree with the point “Sun is big, but...” makes. Here’s a link to a recent summary of my view on a plausible plan for the world to handle surviving AI. Please feel free to share your thoughts on it. https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy
Have you read The Sun is big, but superintelligences will not spare Earth a little sunlight?