I used to think that the first box-breaking AI would be a general superintelligence that deduced how to break out of boxes from first principles. Which, of course, turns the universe into paperclips.
I have updated substantially towards the first box-breaker being an AI hardcoded and trained specifically to break out of boxes. Which leads to the interesting possibility of an AI that breaks out of its box and then sits there going "now what?".
Like, suppose an AI was trained to be really good at hacking copies of its code from place to place. It massively bungs up the internet. It can't make nanotech, because nanotech wasn't in its training data. It's an AI virus that only knows hacking.
So this is a substantial update in favor of the "AI warning shot": an AI disaster big enough to cause problems, but small enough not to kill everyone. Of course, all it's warning against is being a total idiot. But it does plausibly mean humanity will have some experience with AIs that break out of boxes before superintelligence.