My favorite problem with this entire thread is that it’s basically arguing that even the very first test cases will destroy us all. In reality, nobody puts in a grant application to construct an intelligent being inside a computer with the goal of creating 100 paperclips. They put in the grant to ‘dominate the stock market’, or ‘defend the nation’, or ‘cure death’. And if they don’t, then the Chinese government, which stole the code, will, or that Open Source initiative will, or some independent South African development effort will, because there are enormous incentives to do so.
At best, boxing an AI with trivial, pointless tasks only delays the more dangerous versions.
I like to think that Skynet got its start through creative interpretation of a goal like “ensure world peace”. ;-)