One of my main problems regarding risks from AI is that I do not see anything right now that would hint at the possibility of FOOM.
I see foom as a completely separate argument from FAI or AGI or extinction risks. Certainly it would make things more chaotic and difficult to handle, increasing risk and uncertainty, but it’s completely unnecessary for chaos, risk, and destruction to occur—humans are quite capable of that on their own.
Once an AGI is “out there” and starts getting copied (assuming no foom), I want to make sure they’re all pointed in the right direction, regardless of capabilities, just as I want that for nuclear and other weapons. I think there’s a possibility we’ll be arguing over the politics of enemy states getting an AGI. That doesn’t seem to be a promising future. FAI is arms control, and a whole lot more.
Once an AGI is “out there” and starts getting copied...
I do not see that. The first AGI will likely be orders of magnitude slower (not less intelligent) than a standard human and will run on some specialized computational substrate (a supercomputer). If you remove FOOM from the equation, then I see many other existential risks as being as dangerous as AI-associated risks.
Again, a point-in-time view. Maybe you’re just not playing it out in your head like I am? Because when you say, “the first AGI will likely be orders of magnitude slower”, I think to myself, uh, who cares? What about the one built three years later that’s 3x faster and runs on a microcomputer? Does the first one being slow somehow make that other one less dangerous? Or that no one else will build one? Or that AGI theory will stagnate after the first artificial mind goes online? (?!?!)
Why does it have to happen ‘in one day’ for it to be dangerous? It could take a hundred years, and still be orders of magnitude more dangerous than any other known existential risk.
Does the first one being slow somehow make that other one less dangerous?
Yes, because I believe that development will be gradual enough to tackle any risks on the way to a superhuman AGI, if superhuman capability is possible at all. There are certain limitations. Shortly after the invention of rocket science, people landed on the moon. But development eventually halted or slowed down; we haven’t reached other star systems yet. With that metaphor I want to highlight that I am not aware of good arguments or other kinds of evidence indicating that an AGI would likely result in a runaway risk at any point in its development. It is possible, but I am not sure that because of its low probability we can reasonably neglect other existential risks. I believe that once we know how to create artificial intelligence capable of learning on a human level, our comprehension of its associated risks and our ability to limit its scope will have increased dramatically as well.
You’re using a different definition of AI than me. I’m thinking of ‘a mind running on a computer’ and you’re apparently thinking of ‘a human-like mind running on a computer’, where ‘human-like’ includes a lot of baggage about ‘what it means to be a mind’ or ‘what it takes to have a mind’.
I think any AI built from scratch will be a complete alien, and we won’t know just how alien until it starts doing stuff for reasons we’re incapable of understanding. And history has shown that the more sophisticated and complex the program, the more bugs it has, and the more it goes wrong in weird, subtle ways. Most such programs don’t have will, intent, or the ability to converse with you, making them substantially less likely than an AI to run away.
And again, you’re positing that people will understand, accept, and put limits in place, when there are substantial incentives to let it run as free and as fast as possible.
Sorry, I meant human-level learning capability when I said human-like.