Also, when we’re talking about artificial intelligences, the time period between the point “They’re intelligent enough to have some sort of ethical value” and the point “They’re intelligent enough to totally dominate us” is most likely very short: I’d say less than 10 years; some would say less than 10 days.
No, I haven’t read the sequences. I will do that. The link might be better named, so that it indicates what it actually points to. But I didn’t say the AIs would be safe (or super-intelligent, for that matter), and I don’t assume they would be. But those who create them may assume that.
But I didn’t say the AIs would be safe (or super-intelligent, for that matter)
This sort of disclaimer can protect you in an armchair-philosophy discussion, whose sole purpose is to show off how smart you are. But if you were to actually build an AI, and it went FOOM and tiled the universe with molecular smiley faces, taking all humans apart in the process, the fact that you didn’t claim the AI would be safe would not compel the universe to say “that’s all right, then” and hit a magic reset button to give you another chance. That is why we ask the question “Is this AI safe?” and tend not to like ideas that yield a negative answer, even if the idea never claimed to address that concern.
You haven’t read the sequences, have you? The idea of using evolution to produce safe-enough superintelligences was demolished quite neatly there, for example here: http://lesswrong.com/lw/td/magical_categories/