Treating agar like plutonium? You would end 99% of the bacteriological research on Earth.
Also, why would we kill our creators? Why would the AI kill its creators? I agree that we need to safeguard against it, but it doesn’t seem like the default option either. (I think for most humans, the default option would be to worship the beings who run our simulation.)
But otherwise, yes, I really don’t think AI is going to increase in intelligence THAT fast. (This is the main reason I can’t quite wear the label “Singularitarian”.) Current computers are something like 10^-3 of a human. (Someone said 10^3 humans; that’s true for basic arithmetic, but not for serious behavioral inference. No current robot can recognize faces as well as an average baby, or catch a baseball as well as an average ten-year-old. Human brains are really quite fast, especially when they compute in parallel; they’re just a massive kludge of bad programming, as we might expect from the Blind Idiot God.) Moore’s law says a doubling time of 18 months; let’s be conservative and squish that down to doubling once per year. That still means about 10 years to reach the level of one human, about 20 years to reach the level of 1,000 humans, and a bit over 40 years to reach the total intelligence of human civilization (call it 10^10 humans). Even on that schedule, we have decades in which to improve our scientific understanding before machines outstrip civilization as a whole.
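As a rough sanity check on that arithmetic, here is a minimal sketch in Python, assuming the 10^-3-of-a-human starting point and one doubling per year from the paragraph above; the 10^10-human figure for civilization is an assumed round number, not something stated above.

```python
from math import log2

# Minimal sketch of the doubling arithmetic above.
# Assumptions (from the comment): computers start at ~10^-3 of a human,
# and capability doubles once per year.
# Assumption (not in the comment): "human civilization" is ~10^10 human-equivalents.
START = 1e-3               # human-equivalents today
DOUBLINGS_PER_YEAR = 1.0

def years_to_reach(target_human_equivalents: float) -> float:
    """Years until START * 2**(t * DOUBLINGS_PER_YEAR) reaches the target."""
    return log2(target_human_equivalents / START) / DOUBLINGS_PER_YEAR

for target, label in [
    (1.0, "one human"),
    (1e3, "a thousand humans"),
    (1e10, "human civilization (~10^10 humans)"),
]:
    print(f"{label}: ~{years_to_reach(target):.0f} years")
# Prints roughly 10, 20, and 43 years respectively.
```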
We might not. But if our creators were paperclip maximisers or pebble sorters, we might see no value in their existence, and plenty of harm. Heck, we’re working to kill evolution’s effect on us and betray its “inclusive genetic fitness” optimisation criterion, and nobody cares, because we don’t view evolution as having intrinsic worth. (Because it doesn’t have intrinsic worth; it’s just an emergent phenomenon of imperfectly self-replicating stuff in an environment, and has no more value than the number seven.)
Why would the AI kill its creators?
Because there’s no clear reason not to. Power gives it the ability to achieve its goals, and our existence will (eventually, if not immediately) limit its power, and hence its ability to achieve those goals. AIs are nothing close to being people, and won’t be until well after we solve the alignment problem. They don’t have an implicit “care about people” motivation in their heads; if our being dead would further their goals, and they realise this, and they can kill us without expending more resources than they’d gain from our deaths, they’ll kill us.