...as for the 3rd last paragraph, yes, once a 2008 AGI has the ability to contact 2008 humans, humanity is doomed if the AGI deems fit.
But I don’t see why a 2050 world couldn’t simply use quantum-encrypted communications, monitored for AGI activity. And monitor supercomputing applications.
Even the specific method by which the AGI gets protein nanorobots might be flawed in a world already ravaged by designer-pandemic terrorist attacks. All chemists (and members of other 2050 WMD-capable professions) are likely to be monitored with RF tags. All labs, even the kind of at-home PCR biochemistry possible today, are likely to be monitored. Maybe there are other methods by which a Bayesian AGI could escape (such as?). Wouldn’t X-raying mail for beakers, and treating the growth medium agar the way plutonium is now treated, suffice?
Communications-jamming equipment distributed uniformly throughout Earth might permanently box an AGI that somehow (magic?!) escapes supercomputing-application screening. If the AGI needs computer hardware/software made in the next two or three decades, it might be unstoppable. Beyond that, humans will already be using the kind of hardware an AGI requires to commission WMDs, and the muscular NSA of 2050 will already be attentive to such phenomena.
Treating agar like plutonium? You would end 99% of the bacteriological research on Earth.
Also, why would we kill our creators? Why would the AI kill its creators? I agree that we need to safeguard against it, but it doesn’t seem like the default option either. (I think for most humans, the default option would be to worship the beings who run our simulation.)
But otherwise, yes, I really don’t think AI is going to increase in intelligence THAT fast. (This is the main reason I can’t quite wear the label “Singularitarian”.) Current computers are something like 10^-3 of a human. (Someone said 10^3 humans; that’s true for basic arithmetic, but not for serious behavioral inference. No current robot can recognize faces as well as an average baby, or catch a baseball as well as an average ten-year-old. Human brains are really quite fast, especially when they compute in parallel; they’re just a massive kludge of bad programming, as we might expect from the Blind Idiot God.) Moore’s law says a doubling time of 18 months; let’s be conservative and squish it down to doubling once per year. That still means it will take 10 years to reach the level of one human, 20 years to reach the level of 1000 humans, and 1000 years to reach the total intelligence of human civilization. By then, we will have had time to improve our scientific understanding by a factor comparable to the improvement required to get from the Middle Ages to today.
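The first two crossover points above follow directly from the stated assumptions (computers start at 10^-3 of a human, capability doubles once per year). A minimal sketch of that back-of-envelope arithmetic, with the starting point and doubling rate taken as assumptions from the comment:

```python
import math

# Assumptions from the comment: computers today are ~10^-3 of a human,
# and capability doubles once per year.
START = 1e-3  # human-equivalents at year zero

def years_to_reach(target_humans, start=START):
    """Years of annual doubling needed to go from `start` to `target_humans`.

    Solves start * 2**n >= target_humans for the smallest integer n.
    """
    return math.ceil(math.log2(target_humans / start))

print(years_to_reach(1))     # one human-equivalent: 2^10 ≈ 1000, so 10 years
print(years_to_reach(1000))  # a thousand human-equivalents: 20 years
```

This confirms the 10-year and 20-year figures; each further factor of 1000 costs only ten more doublings under the same assumption.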
We might not. But if they were paperclip maximisers or pebble sorters, we might not see any value in their existence, and lots of harm. Heck, we’re working to kill evolution’s effect on us, and betray its “inclusive genetic fitness” optimisation criterion, and nobody cares, because we don’t view it as having intrinsic worth. (Because it doesn’t have intrinsic worth; it’s just an emergent phenomenon of imperfectly-self-replicating stuff in an environment, and has no more value than the number seven.)
Why would the AI kill its creators?
Because there’s no clear reason not to. Power gives it the ability to achieve its goals, and our continued existence will (eventually, if not immediately) limit its power, and hence its ability to achieve its goals. AIs are nothing close to being people, and won’t be until well after we solve the alignment problem. They don’t have an implicit “care about people” motivation in their heads; if our all being dead would further their goals, and they realise this, and they can kill us without expending more resources than they’d gain from our being dead, they’ll kill us.