I think raw intelligence, while important, is not the primary factor that explains why humanity-as-a-species is much more powerful than chimpanzees-as-a-species. Notably, humans were once much less powerful, in our hunter-gatherer days; it is only through the gradual accumulation of technology, knowledge, and culture that we now possess vast productive capacities that far outstrip those ancient powers.
Slightly relatedly, I think it’s possible that “causal inference is hard”. The idea is: once someone has worked something out, they can share it and people can pick it up easily, but it’s hard to figure the thing out to begin with—even with a lot of prior experience and efficient inference, most new inventions still need a lot of trial and error. Thus the reason the process of technology accumulation is gradual is, crudely, because causal inference is hard.
Even if this is true, one way things could still go badly is if most doom scenarios are locked behind a lot of hard trial and error but the easiest one isn’t. On the other hand, if causal inference really is hard and even the easiest doom scenario requires that kind of trial and error, then there could be meaningful safety benefits gained from censoring certain kinds of data.
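To put a toy number on the “easy to share, hard to figure out” intuition, here is a small sketch of my own. The design-space sizes and the brute-force random search standing in for invention are pure assumptions; the point is only the asymmetry: discovering the one working design out of N candidates takes on the order of N experiments, while sharing the answer afterwards takes about log2(N) bits.

```python
# Toy sketch with assumed numbers: trial-and-error discovery cost vs. the
# cost of sharing the result once it is known.
import math
import random


def trials_to_find(space_size, rng):
    """Guess random candidates until hitting the single working design."""
    target = rng.randrange(space_size)
    trials = 0
    while True:
        trials += 1
        if rng.randrange(space_size) == target:
            return trials


rng = random.Random(0)
for space_size in (1_000, 10_000, 100_000):
    runs = [trials_to_find(space_size, rng) for _ in range(20)]
    avg_trials = sum(runs) / len(runs)
    bits = math.ceil(math.log2(space_size))
    print(f"N={space_size:>7,}: ~{avg_trials:>9,.0f} trials to discover, "
          f"{bits:2d} bits to share")
```

The gap between the two columns only widens as the design space grows, which is the crude sense in which accumulation is gradual even when copying is nearly free.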
This is the part of the above AI doom scenario that struck me as least likely to be true.
Is diamondoid nanotechnology possible? Very likely it is, or something functionally equivalent is.
Can a sufficiently advanced superintelligence infer how to build it from scratch based solely on human data? Or will it need a large R&D center, with many, many robotic systems conducting experiments in parallel, to extract the required information about the specific details of physics in our actual universe, rather than the very slightly incorrect approximations a simulator will give you?
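As a toy illustration of why “very slightly incorrect” matters (my own sketch, with arbitrary numbers for the design-space size and the simulator’s error): if a simulator scores candidate designs with even a small error, its top-ranked design is rarely the one that actually works best, so the remaining information has to come from real experiments.

```python
# Toy sketch with assumed numbers: a simulator whose scores are only slightly
# off still fails to identify the truly best design most of the time.
import random

random.seed(0)

N_DESIGNS = 1000   # candidate designs (assumed)
SIM_ERROR = 0.05   # small gap between simulated and true quality (assumed)
TRIALS = 200       # repetitions of the thought experiment

hits = 0
for _ in range(TRIALS):
    true_quality = [random.random() for _ in range(N_DESIGNS)]
    simulated = [q + random.gauss(0, SIM_ERROR) for q in true_quality]
    best_true = max(range(N_DESIGNS), key=lambda i: true_quality[i])
    best_sim = max(range(N_DESIGNS), key=lambda i: simulated[i])
    hits += (best_true == best_sim)

print(f"Simulator's top pick was actually the best design in {hits}/{TRIALS} runs")
```

This doesn’t settle the question either way; it just gives a concrete handle on why a slightly-wrong model of physics might still leave a lot of experimental work to do.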
The ‘huge R&D center so big you can’t see the end of it’ is somewhat easier to regulate than the ‘invisible dust the AI assembles with clueless stooges’.
Any individual doomsday mechanism we can think of, I would agree, is not nearly as simple for an AGI to execute as Yudkowsky implies. But I do think it’s quite likely that there are mechanisms we’re simply unable to conceive of, even theoretically, that an AGI could, and one or more of those might actually be quite easy to carry out secretly and quickly. I wouldn’t call it guaranteed by any means, but intuitively this seems like the sort of thing on which raw cognitive power might have a significant bearing.
I agree. One frightening mechanism I thought of: OK, assume the AGI can’t craft the bioweapon or the nanotechnology killbots without collecting vast amounts of information through carefully selected and performed experiments (basically enormous complexes full of robotics). How does it get the resources it needs?
And the answer is: it scams humans into doing it. We have many examples of humans trusting someone they shouldn’t, even when the evidence that they shouldn’t was readily available.
Any “huge R&D center” constraint is trivialized in a future where agile, powerful robots are ubiquitous and an AGI can use them to build an underground lab in the middle of nowhere, using its superintelligence to stay undetectable in every way that is physically possible. An AGI will also be able to use robots and 3D printers to fabricate purpose-built machines that let it conduct billions of physical experiments a day. Sure, it would be harder to construct something like a massive particle accelerator, but 1) that isn’t needed to make killer nanobots, and 2) even that isn’t impossible for a sufficiently intelligent machine to create covertly and quickly.
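As a rough scale check (the experiment durations below are my own assumptions, nothing more), “billions of physical experiments a day” translates into roughly the following number of parallel rigs:

```python
# Back-of-envelope sketch: parallel rigs implied by a given daily experiment
# count, for a few assumed average experiment durations.
SECONDS_PER_DAY = 24 * 60 * 60
experiments_per_day = 1_000_000_000  # the "billions a day" figure

for seconds_per_experiment in (1, 60, 3600):  # assumed durations
    per_rig_per_day = SECONDS_PER_DAY / seconds_per_experiment
    rigs = experiments_per_day / per_rig_per_day
    print(f"{seconds_per_experiment:>5}s per experiment -> "
          f"~{rigs:>13,.0f} rigs running around the clock")
```

So the throughput is conceivable, but it implies somewhere between tens of thousands and tens of millions of rigs, depending on how fast each experiment is.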