Hanson’s arguments here don’t apply to nation-states that start off with a large share of the world’s research resources and a stronger ability to keep secrets, as we saw with nuclear weapons. He has a quite separate argument against that, which is that governments are too stupid to notice early brain emulation or AI technology and to recognize that it is about to turn the world upside down.
What, even the NSA and IARPA, whose job it is to notice such things?
I don’t buy it at that level of confidence. Robin says the Manhattan Project was a wartime anomaly, and that past efforts to restrict the spread of technologies like encryption and supercomputers didn’t work for long (I’d say the cost-benefit case for secrecy there was much, much worse than it would be for human-level AI/WBE). My reply is that a delay of four years, like the one between the first US and Soviet nuclear tests, would be a long, long time for WBE or human-level AI to drive a local intelligence explosion. Software is easier to steal, but even so.
Your reply is focused on keeping secrets. I meant my comment to apply to the second claim, the one about governments being “too stupid”. That claim might be right, but it is not obvious. The government departments focused on this sort of thing (of which there are several) will understand, and no doubt already understand. The issue is more whether the lines of communication are open, whether the top military brass take their own boffins seriously, and whether they then go on to get approval from head office.
As for secrecy, the NSA has a long history of extreme secrecy. The main reason most people don’t know about their secret tech projects is that their secrecy is so good. If they develop a superintelligence, I figure it will be a secret one that will probably remain chained up in their basement. They are the main reason my graph already has some probability mass there.