Does the small group of people know that the danger is real, or do they just feel that the danger is real and happen to be right? I think that makes a huge difference. If they know that the danger is real, then they should just present their undeniable proof to the world. If it is truly undeniable proof, then other smart people will see it, be convinced, and convince politicians to act. The policy implications are clear: all AI research should be banned, and illicit AI research should be a serious crime. An international agency should be formed to conduct safe AI research in complete secrecy. If any countries feel like defecting and conducting their own research, then total war would be justified in order to stop them.
If they just feel like they are right and happen to be right, but don't actually know that they are right, then they should go ahead and present their best argument and see if they convince everyone. If only a few people are convinced, then they should attempt to extract as much money as possible from those people and settle down to a comfortable life of writing Harry Potter fan fiction and complaining about how hard it is to lose weight. Worrying about the problem won't solve anything. If they were truly heroic, they might try to build a small-scale UFAI that would eat a city and convince everyone that UFAI was a legitimate problem, but that seems difficult, and people wouldn't like them very much afterward. Alternatively, they could just wait around and hope that undeniable proof eventually appears, or that someone else builds a small-scale UFAI, or something. That's probably a better approach, because then instead of being hated as terrorists they would be revered as people who (accidentally) made a correct prediction. If proof never appears, or the first AI is bad enough to take over the world, then oh well: they'll just die like everyone else, or lead pointless lives like everyone else does. Maybe they will feel slightly more bitter than everyone else at the end, but that's life.