I think very fast UFAI is unlikely, so I tend to worry about the rest of the bottleneck. Slow AI* has its own dangers and is not a genie I would want to let out of the bottle unless I really needed it. Even if the first Slow AI is Friendly, that doesn't guarantee the next 1000 will be; it depends on the interaction between the AI and the society that makes it.
Not that I expect to code it all myself. I really should be thinking about setting up an institution to develop and hide the information in such a way that it is distributed but doesn't leak. The time to release the information/code would be when there had been a non-trivial depopulation of Earth and it was having trouble re-forming an industrial society (or at some other time when industrial Earth was in danger). The reason not to release it straight away would be the hope of gaining a better understanding of the future trajectory of the Slow AIs.
There might be an argument for releasing the information if we could show we would never get a better understanding of the future of the Slow AIs.
*By Slow AI I mean an AI that has about as much likelihood of Fooming as unenhanced humans do, due to sharing similar organization and limitations of intelligence.