Thanks for this and the subsequent comment, which helped me update my views on the problem and become even more cautious in discussing these things.
A few thoughts came to mind while reading; I may have more later:
1. It looks like all the talk about infohazards could be boiled down to a single thesis: “biorisk is a much more serious x-risk than AI risk, but we have decided not to acknowledge this, as doing so could be harmful”.
2. Almost all work in AI safety is based on “red-teaming”: someone comes up with an idea X for how to make AI safe, and EY appears and says “Actually, this will spectacularly fail because...”. However, the possibility that a future AI may read that thread of comments and act on the red-team advice is not considered, because the AI is assumed to be superintelligent and able to derive all our ideas from scratch.
3. The idea of infohazards rests on an assumed intellectual advantage of “EA people” over “bad people”: even an armchair futurist can come up with a dozen ideas for how to destroy the world, while professional scientists in some rogue country sit completely clueless and have to scour obscure forums for inspiration. From the outside, this could look like arrogance. But it could also be interpreted as evidence that we in fact live in a world where plausible ways to destroy it are easy to invent, which supports the idea that infohazards are oversaturated.
4. People who study x-risks are the most dangerous people in the world, as they actually know how to destroy it. Moreover, if a “bad agent” ever appears, he is more likely to be some deranged LW commenter than a North Korean officer.