The main problem I see that is relevant to infohazards is that they encourage a "Great Man Theory" of progress in science, which is basically false: even given vast disparities in ability, no one person or small group can single-handedly solve a scientific field or problem. And the culture of AI safety already has a bit of a problem with invoking the "Great Man Theory" too liberally.
I found other parts of the post a lot more convincing than this one, and I almost didn't read it because you highlighted this part. Thankfully I did!
Here are the headings:
Infohazards prevent seeking feedback and critical evaluation of work
Thinking your work is infohazardous leads to overrating novelty or power of your work
Infohazards assume an incorrect model of scientific progress
Infohazards prevent results from becoming common knowledge and impose significant frictions
Infohazards imply a lack of trust, but any solution will require trust
Infohazards amplify in-group and social status dynamics
Infohazards can be abused as tools of power
Infohazards fail the ‘skin in the game’ test