If the idea is obvious enough to AI researchers (evolutionary approaches are not uncommon; entire conferences are dedicated to the sub-field), then avoiding discussion by Bostrom et al. doesn't reduce the information hazard. It just silences the voices of the x-risk-savvy while evolutionary AI researchers march on, probably less aware of the risks of what they are doing than if the x-risk-savvy kept discussing it.
So, to the extent this idea is obvious or independently discoverable by AI researchers, this approach should not be taken in this case.