This seems like an information hazard, since it has the form: "this estimate for process X, which may destroy the value of the future, seems too low; also, surprisingly few people are currently studying X."
If X is some genetically engineered variety of smallpox, it seems clear that mentioning those facts is hazardous.
If the world didn't know about brain emulation, publicly calling it an under-scrutinized area would be dangerous, relative to, say, mentioning it only to a few x-risk-savvy neuroscientist friends who would go on to design safety protocols for it and, if possible, slow down progress in the field.
The same should be done in this case.
If the idea is obvious enough to AI researchers (evolutionary approaches are not uncommon; there are entire conferences dedicated to the sub-field), then avoiding discussion by Bostrom et al. doesn't reduce the information hazard. It just silences the voices of the x-risk savvy while evolutionary AI researchers march on, probably less aware of the risks of what they are doing than if the x-risk savvy kept discussing it.
So, to the extent that this idea is obvious or independently discoverable by AI researchers, that quieter approach should not be taken in this case.