I agree that fearmongering is thin ice and can easily backfire, and that it must be done carefully and ethically. But is it worse than the alternative, in which people remain unaware of AGI-related risks? I don’t think anybody can say with certainty.
Agreed. We sail between Scylla and Charybdis: too much fear and too little are both dangerous, and it is difficult to tell how much is too much.
I had an earlier pro-fearmongering comment which, on further thought, I replaced with a repeat of my first comment (since there seems to be no “delete comment” option).
I want the people working on AI to be fearful, and careful. I don’t think I want the general public, or especially regulators, to be fearful, because ignorant meddling seems far more likely to do harm than good. If we survive this at all, it will likely be because of (a) the (fear-driven) care of AI researchers and (b) the watchfulness and criticism of knowledgeable skeptics who fear a runaway breakout. Corrective (b) is likely to disappear or become ineffective if the research is driven underground even a little.
Given that (b) is the only check on researchers who are insufficiently careful and working underground, I don’t want anything done to reduce its effectiveness. Even modest regulatory suppression of research, or demands for fully “safe” AI development (probably an impossibility), seem likely to make those funding and performing the research more secretive, less open, and less likely to be stopped or redirected in time by (b).
I think there is no safe path forward, only differing types and degrees of risk. We must steer between the rocks as best we can.