Fearmongering may backfire, leading to research restrictions that push the work underground, where it proceeds with less care, less caution, and less public scrutiny.
Too much fear could doom us as easily as too little. With the money and potential strategic advantage at stake, AI could develop underground with insufficient caution and no public scrutiny. We wouldn't know we're dead until the AI breaks out and is already in full control.
All things considered, I'd rather the work proceed in the relatively open way it's going now.
I agree that fearmongering is thin ice that can easily backfire, and that it must be done carefully and ethically. But is it worse than the alternative, in which people are unaware of AGI-related risks? I don't think anybody can say with certainty.
Agreed. We sail between Scylla and Charybdis: both too much fear and too little are dangerous, and it is difficult to tell how much is too much.
I had an earlier pro-fearmongering comment which, on further thought, I replaced with a repeat of my first comment (since there seems to be no “delete comment”).
I want the people working on AI to be fearful, and careful. I don't think I want the general public, or especially regulators, to be fearful, because ignorant meddling seems far more likely to do harm than good. If we survive this at all, it will likely be because of (a) the (fear-driven) care of AI researchers and (b) the watchfulness and criticism of knowledgeable skeptics who fear a runaway breakout. Corrective (b) is likely to disappear or become ineffective if the research is driven underground even a tiny bit.
Given that (b) is the only check on researchers who are insufficiently careful and working underground, I don't want anything done to reduce the effectiveness of (b). Even modest regulatory suppression of research, or demands for fully "safe" AI development (probably an impossibility), seem likely to make those funding and performing the research more secretive, less open, and less likely to be stopped or redirected in time by (b).
I think there is no safe path forward. Only differing types and degrees of risk. We must steer between the rocks the best we can.
Fearmongering may backfire, leading to research restrictions that push the work underground, where it proceeds with less care, less caution, and less public scrutiny.
Too much fear could doom us as easily as too little. With the money and potential strategic advantage at stake, AI could develop underground with insufficient caution and no public scrutiny. We wouldn't know we're dead until the AI breaks out and is already in full control.
All things considered, I'd rather the work proceed in the relatively open way it's going now.
A movie or two would be fine, and might do some good if well done. But in general: be careful what you wish for.