I don’t want to be provocative, but if there were political will to stop AGI research, it could probably be stalled for a long time. A pretty effective way to generate that political will, not only in the West but in China as well, might be to figure out a way to use a pre-AGI model to cause mayhem/harm that’s bad enough to get the world’s attention while not being apocalyptic.
As a random example, if AI were somehow used to take down the internet for a few days, the discourse and political urgency around AGI would change drastically. A close analogue is how quickly the world started caring about gain-of-function research after Covid.
I fear you might be right.
This is a dangerous road to tread, though perhaps an inevitable one.
Re: taboos in EA, I think it would be good if somebody who downvoted this comment said why.
I didn’t downvote this just because I disagree with it (that’s not how I downvote), but if I had to hazard a guess at why people might have downvoted, it’d be that some think it’s a ‘thermonuclear idea’.