Nice post! I admit I myself underestimated the ferocity of the public lockdowns in March, and totally didn’t predict the R0=1 control system phenomenon. So I’m convinced.
I’d love to see more thought about how the MNM effect might look in an AI scenario. Like you said, maybe denials and assurances followed by freakouts and bans. But maybe we could predict what sorts of events would trigger the shift?
There’s a theory, which I endorse, that goes something like: “Change only happens in a crisis. The leaders and the people flail around, grab whatever policy solutions happen to be lying around in prestigious places, and implement them. So doing academic policy work can be surprisingly impactful; even if no one listens to you now, they might when it really matters.”
I take it you’re presuming slow takeoff in this paragraph, right?
Well, if the takeoff is sufficiently fast, by the time people freak out it will be too late. The question is: how slow does the takeoff need to be for the MNM effect to kick in at a point where it can still do some good? And what other factors does it depend on, besides speed? It would be great to have a better understanding of this.
Some factors that seem important for whether or not you get the MNM effect:
- rate of increase of the danger (sudden, not gradual)
- intuitive understanding of the danger
- level of social trust and agreement over facts
- historical memory of the disaster
- how certain the threat is
- coordination problems
- how dangerous the threat is
- how tractable the problem seems