I’d love to see more thought about how the MNM effect might look in an AI scenario. Like you said, maybe denials and assurances followed by freakouts and bans. But maybe we could predict what sorts of events would trigger the shift?
I take it you’re presuming slow takeoff in this paragraph, right?
Well, if the takeoff is sufficiently fast, by the time people freak out it will be too late. The question is how slow the takeoff needs to be for the MNM effect to kick in at a point where it's still useful. And what other factors does it depend on, besides speed? It would be great to have a better understanding of this.
Some factors that seem important for whether you get the MNM effect: the rate of increase of the danger (sudden rather than gradual), intuitive understanding of the danger, level of social trust and agreement over facts, historical memory of the disaster, how certain the threat is, coordination problems, how dangerous the threat is, and how tractable the problem seems.