A thing that feels somewhat relevant here is the Dark Forest Theory of AI Mass Movements. New people keep showing up, seeing a Mass Movement Shaped Hole, and being like “Are y’all blind? Why are you not shouting from the rooftops to shut down AI everywhere and get everyone scared?”
And the answer is "well, I do think maybe LWers are biased against mainstream politics in some counterproductive ways, but there are a lot of genuine reasons to be wary of mass movements. They are dumb and hard to aim at exactly the right things, and we probably need some very specific solutions here in order to be helpful rather than anti-helpful or neutral-at-best. And political polarization could make this a lot harder to talk sanely about."
One of the downsides of mass-movement-shaped solutions is that they make it harder to engage in trades like the one you propose here.
There's a problem where AI is pretty obviously scary in a lot of ways, and a Mass Movement To Shut Down AI may happen to us whether we want it or not. And if x-risk professionals aren't involved trying to help steer it, it may be a much stupider, worse version of itself.
So, I don't know if it's actually tractable to make the trade of "avoid mass movements that are likely to drive the dial down" (at least in a legible-enough way to make such a trade possible).
It does seem more tractable to proactively drive up the dial in other targeted ways, and to be vocal about doing so (e.g. various x-risk-oriented grantmaking bodies also giving grants to other kinds of technical progress, lobbying to remove regulations that everyone agrees are bad, etc.).