> Don’t have time to respond in detail but a few quick clarifications/responses:
Sure, don’t feel obligated to respond, and I invite the people disagree-voting my comments to hop in as well.
> There are lots of groups focused on comms/governance. MIRI is unique only insofar as it started off as a “technical research org” and has recently pivoted more toward comms/governance.
That’s fair; when you said “pretty much any other organization in the space”, I was thinking of technical orgs.
MIRI’s uniqueness does seem to suggest it has a comparative advantage for technical comms. Are there any organizations focused on that?
> by MIRI’s lights, getting policymakers to understand alignment issues would be more likely to result in alignment progress than having more conversations with people in the technical alignment space
By ‘alignment progress’ do you mean an increased rate of insights per year? Due to increased alignment funding?
Anyway, I don’t think you’re going to get “shut it all down” without either a warning shot or a congressional hearing.
If you just extrapolate trends, it wouldn’t particularly surprise me to see Alex Turner at a congressional hearing arguing against “shut it all down”. Big AI has an incentive to find the best witnesses it can, and Alex Turner seems to be getting steadily more annoyed. (As am I, fwiw.)
Again, extrapolating trends, I expect MIRI’s critics like Nora Belrose will increasingly shift from the “inside game” of trying to engage w/ MIRI directly to a more “outside game” strategy of explaining to outsiders why they don’t think MIRI is credible. After the US “shuts it down”, countries like the UAE (accused of sponsoring genocide in Sudan) will likely try to quietly scoop up US AI talent. If MIRI is considered discredited in the technical community, I expect many AI researchers to accept such offers rather than retooling their careers. Remember, a key mistake the board made in the OpenAI drama was underestimating the amount of leverage that individual AI researchers have, and not trying to gain mindshare with them.
Pause maximalism (by which I mean focusing 100% on getting a pause and not trying to speed alignment progress) only makes sense to me if we’re getting a ~complete ~indefinite pause. I’m not seeing a clear story for how that actually happens, absent a much broader doomer consensus. And if you’re not able to persuade your friends, you shouldn’t expect to persuade your enemies.
Right now I think MIRI only gets its stated objective in a world where a warning shot creates a broader doom consensus. In that world, it’s not clear advocacy makes a difference on the margin.