Don’t have time to respond in detail but a few quick clarifications/responses:
— I expect policymakers to have the most relevant/important questions about policy and to be the target audience most relevant for enacting policies. Not solving technical alignment. (Though I do suspect that by MIRI’s lights, getting policymakers to understand alignment issues would be more likely to result in alignment progress than having more conversations with people in the technical alignment space.)
— There are lots of groups focused on comms/governance. MIRI is unique only insofar as it started off as a “technical research org” and has recently pivoted more toward comms/governance.
— I do agree that MIRI has had relatively low output for a group of its size/resources/intellectual caliber. I would love to see more output from MIRI in general. Insofar as it is constrained, I think they should be prioritizing “curious policy newcomers” over people like Matthew and Alex.
— Minor but I don’t think MIRI is getting “outargued” by those individuals and I think that frame is a bit too zero-sum.
— Controlling for overall level of output, I suspect I’m more excited than you about MIRI spending less time on LW and more time on comms/policy work with policy communities (e.g., Malo contributing to the Schumer insight forums, MIRI responding to government RFCs).
— My guess is we both agree that MIRI could be doing more on both fronts and just generally having higher output. My impression is they are working on this and have been focusing on hiring; if their output stayed relatively the same 3-6 months from now, I would be fairly disappointed.
> Don’t have time to respond in detail but a few quick clarifications/responses:
Sure, don’t feel obligated to respond, and I invite the people disagree-voting my comments to hop in as well.
> There are lots of groups focused on comms/governance. MIRI is unique only insofar as it started off as a “technical research org” and has recently pivoted more toward comms/governance.
That’s fair; when you said “pretty much any other organization in the space,” I was thinking of technical orgs.
MIRI’s uniqueness does seem to suggest it has a comparative advantage for technical comms. Are there any organizations focused on that?
> by MIRI’s lights, getting policymakers to understand alignment issues would be more likely to result in alignment progress than having more conversations with people in the technical alignment space
By “alignment progress” do you mean an increased rate of insights per year, e.g., due to increased alignment funding?
Anyway, I don’t think you’re going to get “shut it all down” without either a warning shot or a congressional hearing.
If you just extrapolate trends, it wouldn’t particularly surprise me to see Alex Turner at a congressional hearing arguing against “shut it all down”. Big AI has an incentive to find the best witnesses it can, and Alex Turner seems to be getting steadily more annoyed. (As am I, fwiw.)
Again, extrapolating trends, I expect MIRI’s critics like Nora Belrose will increasingly shift from the “inside game” of trying to engage w/ MIRI directly to a more “outside game” strategy of explaining to outsiders why they don’t think MIRI is credible. After the US “shuts it down”, countries like the UAE (accused of sponsoring genocide in Sudan) will likely try to quietly scoop up US AI talent. If MIRI is considered discredited in the technical community, I expect many AI researchers to accept that offer instead of retooling their career. Remember, a key mistake the board made in the OpenAI drama was underestimating the amount of leverage that individual AI researchers have, and not trying to gain mindshare with them.
Pause maximalism (by which I mean focusing 100% on getting a pause and not trying to speed alignment progress) only makes sense to me if we’re getting a ~complete ~indefinite pause. I’m not seeing a clear story for how that actually happens, absent a much broader doomer consensus. And if you’re not able to persuade your friends, you shouldn’t expect to persuade your enemies.
Right now I think MIRI only gets their stated objective in a world where we get a warning shot which creates a broader doom consensus. In that world it’s not clear advocacy makes a difference on the margin.