What do you see as the most important messages to spread to (a) the public and (b) policymakers?
That’s a great question, and one I’d prefer to address more comprehensively in a separate post. I should admit up front that the post may not be imminent: we’re currently hard at work on getting the messaging right, and that’s not a quick process.
What mistakes do you think MIRI has made in the last 6 months?
Huh, I don’t have a list prepared, and I’m not entirely sure where to draw the line between what’s interesting to discuss and what isn’t; it also often takes some time to develop strong intuitions about what actually turned out to be a mistake. Do you have any candidates for the list in mind?
Does MIRI also plan to get involved in policy discussions (e.g. communicating directly with policymakers, and/or advocating for specific policies)?
We are limited in our ability to directly influence policy by our 501(c)(3) status; that said, we do have some latitude there, and we are exercising it within the limits of the law. See, for example, this tweet by Eliezer.
Does MIRI need any help? (Or perhaps more precisely “Does MIRI need any help from the right kind of person with the right kind of skills, and if so, what would that person or those skills look like?”)
Yes, I expect to be hiring in the comms department relatively soon but have not actually posted any job listings yet. I will post to LessWrong about it when I do.
That said, I’d encourage folks who think they have useful background or skills, and who would be excited to work at MIRI, to reach out and let us know they exist, or to pitch us on why they’d be a good addition to the team.
Thanks for this response! I didn’t have any candidate mistakes in mind. After thinking about it for a couple of minutes, here are some possible candidates (though it’s not clear to me that any of them were actually mistakes):
Eliezer’s TIME article explicitly mentions that states should “be willing to destroy a rogue datacenter by airstrike.” On one hand, it seems important to clearly and directly communicate the kinds of actions Eliezer believes the world will need to be willing to take in order to keep us safe. On the other hand, I think this particular phrasing may have made the point easier to critique or meme. On net, it’s unclear to me whether this was a mistake, but it seems plausible that the sentence could have been rephrased in a way that maintains clarity while reducing the potential for misinterpretation or deliberate attack.
I sometimes wonder if MIRI should have a policy division that is explicitly trying to influence policymakers. It seems like we have a particularly unique window over the next (few? several?) months, in which Congress is unusually motivated to educate itself about AI. Until recently, my understanding is that virtually no one was advocating from a MIRI perspective (e.g., alignment is difficult, we can’t just rely on dangerous-capability evaluations, we need a global moratorium and the infrastructure required for it). I think the Center for AI Policy is now trying to fill the gap, and it seems plausible to me that MIRI should either start its own policy team or assume more of a leadership role at the Center for AI Policy: being directly involved in hiring, having people in person in DC, helping shape strategy and tactics, and so on. (This is, of course, conditional on the current CAIP CEO being open to it, and I suspect he would be.)
Perhaps MIRI should be critiquing some of the existing policy proposals. I think MIRI has played an important role in “breaking” alignment proposals (i.e., raising awareness of the limitations of many alignment proposals, offering counterarguments, and explaining failure modes). Maybe MIRI should be “breaking” some of the policy proposals, or finding other ways to directly improve how policymakers reason about AI risk.
Again, it seems plausible to me that none of these were mistakes; I’d put them more in the “hm… this is unclear to me but seems worth considering” category.
Re: the wording about airstrikes in TIME: yeah, we did not anticipate how that was going to be received, and had we realized, we likely would have wordsmithed it a bit more to make the meaning clearer. I’m comfortable calling that a mistake. (I was not yet employed at MIRI at the time, but I was involved in editing the draft of the op-ed, so it’s at least as much on me as on anybody else who was involved.)
Re: a policy division: we are limited by our 501(c)(3) status in how much of our budget we can spend on policy work, and here ‘budget’ includes the time of salaried employees. Malo and Eliezer both spend some fraction of their time on policy, but I view it as unlikely that we’ll spin up a whole ‘division’ for it. Instead, yes, we partner with and provide technical advice to CAIP and other allied organizations. I don’t view failure-to-start-a-policy-division as a mistake; in fact, I think we’re using our resources fairly well here.
Re: critiquing existing policy proposals: there is undoubtedly more we could do here, though I lean more in the direction of ‘let’s say what we think would be almost good enough’ rather than simply critiquing what’s wrong with other proposals.
Thanks for this; seems reasonable to me. One quick note: my impression is that it’s fairly easy to set up a 501(c)(4). So even if [the formal institution known as MIRI] has limits, I think MIRI could start a “sister org” that de facto serves as its policy arm. (I believe this is accepted practice, and lots of orgs have sister policy orgs.)
(This doesn’t matter right now, insofar as you don’t think it would be an efficient allocation of resources to spin up a whole policy division. I’m just pointing it out in case your beliefs change and the 501(c)(3) constraint felt like the limiting factor.)
To expand a bit on my earlier answer about policy engagement: I and a couple of others at MIRI have been spending some time syncing up and strategizing with people and orgs who are more directly focused on policy work themselves. We’ve also spent some time chatting with folks in government whom we already know and have good relationships with. I expect we’ll continue to do a decent amount of this going forward.
It’s much less clear to me that it makes sense for us to directly engage in policy discussions with policymakers as an important focus of ours (compared to focusing on broad public comms), given that this is pretty far outside our area of expertise. It’s definitely something I’m interested in exploring, though, and in chatting about with folks who have expertise in the space.