“We think MIRI is literally useless” is a decent reason not to fund MIRI at all, and is broadly consistent with Holden’s early thoughts on the matter. But it’s a weird reason to give MIRI $500K but OpenAI $30M. It’s possible that no one has the capacity to do direct work on the long-run AI alignment problem right now. In that case, backwards-chaining to how to build the capacity seems really important.
While I disagree with Holden that MIRI is near-useless, I think his stated reasons for giving MIRI $500K are ones I would act on myself if I had that money and shared his view of MIRI.
(Namely, that MIRI has so far had a lot of good impact regardless of the quality of its research — particularly in general community building — and that this should be rewarded so that other orgs are incentivized to do similar things.)