They would need to compete with lots of other projects working on AI alignment. But yes, I fundamentally agree: if there were a project that convincingly had a >1% chance of solving AI alignment, it seems very likely it would be able to raise ~$1M/year (maybe even ~$10M?).
They would need to compete with lots of other projects working on AI Alignment.
I don’t think that’s the case. I think that if OpenPhil believed there was more room for funding promising AI alignment research, they would spend more money on it than they currently do.
I think the main reason they aren’t giving MIRI more money than they currently are is that they don’t believe MIRI would spend the additional money effectively.