not just sitting on piles of cash because it would be “weird” to pay a Fields medalist 500k a year.
They literally paid Kmett $400k/year for years to work on some approach to explainable AI in Haskell.
I think people in this thread vastly overestimate how much money MIRI has (they have ~$10M; see the 990s and the donations page https://intelligence.org/topcontributors/), and underestimate how much top people would cost. I think the top 1% of earners in the US all make >$500k/year? Maybe if not the top 1%, then the top 0.5%?
Even Kmett (who is famous in the Haskell community, but is no Terence Tao) is almost certainly making way more than $500k now.
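For a rough sense of scale, here is a back-of-the-envelope sketch of how long ~$10M of reserves would last at salaries in this range. The 1.4x overhead multiplier (payroll taxes, benefits, admin) is my own assumption, not a figure from MIRI's filings:

```python
def runway_years(reserves, hires, salary, overhead=1.4):
    """Years the reserves last if `hires` people are each paid `salary`,
    with a rough overhead multiplier on top of base pay."""
    return reserves / (hires * salary * overhead)

RESERVES = 10_000_000  # ~$10M, per the 990s cited above

for salary in (500_000, 1_000_000):
    for hires in (1, 3, 5):
        print(f"{hires} hire(s) at ${salary:,}/yr: "
              f"~{runway_years(RESERVES, hires, salary):.1f} years")
```

Even a single $1M/year hire burns through reserves in about seven years under these assumptions, and a small team of such hires in one to three, which is why this would plausibly require dedicated donor money rather than the normal budget.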
From a rando outsider’s perspective, MIRI has not made any public indication that they are funding-constrained, particularly given that their donation page says explicitly that:
We’re not running a formal fundraiser this year but are participating in end-of-year matching events, including Giving Tuesday.
Which more or less sounds like “we don’t need any more money but if you want to give us some that’s cool”
I think people in this thread vastly overestimate how much money MIRI has, and underestimate how much top people would cost.
This implies that MIRI is very much funding-constrained, and unless you have elite talent, you should earn to give to organizations that will recruit those with elite talent. This applies to me and most people reading this, who are only around 2-4 sigmas above the mean.
I highly doubt most people reading this are “around 2-4 sigmas above the mean”, if that’s even a meaningful concept.
The choice between earning to give and direct work is definitely nontrivial though: there are many precedents of useful work done by “average” individuals, even in mathematics.
But I do get the feeling that MIRI thinks the relative value of hiring random expensive people would be <0, which seems consistent with how other groups trying to solve hard problems approach things. E.g. I don’t see Tesla paying billions to famous mathematicians/smart people to “solve self-driving”.
If MIRI wanted to hire someone like Terence Tao for a million dollars per year, they likely couldn't simply do that out of their normal budget. To do this, they would need to convince donors to give them additional money for that purpose.
If there were a general sense within MIRI that this was the way forward, and MIRI could express that to donors, I would expect they would get the donor money for it.
They would need to compete with lots of other projects working on AI Alignment. But yes, I fundamentally agree: if there were a project that convincingly had a >1% chance of solving AI alignment, it seems very likely it could raise ~$1M/year (maybe even ~$10M?).
They would need to compete with lots of other projects working on AI Alignment.
I don’t think that’s the case. I think that if OpenPhil believed there was more room for funding in promising AI alignment research, they would spend more money on it than they currently do.
I think the main reason they aren’t giving MIRI more money is that they don’t believe MIRI would spend additional money effectively.
Edit: Yudkowsky answered (https://www.lesswrong.com/posts/34Gkqus9vusXRevR8/late-2021-miri-conversations-ama-discussion?commentId=9K2ioAJGDRfRuDDCs). Apparently I was wrong: it’s because you can’t just pay top people to work on problems that don’t interest them.