Curated.
“How to allocate funding for public goods” is a pretty important question that I feel like civilization is overall struggling with. “How to incentivize honesty, minimize grift, and find value” are all hard questions.
The EA/x-risk ecosystem is somewhat lucky in that there's a group of funders, thinkers, and doers who are jointly interested in the longterm health of the overall network, and who share some concern for epistemics, such that it's a tractable question to ask “how do we make tradeoffs in grantmaking applications?”
Are we trading off “how much value we're getting from this round of grants” against “how much longterm grift or epistemic warping we're encouraging in future rounds”? Are there third options that could get more total value, both now and in the future?
How to optimize a grantmaking ecosystem is a complex question, but I like this post for laying out one set of considerations and pointing towards how to brainstorm better solutions.
Since one person commented privately:
For purposes of curation, I think it's a bit of a point-against-the-post that it's focused on the EA community and is a bit inside-baseball-y, but I think the general lessons here are pretty relevant to the broader societal landscape. (I also think there are enough EA people reading LessWrong that occasional posts somewhat focused on this-particular-funding-landscape are also fine.)
I'm actually fairly curious how much the Silicon Valley funding landscape has the capacity to optimize itself for the longterm. I assume it's much larger and more subject to things like “unilaterally trying to optimize for epistemics doesn't really move the needle on what the rest of the ecosystem is doing overall, so you can't invest in the collective future as easily.” But there might also be a relatively small number of major funders who can talk to each other and coordinate? (Though the difference between that and being a kinda corrupt cartel is also kinda blurry, so watch out for that.)