For purposes of curation, I think it’s a bit of a point-against-the-post that it’s focused on the EA community and is a bit inside-baseball-y, but I think the general lessons here are pretty relevant to the broader societal landscape. (I also think there are enough EA people reading LessWrong that occasional posts somewhat focused on this-particular-funding-landscape are also fine.)
I’m actually fairly curious how much the Silicon Valley funding landscape has the capacity to optimize itself for the long term. I assume it’s much larger and more subject to things like “unilaterally trying to optimize for epistemics doesn’t really move the needle on what the rest of the ecosystem is doing overall, so you can’t invest in the collective future as easily.” But there might also be a relatively small number of major funders who can talk to each other and coordinate? (But, also, the difference between this and being a kinda corrupt cartel is also kinda blurry, so watch out for that?)