I don’t think this is the actual bottleneck here. Notably, Eliezer, Nate, and John don’t spend much, if any, of their time assessing research at all (at least recently), as far as I can tell.
I don’t think a public market will add much information. It would probably be better to just have grantmakers with more context forecast and see how well they do. You need feedback loops faster than one year to get anywhere, though, but you can get those by practicing on a bunch of already-completed research.
My current view is that the bottleneck in grantmaking is more a lack of good stuff to fund than grantmakers failing to fund stuff, though I do still think that Open Phil should fund notably more aggressively than they currently do, that marginal LTFF dollars look great, and that it’s bad that Open Phil was recently substantially restricted in what they can fund (which I expect to have substantial chilling effects beyond the restricted areas themselves).
Notably, Eliezer, Nate, and John don’t spend much, if any, of their time assessing research at all (at least recently), as far as I can tell.
Perhaps not specific research projects, but they’ve communicated a lot regarding their models of what types of research are good or bad. (See, e.g., Eliezer’s list of lethalities, John’s Why Not Just… sequence, and this post of Nate’s.)
I would assume this is because such assessment doesn’t scale, and because their reviews are not, in any given instance, the ultimate deciding factor in what people do or what gets funded. Spending time evaluating specific research proposals is therefore cost-inefficient compared to commenting on general research trends/themes.
My current view is that more of the bottleneck in grantmaking is not having good stuff to fund
Because no entity that I know of is currently explicitly asking for proposals that Eliezer/Nate/John would fund. Why would people bother coming up with such proposals in these circumstances? The system explicitly doesn’t select for it.
I expect that if there were actual explicit financial pressure to Goodhart to their preferences, many more research proposals that successfully do so would be around.