There’s something of a problem with sensitivity: if the x-risk from AI is ~0.1 and the change in x-risk from some grant is ~10^-6, then any difference between the forecasts is going to be completely swamped by noise.

(While people in the market could correct any inconsistency between the predictions, they could only look forward to ~0.001% returns over the next century.)
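For concreteness, the arithmetic behind the 0.001% figure can be sketched as follows (the numbers are the ones from the comment; the two-contract setup is a hypothetical illustration, not a specific market design):

```python
# Sketch of the signal-to-noise problem: a conditional prediction market
# prices x-risk with and without the grant, and a trader's edge comes
# from correcting the gap between the two prices.

baseline_risk = 0.1   # assumed market price of the "x-risk occurs" contract
grant_effect = 1e-6   # assumed absolute reduction in x-risk from the grant

# The mispricing a trader could exploit is at most the size of the effect:
price_gap = grant_effect

# Relative return for correcting that gap, as a fraction of capital at risk:
relative_edge = price_gap / baseline_risk
print(f"{relative_edge:.3%}")  # prints "0.001%" -- earned over the contract's whole lifetime
```

Since the edge is realized only when the contract resolves, possibly a century out, it is far below any plausible noise floor or opportunity cost, which is the sensitivity problem in a nutshell.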
Making long-term predictions is hard; that’s a fundamental problem. Having proxies can be convenient, but they won’t tell you anything you don’t already know.
Yeah, in cases like these, having intermediate metrics seems pretty essential.