The takeaway isn’t that it “isn’t possible”, since one failure can’t possibly speak to all possible approaches. However, it does give you evidence about the kind of thing that’s likely to happen if you try—and apparently that’s “making things worse”. Maybe next time goes better, but it’s not likely to go better on its own, or by virtue of throwing more money at it.
The problem isn’t that you can’t buy things with money, it’s that you get what you pay for and not (necessarily) what you want. If you give someone a ten-million-dollar budget to buy predictions, they might use it to pay a superforecaster to spend time thinking about their problem, they might use it to set up and subsidize a prediction market, or they might just use it to buy time from the “top of the line” psychics. They will get “predictions” regardless, but the latter predictions do not get better simply by throwing more money at the problem. If you do not know how to buy the results themselves, then throwing more money at the problem is just going to get you more of whatever it is you are actually buying—which apparently wasn’t AI safety last time.
Nuclear weapon research is fairly measurable, and therefore fairly buyable. AI capability research is too, at least if you’re looking for marginal improvements. AI alignment research is much harder to measure, and so you’re going to have a much harder time buying what you actually want to buy.