Generally, I agree that exploring more effective ways to scale things up is a good idea—including radical reassessment of recruitment.
However, there’s a fundamental point that ought to be borne in mind when making arguments along the lines of [structures like X solve problems well]:
We did not choose this problem.
We don’t get to change our minds and pick another problem if it looks difficult/impossible.
[There’s a sense in which we chose the AGI problem, and safety/alignment is just a sub-problem that comes up, but I don’t think that’s a sensible take on the situation: super-human adversarial agency is unlike any sub-problem in history.]
Companies choose problems that look solvable, and if a particular sub-problem turns out to be much harder than expected they’ll tend to find workarounds or to abandon the problem and pick one that looks more approachable. On the highest level, a company only needs to make money; conveniently, that is a problem with many known solutions.
The same applies to universities and individuals: the most productive ones focus on problems that seem solvable. When they bump into the kind of problems that have a significant chance of being unsolvable, most will pivot to something else (and most who don’t, fail).
Paying a load of smart people is a good way to ensure that some problems get solved. Paying a load of smart people is only a good way to solve [the particular problem you chose] if you chose a suitable problem.
That’s not the situation here: we must solve this particular, fixed problem, and it’s a problem no sane person would choose to have to solve.
Again, I agree with considering taking some radical steps (including throwing money at the problem) to get the right people working on this. However, my argument for that would be entirely from first principles.
Any argument based around “This is what successful companies/universities/people do” implicitly assumes that the metric for ‘success’ usefully translates. I don’t think it does—unless you have a reference class for successful approaches to [extremely hard problems organisations had no choice but to solve] (noting that “they had no choice” is usually a huge exaggeration).
For example, even the Manhattan Project isn’t a great comparison, since it was set up precisely because the problem looked difficult-but-solvable.
We’re in an unprecedented situation, so any [what usually works] approach can only hope to address sub-problems (the kind of sub-problems that [what usually works] tends to find). We still need some reason to expect that solving those sub-problems will help solve our original problem.