Yeah, this isn’t obviously wrong from where I’m standing:
“the rules of science aren’t strict enough and if scientists just cared enough to actually make an effort and try to solve the problem, rather than being happy to meet the low bar of what’s socially demanded of them, then science would progress a lot faster”
But it’s imprecise. Eliezer is saying that the amount of extra individual effort, rationality, creative institution redesign, etc., needed to yield significant outperformance isn’t trivial. (In my own experience, people tend to put too few things in the “doable but fairly difficult” bin, and too many things in the “fairly easy” and “effectively impossible” bins.)
Eliezer is also saying that the dimension along which you’re trying to improve science makes a huge difference. E.g., fields like decision theory may be highly exploitable in AI-grade solutions and ideas, even if biomedical research turns out to be more or less inexploitable in cancer cures. (Though see Sarah Constantin’s “Is Cancer Progress Stagnating?” and follow-up posts on that particular example.)
If you want to find hidden inefficiencies to exploit, don’t look for unknown slopes; look for pixelation along the boundaries of well-worn maps.