Isn’t this true in a somewhat weaker form? It takes individuals and groups putting in effort, at personal risk, to move society forward. The fact that we are stuck in inadequate equilibria is evidence that we have not progressed as far as we could.
Scientists moving from Elsevier to open access happened because enough of them cared enough to put in the effort and accept the risk to their personal success. If they had cared a little more on average, it would have happened earlier; if they had cared a little less, it might have taken a few more years.
If humans had 10% more instinct for altruism, how many more of these coordination problems would already be solved? There is a deficit of caring about solving civilizational problems. That doesn’t change the observation that most people are reacting to their own incentives, and we can’t really blame them.
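To make the tipping-point intuition concrete, here is a minimal sketch of a Granovetter-style threshold model. This is my own illustration, not anything from the thread: the function name `cascade_size`, the parameter `caring_boost`, and the numbers are all hypothetical. Each scientist switches to open access once the fraction of colleagues who already switched exceeds a personal threshold, and a small uniform "caring boost" lowers every threshold by that amount.

```python
import random

def cascade_size(n, caring_boost, seed=0):
    """Toy Granovetter-style threshold model of the open-access switch.

    Each scientist has a threshold drawn uniformly from [0, 1]: they
    switch once the fraction who already switched reaches it. A uniform
    `caring_boost` lowers every threshold by that amount. Returns the
    fraction of scientists who end up switching.
    """
    rng = random.Random(seed)
    thresholds = sorted(max(0.0, rng.random() - caring_boost) for _ in range(n))
    adopted = 0
    for t in thresholds:          # sorted ascending, so one pass suffices
        if t <= adopted / n:      # enough others have switched already
            adopted += 1
        else:
            break                 # everyone remaining has a higher threshold still
    return adopted / n

for boost in (0.0, 0.05, 0.10):
    print(f"caring boost {boost:.2f}: {cascade_size(10_000, boost):.0%} adopt")
```

In this run, no boost means the cascade never starts (0% adopt), while even a 5% boost carries it to completion, which is the sense in which caring a little more on average can flip the whole equilibrium rather than just nudging it.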
Yeah, this isn’t obviously wrong from where I’m standing:
“the rules of science aren’t strict enough and if scientists just cared enough to actually make an effort and try to solve the problem, rather than being happy to meet the low bar of what’s socially demanded of them, then science would progress a lot faster”
But it’s imprecise. Eliezer is saying that the amount of extra individual effort, rationality, creative institution redesign, etc. needed to yield significant outperformance isn’t trivial. (In my own experience, people tend to put too few things in the “doable but fairly difficult” bin, and too many things in the “fairly easy” and “effectively impossible” bins.)
Eliezer is also saying that the dimension along which you’re trying to improve science makes a huge difference. E.g., fields like decision theory may be highly exploitable in AI-grade solutions and ideas even if biomedical research turns out to be more or less inexploitable in cancer cures. (Though see Sarah Constantin’s “Is Cancer Progress Stagnating?” and follow-up posts on that particular example.)
If you want to find hidden inefficiencies to exploit, don’t look for unknown slopes; look for pixelation along the boundaries of well-worn maps.