I once encountered a case of (honest) misunderstanding from someone who thought that when I cited something as an example of civilizational inadequacy (or as I put it at the time, “People are crazy and the world is mad”), the thing I was trying to argue was that the Great Stagnation was just due to unimpressive / unqualified / low-status (“stupid”) scientists. He thought I thought that all we needed to do was take people in our social circle and have them go into biotech, or put scientists through a CFAR unit, and we’d see huge breakthroughs.
Datapoint: I also totally thought that by “people are crazy and the world is mad”, you meant something like this too… in fact, it wasn’t until this sequence that I became convinced for certain that you didn’t mean that. E.g. a lot of the Eld Science stuff in the Old Sequences seemed to be saying something like “the rules of science aren’t strict enough and if scientists just cared enough to actually make an effort and try to solve the problem, rather than being happy to meet the low bar of what’s socially demanded of them, then science would progress a lot faster”. A bunch of stuff in HPMOR seemed to have this vibe too, giving the strong impression that most societal problems are due to failures of individual rationality and could be fixed if people just cared enough.
Yes, I explicitly remember that Jeffreyssai’s answer to the question of what Einstein & co. got wrong was something like “They thought it was acceptable to take 50 years to develop the next big revolution”, and I recall my takeaway from that being “If you think it’s okay to doss around and just think about the fun parts of physics, as opposed to trying to figure out the critical path to the next big insight and working hard on that, then you’ll fail to save humanity”. And the ‘if people just cared enough’ mistake is also one I held for a long time; it wasn’t until I met its strong form in EA that I realised that the problem isn’t that people don’t care enough.
Interestingly enough, James Watson (as in Watson & Crick) does literally think that the problem with biology is that biologists today don’t work hard enough, that they just don’t spend enough hours a week in the lab.
I’m not sure this is true. But even if it were true, you could still view it as an incentive problem rather than a character problem. (The fact that there’s enough money & prestige in science for it to serve as an even remotely plausible career to go into for reasons of vanilla personal advancement means that it’ll attract some people who are vanilla upper-middle-class strivers rather than obsessive truth-seekers, and those people will work less intensely.) You wouldn’t fix the problem by injecting ten workaholics into the pool of researchers.
Isn’t this true in a somewhat weaker form? It takes individuals and groups putting in effort at personal risk to move society forward. The fact that we are stuck in inadequate equilibria is evidence that we have not progressed as far as we could.
Scientists moving from Elsevier to open access happened because enough of them cared enough to put in the effort and accept the risk to their own careers. If they had cared a little bit more on average, it would have happened earlier. If they had cared a little less, maybe it would have taken a few more years.
If humans had 10% more instinct for altruism, how many more of these coordination problems would already be solved? There is a deficit of caring about solving civilizational problems. That doesn’t change the observation that most people are reacting to their own incentives and we can’t really blame them.
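Here is a minimal sketch of that tipping-point intuition, as a Granovetter-style threshold model: each researcher switches to open access once enough colleagues already have, with a small “caring” bonus standing in for willingness to eat personal risk. The function name, the uniform thresholds, and the 90% adoption cutoff are all illustrative assumptions rather than anything from the post.

```python
import random

def years_until_tipping(n=1000, caring_boost=0.05, max_years=100, seed=0):
    """Granovetter-style threshold cascade out of an inadequate equilibrium.

    Each researcher has a hesitancy threshold in [0, 1] and switches once the
    fraction already switched reaches (threshold - caring_boost). Returns the
    number of update rounds ("years") until 90% have switched, or max_years
    if the field stays stuck.
    """
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n)]
    switched = [t <= caring_boost for t in thresholds]   # early movers
    for year in range(1, max_years + 1):
        frac = sum(switched) / n
        if frac >= 0.9:
            return year - 1
        new = [s or (t - caring_boost <= frac) for s, t in zip(switched, thresholds)]
        if new == switched:   # cascade stalled: nobody else will ever move
            return max_years
        switched = new
    return max_years

for boost in (0.02, 0.05, 0.10):
    print(f"caring boost {boost:.2f}: ~{years_until_tipping(caring_boost=boost)} years to tip")
```

With uniform thresholds the cascade advances by roughly the caring boost each round, so doubling the boost roughly halves the time to tip: the “a little more caring, a few years earlier” claim in toy form.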
Yeah, this isn’t obviously wrong from where I’m standing:
“the rules of science aren’t strict enough and if scientists just cared enough to actually make an effort and try to solve the problem, rather than being happy to meet the low bar of what’s socially demanded of them, then science would progress a lot faster”
But it’s imprecise. Eliezer is saying that the amount of extra individual effort, rationality, creative institution redesign, etc. needed to yield significant outperformance isn’t trivial. (In my own experience, people tend to put too few things in the “doable but fairly difficult” bin, and too many things in the “fairly easy” and “effectively impossible” bins.)
Eliezer is also saying that the dimension along which you’re trying to improve science makes a huge difference. E.g., fields like decision theory may be highly exploitable in AI-grade solutions and ideas even if biomedical research turns out to be more or less inexploitable in cancer cures. (Though see Sarah Constantin’s “Is Cancer Progress Stagnating?” and follow-up posts on that particular example.)
If you want to find hidden inefficiencies to exploit, don’t look for unknown slopes; look for pixelation along the boundaries of well-worn maps.