The concepts here seem pretty easy to understand if you have Econ 101 concepts. I’m not sure what issue you’re pointing to.
As a comparison: early AI-risk content on LessWrong did not engage much with the AI literature, even though most of the analysis (e.g. orthogonality and convergent instrumental goals) followed from concepts found in standard textbooks such as Artificial Intelligence: A Modern Approach. Perhaps the content would have been better received by the mainstream if it had engaged more, but there are tradeoffs: it would have taken longer to write, it would plausibly have been harder to understand (though that’s less clear), and the discussion would have been less strongly filtered for people disposed to thinking conceptually.
Your point could be that appearing to engage with the literature is necessary to appeal to readers who pattern-match on shibboleths rather than thinking conceptually regardless of shibboleths; if so, that should be made explicit, and my sense is that Ben does not actually care about appealing to those readers.
It’s not called Econ 101 because it’s the only material you need.
Engaging with previous work on the subject is just like any other way of being less wrong—if you’re already convinced you’re right, it feels like a tedious box to be checked with no chance of influencing your conclusions. Yes, there is some signalling value to me, but the signalling has value precisely because I assign high probability that there is relevant, important prior work here. (EDIT: where “here” largely means the monetary policy bits, though I would still be positively signalled by some reference-dropping on the cultural stuff).
This seems somewhat reasonable, but there are some caveats here:
If the field contains a bunch of errors (like psychology), then citing work in the field is likely to add more noise than signal unless you have good filtering criteria. At some level of badness, you are not much worse off searching through the Library of Babel than through the literature. The LessWrong sequences sometimes cited social science findings that are now in serious doubt due to the replication crisis; the Robbers Cave Experiment, for example, was basically p-hacked (see the simulation sketch just after these caveats). They would have been better off just referring to common-sense social intuitions.
Among academic fields, IMO economics is on the worse end. There are a bunch of common barely-justified assumptions (e.g. that GDP is good, unemployment is bad, and people use something like CDT) that, while they could be true, are actually counterintuitive on a first-principles analysis. On the outside view, you would expect lots of disinformation in a field as highly political as economics. I basically don’t trust things I read in economics papers except (a) things I can check myself, (b) reports of basic facts about the world (e.g. that some country’s oil exports increased in some time period), and (c) math results that I would expect peer review to catch errors in. (I’m not saying there isn’t good literature out there, just that you have to do your own reasoning to check which of it is any good; see He Who Pays The Piper Must Know The Tune).
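To unpack the p-hacking point from the first caveat: the failure mode is testing many outcome measures and reporting whichever comparison clears p < 0.05. Here is a minimal simulation sketch (my own illustration; the numbers and the crude z-approximation are mine, not drawn from any analysis of the actual study) showing how selective reporting manufactures "significant" results from pure noise:

```python
import random
from math import erf, sqrt
from statistics import mean, stdev

def approx_p(a, b):
    """Crude two-sided z-approximation of a two-sample p-value;
    good enough for illustration, not for real analysis."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

random.seed(0)
STUDIES, TESTS_PER_STUDY, N = 1000, 20, 30
hacked = 0
for _ in range(STUDIES):
    # 20 outcome measures, all pure noise: no real group difference exists.
    p_values = []
    for _ in range(TESTS_PER_STUDY):
        treatment = [random.gauss(0, 1) for _ in range(N)]
        control = [random.gauss(0, 1) for _ in range(N)]
        p_values.append(approx_p(treatment, control))
    if min(p_values) < 0.05:  # report only the best-looking comparison
        hacked += 1

# About 1 - 0.95**20 ~ 0.64 of the "studies" find an effect,
# versus the nominal 0.05 if a single pre-registered test were run.
print(hacked / STUDIES)
```

The exact fraction depends on the tests and on correlations among the measures, but the qualitative point stands: pick-the-best reporting inflates the false-positive rate far above the nominal 5%.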
That said, I think this particular post largely overlaps with Modern Monetary Theory, and it would have benefited from noting the similarity. (I doubt that this would have actually changed the post’s conclusions, though.)
> There are a bunch of common barely-justified assumptions (e.g. that GDP is good, unemployment is bad, and people use something like CDT) that, while they could be true, are actually counterintuitive on a first-principles analysis.
Oskar Morgenstern (who, with John von Neumann, formulated the VNM utility theorem) is a good source for this sort of thing. (See “Does GNP Measure Growth and Welfare?” [PDF], The Limits of Economics, On the Accuracy of Economic Observations.)
In hindsight, it totally would have been better if LessWrong had engaged more with AI: A Modern Approach, though for somewhat different reasons. (I’m not saying anything not implicit in the standard textbooks as far as I know, though.)
I think it would’ve been good because it would have pushed things in a more technical direction, and helped formalise a bunch of our ideas regarding planning, search spaces, and reasoning. Personally, I really enjoyed the intro-to-AI class I took on that textbook at uni, for passages like “The way we form a heuristic in this type of search is by relaxing the constraints of the problem—making the problem easy enough that we can compute an answer quickly”; ideas like that helped me think about my own heuristics.
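As a concrete illustration of that relaxation idea (my own sketch, not an excerpt from the textbook or from anything in this thread): if you relax the 8-puzzle’s rule that a tile may only slide into the adjacent blank, and instead let tiles move through each other, the optimal cost of the relaxed problem is each tile’s Manhattan distance to its goal cell; summing those distances gives an admissible A* heuristic for the original puzzle.

```python
# Minimal sketch (mine, assuming the standard 3x3 8-puzzle): relaxing the
# constraint "a tile may only slide into the adjacent blank" so that tiles
# can move through each other makes the relaxed solution cost equal to the
# sum of each tile's Manhattan distance to its goal cell. Relaxing
# constraints can only make a problem cheaper, so this sum never
# overestimates the true cost, i.e. the heuristic is admissible for A*.

GOAL = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)  # 0 marks the blank

def manhattan(state, goal=GOAL, width=3):
    """Sum of Manhattan distances of each tile (blank excluded)
    from its current cell to its goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // width - goal_idx // width)
        total += abs(idx % width - goal_idx % width)
    return total

# One move from the goal (tile 8 just needs to slide into the blank):
print(manhattan((1, 2, 3,
                 4, 5, 6,
                 7, 0, 8)))  # -> 1
```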
Related to the more technical direction, some of my favourite EY writings are the more technical ones like InEq, QM, Words, and Technical Explanation, because they’re able to communicate the core insight so crisply.
Was that what you had in mind?
I was thinking specifically about engagement with the details of the narrative and content of what’s currently called “AI research,” in addition to the abstract idea of general intelligence.