Other ideas that have a similar flavor for me:
Levels of abstraction: if we are dealing with several factors at the object level, and then all of a sudden one factor appears several layers of abstraction up, it is cause for alarm. In general, we want to deal with multi-part things at a relatively consistent level of abstraction.
Steps of inference: the least effective argument or evidence is likely to be the one that is many more inferential steps away than the rest.
Significant figures: if you have factors with 7, 8, or 9 significant figures, and then one factor has only 2, the precision of the whole calculation is reduced to 2. Conversely, if the first factor you encounter has 9 significant figures, it is a bad plan to assume all the other factors have the same (see the sketch below).
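A minimal sketch of that significant-figures point, in Python; the round_sig helper and the specific numbers are my own illustration, not anything from the original discussion:

```python
from math import floor, log10

def round_sig(x: float, sig: int) -> float:
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

# One factor known to 9 significant figures, another to only 2.
precise_factor = 3.14159265   # 9 significant figures
rough_factor = 2.7            # 2 significant figures

product = precise_factor * rough_factor
print(product)                # 8.482300155 -- looks very precise...
print(round_sig(product, 2))  # 8.5 -- but only 2 figures are actually meaningful
```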
I feel like scope sensitivity is a better fit for this purpose than for its original one, but I suppose renaming that one to “size insensitivity” or “magnitude neglect” is too much to ask now. Generating a bunch of plausible names, mostly by smashing together similar-feeling words:
Dimension matching [1]
Degree matching [1]
Grade matching [1]
Scope limiting
Locality bias [2]
Hyperlocality [2]
Under-generalization (of the question) [3]
Single Study Syndrome [3]
Under-contextualization [4]
Typical context bias [4]
Reference Classless [5]
[1] Following the sense of scope, and looting math terms.
[2] Leaning into the first two examples, where the analysis is too local or maybe insufficiently global.
[3] Leaning into Scott Alexander posts, obliquely referring to Generalizing From One Example and Beware the Man of One Study respectively.
[4] This feels like the typical mind fallacy, but for the context being investigated, i.e. assuming the same context holds everywhere.
[5] In the sense of reference class forecasting.
Interesting point about levels of abstraction; I think I agree, but what is a good example?
When there was a big surge in people talking about the Dredge Act and the Jones Act last year, I would see conversations in the wild address these three points: unions supporting these bills because they effectively guarantee their members’ jobs; shipbuilders supporting these bills because they immunize them from foreign competition; and the economy as measured in GDP.
Union contracts and business decisions are both at the same level of abstraction: thinking about what another group of people thinks, and what they do because of it. GDP, by contrast, is a towering pile of calculations that reduces every transaction in the country to a single number. It would take a lot of additional argumentation to build the link between the groups of people making decisions about the thing we are asking about and the economy as a whole. Even if it didn’t, it would still have problems like double-counting the activity of the very unions and businesses under consideration, and including lots of irrelevant information, like the entertainment sector of the economy, which has nothing to do with intra-US shipping and dredging.
Different levels of abstraction have different relationships to the question we are investigating. It is hard to put these different relationships together well for ourselves, and very hard to communicate them to others, so we should be pretty skeptical about mixing them, in my view.