I’ll take a stab at some of these questions:
Why are people so insistent about outliers?
For the news industry, because it causes extreme emotional reactions, which give them guaranteed large demand and thus give them lots of money.
For the public, it’s primarily due to the fact that we don’t realize how uninformative the outliers actually are, because our intuitions were geared towards small-scale societies, and we don’t realize that in large worlds, the tails become truly extreme but also uninformative.
From gwern:

Because weirdness, however weird or often reported, increasingly tells us nothing about the world at large. If you lived in a small village of 100 people and you heard 10 anecdotes about bad behavior, the extremes are not that extreme, and you can learn from them (they may even give a good idea of what humans in general are like); if you live in a ‘global village’ of 10 billion people and hear 10 anecdotes, you learn… nothing, really, because those few extreme anecdotes represent extraordinary flukes which are the confluence of countless individual flukes, which will never happen again in precisely that way (an expat Iranian fitness instructor is never going to shoot up YouTube HQ again, we can safely say), and offer no lessons applicable to the billions of other people. One could live a thousand lifetimes without encountering such extremes first-hand, rather than vicariously.

Link:
https://gwern.net/littlewood#epistemological-implications
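A minimal sketch of the point about tails: the most extreme member of a sample grows with sample size, so a “global village” reliably produces far more extreme anecdotes than a real village ever could. (The population sizes and the normal-distribution assumption here are purely illustrative, not from the original.)

```python
import random

random.seed(0)

# Most extreme person in a 100-person village vs. a million-person "global
# village", assuming the trait is standard-normally distributed (an
# illustrative assumption).
village_max = max(random.gauss(0, 1) for _ in range(100))
world_max = max(random.gauss(0, 1) for _ in range(1_000_000))

print(f"village extreme: {village_max:.2f} sigma")
print(f"world extreme:   {world_max:.2f} sigma")
# The larger the population, the more extreme (and the less representative
# of everyone else) the record-holder is.
```

For a normal trait the expected maximum grows roughly like the square root of the log of the population, so record-holders keep getting more extreme even as they tell you less about the typical person.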
Edit: There is one big case where it’s correct to be insistent about outliers: if you think something is distributed log-normally or by a power law, then the outliers contain most of what you value, and thus the outliers are the most important things to get right.
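The log-normal/power-law caveat can be sketched with a quick simulation (the distribution parameters are invented for illustration): under a heavy-tailed distribution the top 1% of draws holds most of the total value, while under a thin-tailed one it holds almost none.

```python
import random

random.seed(0)

# Heavy-tailed case: log-normal values (illustrative parameters mu=0, sigma=3).
values = sorted((random.lognormvariate(0, 3) for _ in range(100_000)),
                reverse=True)
top_share = sum(values[:1_000]) / sum(values)  # share held by the top 1%

# Thin-tailed baseline for contrast: uniform values on [0, 1).
uniform_vals = sorted((random.random() for _ in range(100_000)), reverse=True)
uniform_share = sum(uniform_vals[:1_000]) / sum(uniform_vals)

print(f"log-normal: top 1% holds {top_share:.0%} of the total")
print(f"uniform:    top 1% holds {uniform_share:.0%} of the total")
```

In the heavy-tailed case ignoring outliers means ignoring most of what matters; in the thin-tailed case it costs you almost nothing. Which regime you are in decides whether outlier-insistence is a bias or a virtue.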
Why do people tend to talk in funky discrete ways when careful quantitative analysis generally finds everything to be continuous?
Basically, because it’s easier to reason about a binary/discrete system than about continuous quantities, and because stories that turn on binary/discrete effects are more fun to tell than stories about continuous change.
However, there’s a perfectly respectable argument that the universe is, at its core, a discrete rather than continuous entity, though I’m not endorsing it here.
Scott Aaronson defends it in the blog post below, essentially arguing that the Bekenstein bound forces nature to be fundamentally finite:
https://scottaaronson.blog/?p=2820
https://en.wikipedia.org/wiki/Bekenstein_bound
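For reference, the Bekenstein bound caps the entropy $S$ (and hence the information content) of any physical system with energy $E$ that fits inside a sphere of radius $R$:

```latex
S \le \frac{2 \pi k_B R E}{\hbar c}
```

Since the right-hand side is finite for any bounded region with bounded energy, the number of distinguishable states inside it is finite, which is the sense in which the argument takes nature to be fundamentally discrete.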
Why do people want “the” cause?
A lot of it is that people deeply want the world to be simple, and in particular monocausal, because that would make our lives easier.
More importantly, science is a lot, lot harder in a polycausal world, which Scott Alexander explains in his post here:
https://slatestarcodex.com/2018/09/13/the-omnigenic-model-as-metaphor-for-life/
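A minimal sketch of why “the” cause is so hard to find in a polycausal world (all numbers here are made up for illustration): give a simulated trait hundreds of tiny independent contributions, and even the single largest contributor explains almost none of the variance.

```python
import random

random.seed(0)

n_people, n_variants = 2_000, 500

# Each variant has a small random effect; each person carries 0, 1, or 2 copies.
effects = [random.gauss(0, 1) for _ in range(n_variants)]
genotypes = [[random.randint(0, 2) for _ in range(n_variants)]
             for _ in range(n_people)]
trait = [sum(e * g for e, g in zip(effects, person)) for person in genotypes]

def r_squared(xs, ys):
    """Fraction of variance in ys explained by a linear fit on xs."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

# Even the strongest single variant explains only a sliver of the trait.
best = max(r_squared([g[j] for g in genotypes], trait)
           for j in range(n_variants))
print(f"best single variant explains {best:.1%} of the variance")
```

When every individual cause looks this weak, isolating and testing causes one at a time stops working, which is exactly what makes science harder in a polycausal world.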
Also, probably working memory limitations are a problem.
Why are people dissatisfied with GWAS?
People are dissatisfied with GWAS because it fundamentally challenges one of our more important beliefs: that we have an essentially full internal locus of control over our traits. An internal locus of control is a more mentally healthy mindset than an external one, so I do think there are real reasons here, but it predictably underestimates uncontrollable factors like genetics.
It also has similarities to why people hate the scaling hypothesis and the Bitter Lesson in AI.
What’s wrong with symptom treatment?
While there is a definite over-resistance to symptom treatment, which comes from a place similar to opposing incremental change in favor of radical revolution, part of the issue is that when you don’t intervene on the causes, the effects of symptom treatment can fade away, offering only temporary solutions. I think this is less of a problem than people believe, but it is still worth keeping in mind.
Cf. Gwern on “Everything Is Correlated”:
https://gwern.net/everything#importance
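A toy causal sketch of the “downstream” point (variables and effect sizes invented for illustration): a hidden cause drives both a symptom and the outcome we care about, so the two correlate strongly, yet forcibly fixing the symptom leaves the outcome completely untouched.

```python
import random

random.seed(0)
n = 10_000
cause = [random.gauss(0, 1) for _ in range(n)]

def world(cause, fix_symptom=None):
    """Symptom and outcome are both downstream of the hidden cause;
    the outcome does not depend on the symptom at all.
    fix_symptom models a do()-style intervention on the symptom."""
    rng = random.Random(42)  # identical noise in both runs, for a clean comparison
    symptom = [c + rng.gauss(0, 0.5) for c in cause]
    if fix_symptom is not None:
        symptom = [fix_symptom] * len(cause)
    outcome = [c + rng.gauss(0, 0.5) for c in cause]
    return symptom, outcome

def corr(xs, ys):
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

symptom, outcome = world(cause)
_, outcome_treated = world(cause, fix_symptom=0.0)

print(f"symptom-outcome correlation: {corr(symptom, outcome):.2f}")  # strong
print("outcome moved by treating the symptom:",
      outcome != outcome_treated)  # False
```

The observational correlation makes the symptom look like a promising target, but because it is downstream of the real cause, the intervention is hollow in exactly the sense described above.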
Why does medicine have such funky qualitative reasoning?
A lot of it is because medicine is heavily regulated and generally wants to keep people safe at all costs, which means it often doesn’t consider other relevant outcomes, so an overfocus on the negative is likely.
Why do people seem to be afraid of measuring things?
Basically, because measuring something makes it legible and less special in their minds, and critically this usually means the thing loses status (or at best retains it).
More generally, if you are subject to measurement, then you can realistically only lose status or keep it unchanged, so it’s often not worth it to measure things.
Also, if you measure something accurately, it can be a blow to certain political opinions, and given that most people are low-decouplers, this means that they have an incentive not to measure something.
Why is there no greater consensus for large-scale models?
Basically, it comes from the same place as disliking the Bitter Lesson and the scaling hypothesis: because it means your specially handcrafted inductive biases for small models don’t work.
https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf
More importantly, science is a lot, lot harder in a polycausal world.
Can we “rescue” the notion of objectivity?
Tentatively, I think not in the limit, and in the most general case the answer is definitely no; so the real question is in which domains we want to rescue objectivity.
This article hints that we may not be able to get a notion of objectivity consistent with our other assumptions about reality.
https://www.quantamagazine.org/metaphysical-experiments-test-hidden-assumptions-about-reality-20240730/