Divergence causes isolated demands for rigor
If you question enough things, you can poke holes in any theory, argument, statement, or any other information-transmitting abstraction.
The fully-fledged argument for this is made fairly well in Beware isolated demands for rigor, so I won’t repeat it in full.
The gist of it is that you can always disprove an argument if you dig deep enough into the epistemology and context backing it up, but, by doing so, you can make any productive discussion impossible.
I
For example:
A: I found that molecule X cures brain cancer.
B: But does it really “cure” cancer? Are you certain? Or did you just observe it in some percentage of patients?
A: No, I observed it in an n=50 study; all of them showed no sign of cancer after 1 month of treatment.
B: But 50 is an awfully low number; I think to have any certainty you need a bigger sample.
A: I mean, I think it’s fairly stunning, considering that for any other treatment the probability of leading to those kinds of outcomes in 50 people is p<0.001. But I get your point, let me run a bigger trial.
A: Ok, I ran a trial on 5000 people, cancer cured in 100% of them based on 6 different state-of-the-art assays, p < 0.000… that this finding is spurious, etc.
B: But really, there’s some underlying assumption you’re making about the world by giving us that p-value; it wouldn’t hold if you started with different priors about the relationship between truth and the null hypothesis.
A: Ok, point taken, but whatever method I use to determine the p-value, you could argue the same thing, and there’s basically no method of analysis you could apply here that wouldn’t reach the same conclusion about the results being stunning.
B: Why, of course you can. Actually, I needn’t even do that; think about your claim of 100%. That’s a fairly tight claim: what if, in the next hundred years, a single individual is not cured by this molecule? Who knows, maybe he has a weird gene coding for an enzyme that destroys it?
A: But couldn’t you make that point about literally any single drug?
B: Look, let’s focus on your claim specifically, ok? We’re not talking about other drugs here. Anyway, you claim the cancer went away, but how did you deduce that it “went away”? How accurate was your measurement?
A: Well, for one, we used a radioactive carbon glucose assay plus a drug to increase blood uptake to the brain and blood-brain barrier permeability, and then ran a PET scan.
B: Oh my, that’s not a very accurate method, is it? I mean, you need huge clumps, hundreds of thousands of cells, to detect them that way. Besides which, even though it is common, not all cancerous cells absorb more glucose than neighboring cells. Maybe you’ve just stumbled upon the first incidence of ketogenic cancer.
A: But that would make almost all studies about cancer invalid.
B: And anyway, doesn’t the human brain contain thousands of cancerous cells at any given moment, in anyone, with the immune system fighting to keep them from causing a tumor?
A: I guess it does, but we don’t really call it cancer until it becomes a potentially life-threatening tumor.
B: Oh, and by the way, what do you mean by “cure”? Can you come up with a rigorous definition of that word? Why are you so sure cancer is “cured”? Maybe some strange people have weird happiness-determination mechanisms and really enjoy cancer. Maybe the brain cancer damaged an area of their brain that was the cause of their unhappiness. Maybe it made them realize the shortness of life and thus finally pushed them to do meaningful things.
A: But then you’re just saying that nobody in medicine should use the word “cure” anymore; this argument applies to literally all of medicine.
B: Even then, saying your molecule cures cancer is being naive; you’re confusing correlation with causation. Until we have a perfect model of the human brain at a cellular level, it’s really impossible to know if this is a cause or if your drug is just correlated with something that cures cancer. Maybe it’s the drug’s wrapping emanating some cancer-curing aura.
A: Fuck you!
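(As an aside, A’s p-value claim above is easy to reproduce under an explicit null hypothesis. Here’s a minimal sketch in Python; the 1% spontaneous-remission rate is my own illustrative assumption, not a figure from the dialogue:

from scipy.stats import binom

# Illustrative assumption: under the null, an untreated patient's cancer
# resolves on its own within a month with probability 1%.
null_remission_rate = 0.01

n_patients = 50  # size of A's first study
n_cured = 50     # all of them showed no sign of cancer

# One-sided p-value: P(at least 50 cures out of 50 | null).
p_value = binom.sf(n_cured - 1, n_patients, null_remission_rate)
print(p_value)  # ~1e-100, far below B's p < 0.001 bar

Even under far more generous nulls the result stays astronomically significant, which is exactly why B has to retreat to attacking the priors instead.)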
II
I think people make isolated demands for rigor of their outgroup.
The obvious example here is something like:
Politicians from my tribe are good and virtuous people who have to be realistic when navigating a web of complex incentives, making tradeoffs in order to achieve good long-term results, appease their voters, and keep good relationships with their party. Thus I don’t expect them to always vote in an “ideal” way. Now and then they will vote for something I consider immoral or wrong, but we have to be realistic and improve the world one step at a time, which involves being open to compromise.
Hey look, that politician from the opposite tribe voted for something immoral; the only reason for voting for something immoral is being a piece of sub-human trash. That’s him, the politician from the other tribe, a piece of sub-human trash, because he voted for that thing which is immoral and bad!!!
This behavior starts way before our most stupid political instincts kick in; you can see it in children.
The “unpopular” kid is someone who diverges from the rest of the group for various reasons: a different social stratum, a different religion, a different skin color, a different accent, different background knowledge from their parents. They become the butt of jokes and insults that could equally well apply to any of the other kids.
Yeah, maybe you’re good at the game, but your mom is fat.
Statistically, the “your mom is fat” descriptor applies to ~60% of kids in the UK, and it’s probably unrelated to whatever game the kids are playing. But “your mom is fat” becomes a valid insult when speaking to the divergent kid, the outgroup.
III
“Your mom is fat” lives on into adulthood and seems to be present even in very rigorous fields.
An interesting example here is the Bates method for improving eyesight. It seems to consist of things like staring at various images, looking towards the sun with your eyes shut tight, and tracking objects with your vision.
Wikipedia tells me that this is a discredited method, based on a lack of evidence and the underlying theory being false, which seems fine. But let’s look into why it’s discredited:
There is no good evidence that any kind of training can change the refractive power of the eye. Moreover, certain aspects of the Bates method can put its followers at risk: they might damage their eyes through overexposure to sunlight, not wear their corrective lenses while driving, or neglect conventional eye care, possibly allowing serious conditions to develop.
Some of those claims I can trace to only a single secondary source (nr. 5 on Wikipedia), and it specifically mentions sunlight damaging the eye only if people open their eyes… while doing the exercise that has them face the sun with their eyes closed. So, let’s look at this argument again and split it into parts:
There is no good evidence that any kind of training can change the refractive power of the eye
It doesn’t really seem like a fully fledged clinical trial was ever run to test this; what’s more, the method was advocated so long ago that it’s hard to believe people even knew how to run a clinical trial that would have been believable to today’s medical establishment.
Still, this seems like a fair point: there’s no proof this works besides some case studies that could be fake/biased.
Then follows:
Moreover, certain aspects of the Bates method can put its followers at risk:
they might damage their eyes through overexposure to sunlight
Based on this same reasoning, let’s find some risk categories among the glasses-wearing demographic:
Users of photochromic lenses and corrective sunglasses risk damaging their eyes, because they will be tempted to look into the sun despite a damaging effect remaining.
A user of blue-light-filtering lenses might damage his eyesight and sleep schedule by believing it’s less harmful to look at his phone/tablet before bed, and thus doing it more.
not wear their corrective lenses while driving
Based on this same reasoning, let’s find some risk categories among the glasses-wearing demographic:
Every glasses-wearing individual is at risk unless they wear their glasses 24/7, because they may forget to put them on while driving; ideally they should sleep with them on, just in case.
Every glasses-wearing individual is at risk because their condition might worsen, but always wearing glasses (again, remember, if you take them off you may forget to put them back on while driving) can cause them not to notice this slow decrease in quality and thus not get new glasses, putting them and others at risk while driving.
Every glasses wearer is at risk while driving unless they use some sort of strap, since the glasses might fall off their face during a violent swerve or hard braking, precisely when they are needed most.
neglect conventional eye care
Let’s see:
Wearing glasses might cause someone to neglect conventional eye care. Namely, they might neglect getting laser surgery, a permanent fix for their problem, because wearing glasses makes them forget about, or put too little emphasis on, their eyesight problem.
possibly allowing serious conditions to develop
Wikipedia seems to just be wrong in even citing this one, because it’s actually a claim about the dangers of iridology (some other alternative-medicine BS), not the Bates method, found in the same secondary source.
In short, all but one of the arguments are bullshit isolated demands for rigor that aren’t applied to modern optometry or modern medicine in general.
Except for the first “no evidence” argument, all of these could be arguments against any sort of intervention to correct eyesight.
So let me rephrase this argument:
There is no good evidence that any kind of training can change the refractive power of the eye. Moreover, Bates’ mom is fat and he’s ugly and somebody once told me they tried this and then they looked into the sun for too long and went blind.
IV
This is more or less the argument being made and accepted here. Why?
Because “fuck alternative medicine”: it’s the outgroup, it’s evil, and evil people should be hit with any and all arbitrary and isolated demands for rigor because we don’t like them.
At least I can’t think of any other reasoning. Any doctor or researcher will “score virtue points” by arguing against alternative medicine. Thus, everyone will throw insults at the method until some stick, and none will bother pointing out how unfair this is… because they’d be defending alternative medicine, they’d be defending the evil outgroup, and they’d be shunned for it.
I think my hate for alternative medicine is about as high as it can get, BUT this approach is still stupid and harmful to everyone. By throwing isolated demands for rigor at the thing, you are weakening the only relevant claim:
There is no good evidence that any kind of training can change the refractive power of the eye.
Why even bother with the rest? This claim alone is disproof enough of the method.
Now, imagine a somewhat intelligent alt-medicine kind of person reading this. What would happen? Might she figure out that some of those are just arbitrary demands for rigor coming from a place of bad faith? Might she thus choose to go ahead and also distrust the studies that investigate the refractive-power claim? Might she conclude that medicine is just a political game, and that 99% of medics just so happen to be on the wrong side of the fence at the moment?
That’s how you turn relatively smart people into homeopaths, anti-vaxxers, and 5G conspiracy theorists. You try to disprove the theory so hard, because it seems divergent and evil, that you get a few semi-intelligent contrarians (the linguistically inclined kind) who can notice that some of your arguments are just shit-talking.
What happens afterward? Maybe they study the literature carefully and discover the rest of the arguments are good… or maybe they just decide you’re full of shit and shouldn’t be trusted. I get a feeling that’s how otherwise smart people often get into alternative medicine: they never get a raw dose of the argument against it, they get status-signaling shit-talking with the real arguments intermixed.
V
A blooming idea should be nurtured by cutting it some epistemic slack rather than holding it to standards higher than those to which we hold all of our other ideas.
Otherwise, aggressive tribes that never discuss things are formed, because each tribe will hold the other’s ideas to stricter standards of rigor than its own, ending up in thought bubbles that are impossible to break.
Otherwise, people are led away from believing perfectly reasonable mainstream theories in favor of quacky nonsense, because when quacky nonsense is held to standards of proof so high as to invalidate everything else, quacky nonsense seems about as valid as everything else.
Related to https://wiki.lesswrong.com/wiki/Arguments_as_soldiers: these are mostly examples of non-truth-seeking discussions, looking for advantage or status rather than reframing the questions into verifiable propositions. See also https://wiki.lesswrong.com/wiki/Politics_is_the_Mind-Killer: these topics mostly can’t be usefully resolved without a pretty serious commitment by all participants.
Note that in MANY cases, the pattern is compounded by lack of rigor from the proponent, which makes them susceptible to demands on ludicrous dimensions. If the cancer cure is actually solid, then A very quickly points to what the study claims and how it’s validated, doesn’t claim any more than that, and then skips all the middle steps, going straight to “fuck you” when it becomes clear that B doesn’t care.
Alternative medicine proponents (as far as I’ve seen) nearly universally make amazingly strong claims that their methods should be embraced, with near-zero theoretical or statistical backup. If they just said “the standard model misses a lot of individual variance, and this thing has the following risk/benefit likelihoods”, I’d listen more.
Finally, this often comes up on topics where one or more participants isn’t motivated to seek the truth. If you’re arguing for entertainment, rather than putting work into understanding things, all bets are off. And if you’re trying to explore the truth, but your partners are just enjoying the attention, you’re likely to find yourself frustrated. Probably best to find new partners for that part of your investigation.
In hindsight I think I’m repeating a lot of the points made here, but maybe with more of an emphasis on how “not” to discredit a bad idea rather than on ideas competing on “equal” grounds.
Yes, but generally speaking I think these kinds of people are selected for exactly because the “the standard model misses a lot of individual variance, and this thing has the following risk/benefit likelihoods” kind of people are treated with equally inadequate standards.
To take one example, in the “gluten” debate one can detect 3 camps:
1. Standard position (gluten should only be cut in the case of celiac disease)
2. Nuanced alternative (celiac disease is not clearly defined; we should look at various antibody levels after a gluten challenge plus various HLA genes, and recommend a gluten-free diet if we notice values 0.5 std above the mean… or something like that)
3. Gluten is bad and literally the source of the primordial decline of man; nobody should eat it.
Arguably, position 2 is probably too extreme and evidence for it is still lacking, but given that a significant fraction of the population seems to do better without gluten, you either decide to cut position 2 some epistemic slack and merge it into the mainstream (or at least propose it as an option, much like an orthopedist might suggest yoga as an alternative to standard physiotherapy), or you get people flocking to 3, since 2 and 3 are seen as equally bad, and 3 is simpler and comes packaged with an explanation for why the establishment rejects it (the establishment is blind and/or evil).
Easy to say, but hard to detect. It’s easy to detect in, e.g., politics, but maybe not so much in a more rigorous subject, where the isolated demands for rigor thrown against the divergent position might be very similar to the standards “common knowledge” is held to.
Medicine is rife with selective demands for rigor. For example:
The evidence behind current dietary guidelines is weak: https://www.sciencedirect.com/science/article/abs/pii/S0899900710002893
But it is demanded that critics meet a high hurdle: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(16)31278-8/fulltext
Before taking statins, consider “Statin wars: have we been misled about the evidence? A narrative review”, doi:10.1136/bjsports-2017-098497
Link to that last one: https://bjsm.bmj.com/content/bjsports/early/2018/01/16/bjsports-2017-098497.full.pdf?ijkey=Rsap0XafljfcOCR&keytype=ref