Also, learn to differentiate between genuine curiosity and what I like to call pseudo-curiosity: being satisfied by conclusions rather than concepts. Don't let the two overlap. This is especially hard when conclusions are usually readily available, often as the first item in a Google search. In terms of genuine curiosity, Google has been the bane of my existence: I will start off moderately curious, but instead of moving to that higher stage of curiosity, I will be sated by facts and conclusions without actually learning anything (similar to guessing the teacher's password). After a couple of hours of this, I feel very scholarly and proud of my ability to parse so much information, when in reality all I did was collect a bunch of meaningless symbols.
To combat this, I started keeping a "notebook of curiosities". The moment I get curious, I write down whatever it is I'm curious about, and then write everything I know about it. At that point, I determine whether anything I know is a useful springboard; otherwise, I start from scratch. Then I circle my starting node and start the real work, with the following rules:
Every fact or concept I write must follow directly from a previous node (never more than two or three reasoning steps away); see the sketch after this list. Most of the time, this results in a very large diagram spanning multiple pages. I use pen and paper only, because I like to work on it outside.
Wikipedia is a last resort: I don't want to be tempted by easy facts. I use textbooks → arXiv → JSTOR → Google Scholar, in order of preference. It's a lot of work.
If I skip some reasoning or concept because I think it is trivial, I write the reason why it is trivial. Most of the time, this results in something interesting.
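For concreteness, here is a minimal sketch of the bookkeeping these rules impose, modelled as a toy graph structure in Python. The class and method names are my own invention for illustration; the actual method is pen and paper only.

```python
# A toy model of the "notebook of curiosities": a graph in which every new
# node must hang off an existing node via a short chain of reasoning.

class CuriosityNotebook:
    MAX_STEPS = 3  # a new fact may be at most ~3 reasoning steps from its parent

    def __init__(self, starting_question):
        # The circled starting node has no parent.
        self.nodes = {starting_question: None}  # fact -> (parent, justification)

    def add(self, fact, parent, justification, steps=1):
        """Record a new fact that follows from an already-recorded node."""
        if parent not in self.nodes:
            raise ValueError("every node must follow from a previous node")
        if steps > self.MAX_STEPS:
            raise ValueError("too many reasoning steps; add intermediate nodes")
        self.nodes[fact] = (parent, justification)

    def skip_as_trivial(self, fact, parent, why_trivial):
        # Rule 3: even a 'trivial' step gets a written reason.
        self.add(fact, parent, f"trivial because: {why_trivial}")

notebook = CuriosityNotebook("why do soap bubbles show colours?")
notebook.add("thin films reflect light off two surfaces",
             "why do soap bubbles show colours?",
             "a bubble wall is a thin film of soapy water", steps=2)
notebook.skip_as_trivial("the two reflections can interfere",
                         "thin films reflect light off two surfaces",
                         "two coherent waves of similar amplitude superpose")
```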
Doing this has revealed many gaps in my knowledge. I've become increasingly aware of how poorly I had internalized basic concepts and modes of thinking that more advanced ideas depend on. It also forces me to confront my actual interest in the subject, rather than my perceived interest.
Most of what I use it for is math-related, so it's tailored to that use case.
This goes away when you start to realize what shit sources like Wikipedia are. Go through the sources cited by Wikipedia articles sometimes. Realize that everything presented to you as fact is generally a conclusion reached by people who are downright terrible at basic reasoning.
Practicing rephrasing everything in E-Prime can be helpful; take special note of when normative and positive statements start becoming muddled.
I have to agree on the terribleness of Wikipedia. Wikipedia's approach is this: if you can cite a source saying that 2 * 2 = 5, then you can write about it, but it is a mortal sin against Wikipedia to derive 2 * 2 = 4 from first principles. That's because Wikipedia is an encyclopedia, and its process only rearranges knowledge while introducing biases and errors; that's by design. The most common bias is representing both sides equally when they shouldn't be; the second most common is that, when it comes to basic reasoning, the side with the most people editing Wikipedia wins while screaming "lalala, original research, can't hear you".
For 2 * 2 it does generally work, and the rules get glossed over; for anything more complicated, well, Wikipedia equates any logic with any nonsense.
Then a great many websites regurgitate stuff from Wikipedia, often making it very difficult or impossible to find any actual information.
That being said, Wikipedia is a pretty good online link directory. Just don't rely on the stuff written on Wikipedia itself, don't rely on articles that repeated a 'citation needed' passage and were then added as the needed citation, and be aware that the selection of links can be very biased.
The rule requiring citations and forbidding derivations from first principles is deliberate, and (so long as participation is open to everybody) I think removing it would do more harm than keeping it: it's hard to tell whether a derivation from first principles in a field you're not familiar with is valid, so short of somehow magically increasing the number of (say) editors with a PhD in physics by a factor of 10, allowing OR would essentially give free rein to crackpots, since there wouldn't be many people around who could find the flaws in their reasoning. Right now they (at least in principle) have to find peer-reviewed publications supporting their arguments, which is not as easy as posting some complicated derivation and hoping no one finds the errors.
One big problem with Wikipedia (which I'm not sure could be fixed even in principle) is that sometimes you're not allowed to taboo words, because you're essentially doing lexicography. If the question is "Was Richard Feynman Jewish?", "He had Jewish ancestry but he didn't practise Judaism" is not a good-enough answer when what you're deciding is whether the article about Feynman belongs in the category for Jewish American physicists; if the question is "Was an infant who has since become a transsexual woman a boy?", answering "it had masculine external genitalia but likely had feminine brain anatomy" is not good enough when what you're deciding is whether the article should say "She was born as a boy"; and so on and so forth. (There once was an argument about whether accelerometers measure inertial acceleration, even though both parties agreed about what an accelerometer would read in every situation they could come up with, because they meant different things by "inertial acceleration". Then someone came up with other situations, such as magnetically levitating the accelerometer or placing it somewhere with non-negligible tidal forces, and the parties did disagree about what would happen. (My view is that then you're just misusing the accelerometer, and drawing any conclusions from such circumstances is as silly as saying that resistance is not what ohmmeters measure because if you put a battery across an ohmmeter, what it reads is not the internal resistance of the battery.) But IIRC, rather than pointing that out, I just walked away and left Wikipedia, though I later came back under a different user name.)
Agreed that removing the condition against first principles would perhaps screw stuff up more.
But the attitude against original research is uncalled for. When there's someone who misunderstands the quoted articles, you can't just go ahead and argue from first principles: nooo, that's original research. And the attitude is: I'm not ashamed, I'm proud that I don't understand the topic we're talking about; I'm proud I don't (because I can't) do original research. Non-experts come up with all sorts of weird nonsense interpretations of what experts say, interpretations that experts would never even feel the need to publish anything to dispel. And then you can't argue with them rationally; they proudly reject any argumentation from first principles.
Huh, yes. OR shouldn't be allowed in articles, but it should be on talk pages. (Plus, some people use a ridiculously broad definition of OR. If I pointed out that the speed of light in m/s is exact and the number of metres in a yard is exact, and proceeded to give the exact value of the speed of light in imperial units, and I called that original research of mine anywhere outside Wikipedia, I'd be (rightly) laughed away. Hell, even my pointing out that the word Jewish has several meanings was dismissed as OR, by someone who insisted that on Wikipedia the only possible meaning of Jewish is 'someone who a reliable source refers to as Jewish'.)
That’s not reasonably called OR on Wikipedia either. See:
http://en.wikipedia.org/wiki/Wikipedia:No_original_research#Routine_calculations
"someone who insisted that on Wikipedia the only possible meaning of Jewish is 'someone who a reliable source refers to as Jewish'"

That actually sounds pretty reasonable to me. If you want to use a more nuanced concept to refer to someone, you can always find a reliable source who has used that nuanced concept to refer to the person. Or you can do the OR somewhere else, and then someone else can use that to improve the article.
"If I pointed out that the speed of light in m/s is exact and the number of metres in a yard is exact and proceeded to give the exact value of the speed of light in imperial units"

For some time, they claimed that converting exact values as rational numbers (as opposed to conversions with a finite number of significant figures) is not a routine calculation. (To be honest, I'm not sure I remember what eventually happened. [goes to check] Oh, yeah. The footnote stayed because we did find a citation. Not that I'd normally consider the personal website of a cryptographer a reliable source, but still.)
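For what it's worth, that conversion really is routine arithmetic. Here is a minimal sketch in Python; the two defined constants are the exact SI speed of light and the exact international yard, and the variable names are mine:

```python
from fractions import Fraction

c_m_per_s = 299_792_458              # speed of light in m/s (exact by definition)
m_per_yard = Fraction(9144, 10_000)  # 1 yard = 0.9144 m (exact by definition)

c_yd_per_s = Fraction(c_m_per_s) / m_per_yard  # exact rational value
c_mi_per_s = c_yd_per_s / 1760                 # 1 mile = 1760 yards (exact)

print(c_yd_per_s)         # 374740572500/1143 yards per second, exactly
print(c_mi_per_s)         # 18737028625/100584 miles per second, exactly
print(float(c_mi_per_s))  # ~186282.397 miles per second
```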
Can you give specific examples of articles that are biased? Your comment and its parent made me curious about Wikipedia's quality :)
Well, here's a talk section of an article on a subject I know something about. This should give an idea of Wikipedia's process and what kind of content results from it:
http://en.wikipedia.org/wiki/Talk:Bayesian_network
Here’s another one:
http://en.wikipedia.org/wiki/Confounding
The very first sentence is wrong.
Well, this article is pretty bad:
http://en.wikipedia.org/wiki/Radiation_hormesis
but it used to be even worse. First of all, "that low doses of ionizing radiation (within the region and just above natural background levels) are beneficial" is hardly a hypothesis. A proper hypothesis would be "[specific mechanism] activates in the presence of ionizing radiation and has such-and-such consequences". That would, incidentally, be easy to refute if it were wrong, or to confirm if it were correct, and it would be interesting even if the effect were too weak to beat the direct damage from the radiation. I barely managed to get their proposed cause (some untapped powers of self-repair mechanisms) into the definition of the hypothesis, because the group watching the article loved having a hypothesis that low doses of radiation are beneficial, whatever the mechanisms may be; they don't care, they just propose that the effect is there. They don't care to propose that there's some self-repair mechanism activated by low doses of radiation, either; they want to propose that the effect is so strong there's an actual net benefit.
Also, note the complete absence of references to the radiation-cure quacks of the early 20th century, which fall under the definition given here. And good luck adding those, because there's some core group that just removes them as "irrelevant". The link selection is honed to make it look like something new and advanced that could only have been thought of via some cool counterintuitive reasoning, rather than the first thing we ever thought of when we discovered radiation: ooh, cool, a poison; we're not sure how it works, but it must be good in moderation. It then took about 60 years to finally discard this hypothesis and adopt the linear no-threshold (LNT) model.
And of course, don't even dream of adding the usual evolutionary counterarguments to the various allusions to untapped powers of the human body.
Note: radioactive remedies such as radon springs, radon caves, healing stones, etc. are a big business.
I doubt that selecting less than half a sentence from the lead paragraph of an article is a very careful approach to criticism.
This article actually looks pretty typical of Wikipedia articles on relatively obscure quackish biomedical ideas. It outlines what the “hypothesis” is, then makes clear that it is explicitly rejected by various people who have studied the matter. The subject doesn’t have enough history or enough attention from skeptics to get the kind of treatment that, say, the article on homeopathy does.
There are two completely junk charts (no scale!) in the article. Yuck!
When read carefully, the article makes clear it's talking about an effect that, even if it existed, would be very close to the noise threshold. It requires some statistical awareness — much more than the typical academic has, to say nothing of Wikipedians — to recognize that this is the same thing as saying "there's no reason to suspect an effect here."
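To illustrate the statistical point, here is a hypothetical toy simulation (nothing from the article; all numbers are made up): with a real effect at a tenth of the noise scale, a large fraction of no-effect studies look just as "positive" as a typical real-effect study.

```python
import random

random.seed(0)

def study(true_effect, noise=1.0, n=30):
    """Mean of n noisy measurements of a (possibly zero) effect."""
    return sum(random.gauss(true_effect, noise) for _ in range(n)) / n

tiny, null = 0.1, 0.0  # a 'real' effect at 10% of the noise scale, and none at all
tiny_runs = [study(tiny) for _ in range(1000)]
null_runs = [study(null) for _ in range(1000)]

# How often does a null study look at least as positive as the average tiny-effect study?
threshold = sum(tiny_runs) / len(tiny_runs)
false_alarms = sum(r >= threshold for r in null_runs) / len(null_runs)
print(f"null studies that look like the tiny effect: {false_alarms:.0%}")  # roughly 30%
```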
The primary bias problem here isn’t the article; it’s that the subject matter is made of bias, at least as far as I can tell. There’s only so many times an article can say “there are a few noisy experiments, but nobody who actually counts on radiation safety thinks this exists.”
That said, there’s one thing I was really surprised to find: the talk page doesn’t seem to be full of supporters saying that their hypothesis is being persecuted by the mainstream and skeptics calling them a bunch of names. And that suggests to me that improvement shouldn’t be too hard.
"It requires some statistical awareness — much more than the typical academic has, to say nothing of Wikipedians — to recognize that this is the same thing as saying 'there's no reason to suspect an effect here.'"

Is this really true? I'm not a part of academia in any way, nor do I have any math or statistical training beyond what's referred to as College Algebra, and I recognized immediately what the effect being close to the noise threshold meant.
I'm just wondering whether I have a better intuitive grasp of statistics than the typical academic (and what exactly you mean by academic... all teachers? professors? English professors? stats majors?).
Of course, I read LessWrong and understand Bayes because of it, so maybe that’s all it takes...
Yes. Most of the academy doesn’t use math or have any feel for it. Being forced to take algebra when you truly do not give a damn about it results in people learning enough to pass the test and then forgetting it forever.
"what exactly you mean by academic... all teachers? professors? English professors? stats majors?"

Academics are people who have jobs teaching or lecturing in tertiary education. In a US context, the lowest you can go and still be an academic is teaching at a community college. Alternatively, an academic is part of the community of scholars: people who actually care about knowledge as such rather than as a means to an end. Most of these people would not know statistics if it bit them on the ass. Remember, the world is insane.
Well, yeah, that's a very good way to describe it: made of bias. We have always believed that if something is bad in excess it must be good in moderation, and then proceeded to rationalize.
The topic is actually not very obscure. It pops up in any discussion of Chernobyl, Fukushima, cold-war nuclear testing, radon testing of households, and the like: there's always that 'scepticism' about choosing the linear no-threshold model as a prior.
The seriously bad bit is that the article entirely lacks historical background. When I look up an article on some pseudoscience, I want to see the history of that branch of pseudoscience. It's easier to reject something like this when you know that it was the first hypothesis we made about the biological effects of radiation (and the first hypothesis we would make about any new poison, until the 20th century).
With regard to the sanity of the talk page, that's what's most creepy. They get rid of the historical background on this thing, calmly and purposefully (I don't know if that's still the case; I'm going to try adding a link to the quack radiation cures again). There are honest pseudo-scientists who believe their stuff, and they put all the historical context up themselves. And then there are cases where you've got sane, rational people with an agenda whose behaviour is fairly consistent with knowing full well that it's a fraud.
Note: LNT makes sense as a prior based on the knowledge that radiation near the background level is a very minor contributor to the number of mutations. If you look at the big picture (the total number of mutations) for doses up to many times background, you're still varying it by a microscopic amount around some arbitrary point, and you absolutely should choose linear behaviour as the prior. Still, there are the 'sceptics' who want to choose zero effect at low doses as the prior, because the effects were never shown, and Occam's razor, blah blah blah.
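To spell out that argument (my own formalization, not from the thread): write the total mutation count as a function m(d) of the dose d. The background dose d_0 already sits somewhere on that curve, and a small excursion delta around it is governed by the first-order term:

```latex
% First-order (Taylor) expansion of the mutation count m(d) around background dose d_0:
\[
  m(d_0 + \delta) \;=\; m(d_0) + m'(d_0)\,\delta + O(\delta^2).
\]
% Unless m'(d_0) = 0 for some special reason, the default expectation for a small
% change in dose is a linear change in effect; that is exactly the LNT prior.
```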
Edit: ah, by the way, I wrote some of that description outlining the hypothesis, making it clearer that they start from the beneficial effects and then hypothesise some defence mechanisms strong enough to cancel the detrimental effect. That's completely backwards reasoning.
Overall, that sounds more like a bunch of folks who have heard of this cool, weird, contrarian idea and are excited by it than like people who are trying to perpetrate a fraud for personal benefit. Notably, the article doesn't mention any of the quack treatments you bring up above; there are no claims of persecution or conspiracy; there's not even much in the way of anti-epistemology.
It's a pseudoscience article from which they remove the clues by which one could recognize pseudoscience; that's what's bad.
Also, it should link to the past quack treatments of the 20th century. I'm going to try adding those again when I have time. It's way less cool and contrarian when you learn that it was popular nonsense back when radiation was first discovered.
If you added those before and they were reverted, then you should be discussing it on Talk and going for consensus.
That was ages ago (more than 5 years, I think); I don't even quite remember how it all went.
What's irritating about Wikipedia is that the rule against original research in articles spills over and becomes an attitude against any argumentation not based on appeal to authority. So you have the folks there who are curious about this hormesis concept; maybe they are actually just curious, not proponents or an astroturf campaign. But they are not interested in listening to any argument and thinking for themselves about whether it is correct. I don't know, maybe it's an attempt to preserve their own neutrality on the issue. In any case, it is incredibly irritating. It's half-curiosity.