True. I linked the article as an example of the idealistic journalist, one who is disappointed that his motives are distrusted by the public.
Your comment is well-received. I’m continuing to think about it and what it means for finding reliable media sources.
My impression of journalists has always been that they would have to be fairly idealistic about information, and about communicating that information, to be attracted to their profession. I also imagine that their goals are constantly antagonized by the goals of their bosses, who do want to make money, and it is probably the case that the most successful either sell out or find a trade-off that is not entirely ideal for them or for the critical reader.
I’ll link this article by Michael Volkmann, a disillusioned journalist.
I might need some recalibration, but I’m not sure.
I research topics of interest in the media, and I feel frustrated, angry and annoyed about the half-truths and misleading statements that I encounter frequently. The problem is not the feelings, but whether I am ‘wrong’. I figure there are two ways that I might be wrong:
(i) Maybe I’m wrong about these half-truths and misleading statements not being necessary. Maybe authors have already considered telling the facts straight and that didn’t get the best message out.
(ii) Maybe I’m actually wrong about whether these are half-truths or really all that misleading. Maybe I am focused on questions of fact and the meanings of particular phrases that are overly subtle.
The reason why I think I might need recalibration is that I don’t consider it likely that I am much less pragmatic, smarter or more accurate than all these writers I am critical of (some of them, inevitably, but not all of them; also, these issues are not that difficult intellectually).
Here are some concrete examples, all regarding my latest interest in the Ebola outbreak:
Harvard poll: Most recently, the HSPH-SSRS poll with headlines like “Poll finds US lacks knowledge about Ebola” or “Many Americans harbor unfounded fears about Ebola”. But when you look at the poll questions, they ask whether Americans are “concerned” about the risk, not what they believe the risk to be, and whether they think Ebola is spread ‘easily’. The poll didn’t appear to be about Americans’ knowledge of Ebola, but about how they felt about the knowledge they had. The question about whether Ebola transmits easily especially irks me, since everyone knows (don’t they?) that whether something is ‘easy’ is subjective.
“Bush meat”: I’ve seen it asserted in many places that people need to stop consuming bush meat in outbreak areas (for example). I don’t know much about how Ebola is spreading through this route, but wouldn’t it be the job of the media and epidemiologists to report on the rate of transmission from eating bats (I think there has only been one ground-zero patient in West Africa who potentially contracted Ebola from a bat) and weigh this against the role of local meat as an important food source (again, I don’t know; the media are to blame)? Just telling people to stop eating it would be ridiculous; hopefully the situation is not so extreme. Also, what about cooking rather than drying local meat sources? This seems a very good example of the media being unable to nuance a message in a reasonable way, but I allow that I could be wrong.
Media reports that “Ebola continues to spread in Nigeria” when the new Ebola cases at that time were all due to contact with the same person, and the patients had already been in quarantine. This seemed to hype up the outbreak when in fact the Nigerians were successfully containing it. Perhaps this is an example of being too particular and over-analyzing something subtle?
Ever using the phrase ‘in the air’ to describe how Ebola does or doesn’t transmit, because that phrase can mean completely different things to different people using or hearing it. Ebola is not airborne but can transmit within coughing distance.
The apparent internal inconsistency of claiming that a case of Ebola might come to the US, but that an outbreak cannot happen here. Some relative risk numbers would be helpful here.
All of these examples upset me to various degrees, since I feel they are evidence that people, even writers and the scientists they are quoting, are unable to think critically and message coherently about issues. How should I update my view so that I am less surprised, less argumentative, and less of a crazy-pedantic-fringe person?
A person infected with Ebola is very contagious during the period they are showing symptoms. The CDC recommends contact and droplet precautions.
Note the following description of (casual) contact:
Casual contact is defined as a) being within approximately 3 feet (1 meter) or within the room or care area for a prolonged period of time (e.g., healthcare personnel, household members) while not wearing recommended personal protective equipment (i.e., droplet and contact precautions–see Infection Prevention and Control Recommendations); or b) having direct brief contact (e.g., shaking hands) with an EVD case while not wearing recommended personal protective equipment (i.e., droplet and contact precautions–see Infection Prevention and Control Recommendations). At this time, brief interactions, such as walking by a person or moving through a hospital, do not constitute casual contact.
(Much more contagious than an STD.)
But Lumifer is also correct. People without symptoms are not contagious, and people with symptoms are conspicuous (e.g., Patrick Sawyer was very conspicuous when he infected staff and healthcare workers in Nigeria) and unlikely to be ambulatory. The probability of a given person in West Africa being infected is very small (2000 cases divided by approximately 20 million people in Guinea, Sierra Leone and Liberia), and the probability of a given person outside this area being infected is truly negligible. If we cannot contain the virus in the area, there will be a lot of time between the observation of a burning ‘ember’ (or 10 or 20) and any change in these probabilities: plenty of time to handle and douse any further hotspots that form.
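Spelling out the arithmetic behind ‘very small’, using the figures above: 2000 / 20,000,000 = 0.0001, or roughly 1 in 10,000.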
The worst case scenario in my mind is that it continues unchecked in West Africa or takes hold in more underdeveloped countries. This scenario would mean more unacceptable suffering and would also mean the outbreak gets harder and harder to squash and contain, increasing the risk to all countries.
We need to douse it while it is relatively small—I feel so frustrated when I hear there are hospitals in these regions without supplies such as protective gear. What is the problem? Rich countries should be dropping supplies already.
Sorry, realized I don’t feel comfortable commenting on such a high-profile topic. Will wait a few minutes and then delete this comment (just to make sure there are no replies.)
I don’t believe we disagree on anything. For example, I agree with this:
If you have equal numbers at +4 and +3 and +2, then most of the +4 still may not be the best, but the best is likely to be +4.
Are you talking about relative sample sizes, or absolute?
By ‘plenty of points’… I was imagining that we are taking a finite sample from a theoretically infinite population. A person decides on a density that represents ‘plenty of points’ and then keeps adding to the sample until they have that density out to a certain specified number of standard deviations.
Interesting post. Well thought out, with an original angle.
In the direction of constructive feedback, consider that the concept of sample size—while it seems to help with the heuristic explanation—likely just muddies the water. (We’d still have the effect even if there were plenty of points at all values.)
For example, suppose there were so many people with extreme height that some of them also had extreme agility (with an infinite sample size, we would even reliably find that the best players were also the tallest). So: some of the tallest people are also the best basketball players. However, as you argued, most of the tallest won’t also be the most agile, so most of the tallest are not the best (contrary to what would be predicted by their height alone).
In contrast, if average height correlates with average basketball ability, the other necessary condition for a basketball player with average height to have average ability is to have average agility—but this is easy to satisfy. So most people with average height fit the prediction of average ability.
Likewise, the shortest people aren’t likely to have the lowest agility, so the correlation prediction fails at that tail too.
Some of the ‘math’ is that it is easy to be average in all variables (say, (.65)^n where n is the number of variables) but hard to be standard deviations extreme in all variables (say, (.05)^n to be in the top 5 percent of each). Other math can be used to find the theoretical shape implied by these assumptions (e.g., is it an ellipse?).
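A quick simulation makes the effect concrete. This is a minimal sketch under my own toy assumptions (not anything from the post): height and agility are independent standard normals, ‘ability’ is simply their sum, and the sample is deliberately huge so the effect cannot be a sample-size artifact.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # large sample: 'plenty of points at all values'

# Toy model (an assumption for illustration): two independent traits,
# with ability defined as their sum.
height = rng.standard_normal(n)
agility = rng.standard_normal(n)
ability = height + agility

tallest = height > np.quantile(height, 0.99)   # top 1% by height
best = ability > np.quantile(ability, 0.99)    # top 1% by ability

# Fraction of the tallest who are also the best: well below 1,
# even though height strongly correlates with ability.
print((tallest & best).sum() / tallest.sum())

# Mean height of the best players: well above average, so the
# correlation prediction still holds 'on average' at the top.
print(height[best].mean())

# The (.65)^n vs (.05)^n point: easy to be middling in every one
# of n traits, hard to be extreme in all of them simultaneously.
for n_traits in (1, 2, 4):
    print(n_traits, 0.65 ** n_traits, 0.05 ** n_traits)
```

In this toy model only a minority of the very tallest clear the ability cut, even though the best players are, on average, unusually tall, which is exactly the asymmetry described above.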
I see. I was confused for a while, but in the hypothetical examples I was considering, a link between MMR and autism might be missed (a false negative, with 5% probability) but isn’t going to be found unless it is there (a low false-positive rate). Then Vanviver explains, above, that the canonical null-hypothesis framework assumes that random chance will make it look like there is an effect with some probability; that probability is the fixed false-positive rate, and it is the false-negative rate you can tune with your sample size.
I marginally understand this. For example, I can’t really zoom out and see why you can’t define your test so that the false positive rate is low instead. That’s OK. I do understand your example and see that it is relevant for the null-hypothesis framework. (My background in statistics is not strong and I do not have much time to dedicate to this right now.)
(I realize I’m confused about something and am thinking it through for a moment.)
OK, that sounds straightforward.
How does one know that the 60 studies are these ones, rather than the others (e.g., studies that were designed to show an effect with 95% probability, but failed to do so and thus got a negative result)?
Another way of asking my question, perhaps more clearly, is: how do we know if the 60 considered studies were testing the hypothesis that there was a link or the hypothesis that there was not a link?
Which 5%?
No, that 5% is the probability of a false positive, [...]
No, “that” 5% is the probability from my cooked-up example, which was the probability of a false-negative.
You’re saying (and Phil says also in several places) that in his example the 5% is the probability of a false positive. I don’t disagree, a priori, but I would like to know, how do we know this? This is a necessary component of the full argument that seems to be missing so far.
I don’t think that it’s necessarily suspicious in that, a priori, I wouldn’t have a problem with 60 tests all being negative even though they’re all only 95% confident.
The reason is that, depending on the nature of the test, the probability of a false negative might indeed be 5% while the probability of a false positive could be tiny. Suppose this is indeed the case, and let’s consider separately the two cases in which the true answer is ‘positive’ or ‘negative’.
(A) If the true conclusion is ‘positive’, any given test can yield a negative with 5% probability. (That test will be reported as a negative with 95% confidence, though one would expect most tests to yield the positive conclusion.)
(B) If the true conclusion is ‘negative’, any test that yields a negative will still be reported with 95% confidence because of the possibility of case (A). But if it is case (B), we should not expect any positive conclusion, even over 60 tests, because the false-positive rate is so low (rough numbers below).
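Rough numbers for the contrast (my own arithmetic, using the 5% figure from this thread): if 5% is the false-positive rate, then 60 independent tests of a truly absent effect should still produce about 60 × 0.05 = 3 spurious positives, and the chance of zero positives is (0.95)^60 ≈ 0.046, which is why 60 straight negatives looks suspicious under that reading. If instead 5% is the false-negative rate and the false-positive rate is near zero, then zero positives over 60 tests is exactly what case (B) predicts.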
I have no idea if this lack of symmetry is the case for the set of MMR and autism studies. (It probably isn’t—so I apologize that I am probably accomplishing nothing but making it more difficult to argue what is likely a true intuition.)
But it is easy to think of an example where this asymmetry would apply: consider that you are searching for someone you know well in a crowd, but you are not sure they are there. Consider a test to be looking for them over a 15-minute period, and estimate that if they are there, you are likely to find them during that period with 95% probability. Suppose they are there but you don’t find them in 15 minutes: that is a false negative, which occurs with 5% probability. Suppose they are not there and you do not find them: you again say they are not there with 95% confidence. But in this case where they are not there, even if you have 60 people looking for them over 15 minutes, no one will find them, because the probability of a false positive is essentially zero.
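Here is a minimal simulation of that crowd example, under the stated assumptions (each searcher independently finds the person with probability 0.95 when present; false sightings are taken to be essentially impossible; the numbers come from the example above, everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

N_SEARCHERS = 60
P_FIND_IF_PRESENT = 0.95  # per-searcher chance of success over 15 minutes
P_FALSE_SIGHTING = 0.0    # assumed: false positives essentially never happen

def sightings(person_present: bool) -> int:
    """Number of searchers (out of 60) who report finding the person."""
    p = P_FIND_IF_PRESENT if person_present else P_FALSE_SIGHTING
    return int(rng.binomial(N_SEARCHERS, p))

print(sightings(True))   # typically 55-60: a present person gets found
print(sightings(False))  # always 0: sixty negatives, all of them correct
```

When the person is absent, every run produces sixty negatives, even though each individual search had ‘only’ 95% power; a wall of negatives tells you something only once you know which error rate the 5% refers to.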
(I do see where you addressed false positives versus false negatives in several places, so this explanation was not for you specifically, since I know you are familiar with this. But it is not clear from the outset which is which in these studies, and it is fleshing this out that will ultimately make the argument more difficult, but more watertight.)
I preferred to count down since I would like to keep track of how many comments remain until I’ve successfully met my commitment. If I had just wanted to accumulate an unspecified number, I would have counted up.
…any particular reason why you asked?
Thanks for the feedback. I’ll keep track of my 50 comments more unobtrusively. (Comment #47.)
This is one of the strangest posts I have ever read on Less Wrong.
50 comments 50 words or less. #48
Depending on opportunities in your field, academia may provide favorable amounts of freedom, job security and impact. However, for the quintessential academic, academia is not a calculated optimization but a personality type:
It’s awesome to be supported while you learn and think, if that’s what you wanted to do anyway.
50 comments 50 words or less. #49
I had already noticed I needed to adjust in this direction. I’m going to try being more concise and see how it goes.
Specifically, 50 words or less for 50 comments. (#50)
I appreciate your responses, thanks. My perspective on understanding a concept was a bit different—once a concept is owned, I thought, you apply it everywhere and are confused and startled when it doesn’t apply. But especially in considering this example I see your point about the difficulty in understanding the concept fully and consistently applying it.
Volume conservation is something we learn through experience to be true; it’s not logically required, and there are probably some interesting materials that violate it at any level of interpretation. But there is an associated abstract concept, that the number of things might be conserved as you move them around, that we might measure comprehension of.
There are different levels at which this concept can be understood. It can be understood to work for discrete objects: the number of things staying the same always works for things like blocks, but not for fluids, which flow together, so the child might initially carve reality in this way. Eventually volume conservation can be applied to something abstract like unit cubes of volume, which liquids do satisfy.
Now that I see that the concept isn’t logically required (it’s a fact about everyday reality that we learn through experience) and that there are a couple of stages, I’m really skeptical that there is a physical module dedicated to this concept.
So I’ve updated. I don’t believe there are physical/neurological developments associated with particular concepts. (Abstract reasoning ability may increase over time, and may require particular neurological advancements, but these developments would not be tied with understanding particular concepts.)
Seems kind of silly now. Though there was some precedent with some motor development concepts (e.g., movements while learning to walk) being neurologically pre-programmed.
This seems an appropriate place to observe that while watching my children develop from very immature neurological systems (little voluntary control, jerky, spasmodic movements that are cute but characteristic of very young babies) to older babies that could look around and start learning to move themselves, I was amazed by how much didn’t seem to be pre-programmed and I wondered how well babies could adapt to different realities (e.g., weightlessness or different physics in simulated realities). Our plasticity in that regard, if my impression is correct, seems amazing. Evolution had no reason to select for that. Unless it is also associated with later plasticity for learning new motor skills, and new mental concepts.
I think: evidence that the universe was designed with some degree of attention to our well-being. If the universe is unexpectedly kind to us, or if we are especially well taken care of, that would be evidence of a loving God.
I’m conflicted about which universe we’re in. Things could certainly be worse, but they’re also not very good. Is life more tolerable to us than we’d expect by random chance?
But for sure, just look at the outcome. Considering intention only muddles things, for three reasons:
(1) it is the outcome that we’re concerned with; “pretending” versus “sincere” has no meaning if there’s no distinguishing effect on observation
(2) asking about pretending is really asking about whether the evidence could be ‘tricking’ us; it is always a possibility that the evidence leads us to the wrong conclusion with some probability, or that induction over time doesn’t apply
(3) even if the creator is non-sentient, we can still ask if the universe is ‘us-loving’ or not