The person who taught your epidemiology course is incorrect: As Ilya correctly points out, differential misclassification can certainly occur even in a prospective cohort study. Unfortunately, this exact confusion is very common in epidemiology.
Some reading on how to reason about mismeasurement bias using causal graphs is available in Chapter 9 of the Hernan and Robins textbook, which is freely available at http://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/ . The chapter contains all the relevant principles, but doesn’t explicitly answer your questions. I also have a set of slides that I use for teaching this material; these slides contain some directly relevant examples and graphs. I can send these to you if you contact me at ahuitfeldt@mail.harvard.edu.
The distinction between “cohort” and “case-control” is not relevant here; the professor is using it as shorthand for retrospective/prospective. The most useful definition of “prospective” and “retrospective” is that in a prospective study, the exposure variable is measured before the outcome variable is instantiated. This is a useful definition because under it, there cannot be a directed path from the outcome to the measurement error on the exposure, which reduces the potential for bias. However, there can still be common causes of the outcome and the measurement error on the exposure, and these will result in differential misclassification of the exposure (see the sketch below).
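To make this concrete, here is a minimal sketch of the graphical argument in Python, using networkx’s d-separation check (this assumes a networkx version that provides nx.d_separated, roughly 2.8 onwards; newer releases call it is_d_separator). The node names are my own illustrative choices: A for the true exposure, Astar for the mismeasured exposure, UA for the measurement error on A, Y for the outcome, and L for a hypothetical common cause of UA and Y, such as disease severity.

```python
import networkx as nx

# Prospective study with independent measurement error:
# no arrow from Y into UA, and no common cause of Y and UA.
g_nondiff = nx.DiGraph([("A", "Y"), ("A", "Astar"), ("UA", "Astar")])
print(nx.d_separated(g_nondiff, {"UA"}, {"Y"}, set()))  # True -> nondifferential

# Prospective study with a common cause L of the error and the outcome:
# Y still cannot cause UA (the exposure was measured first), but L opens
# the path UA <- L -> Y, so the error is associated with the outcome.
g_diff = nx.DiGraph([("A", "Y"), ("A", "Astar"), ("UA", "Astar"),
                     ("L", "UA"), ("L", "Y")])
print(nx.d_separated(g_diff, {"UA"}, {"Y"}, set()))  # False -> differential
```

With independent error, UA is d-separated from Y and the misclassification is nondifferential; adding the common cause L makes the error associated with the outcome, even though the outcome cannot cause the error.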
I think learning about d-separation would be very valuable for thinking about bias and related issues in epi studies; good work on being proactive about it!
Thank you, I hope I indeed follow through on it! My interest in epi stems from an interest in stats, which was sparked by reading about Bayesian statistics through LW and being utterly overwhelmed by it!
Thanks; I was worried I was missing something. Incidentally, I wrote something on missing data under MNAR that might interest you; it generalizes to some measurement error contexts.