Thanks for all your effort on this (and the others).
A small point on the following:
I find that to be an interesting study result in its own right. People simply don’t believe such tests, or don’t even care enough to look at them in context.
That’s one plausible hypothesis, but not what I’d think of as the obvious one: if I’m shown a positive test for X and asked “Do you think you had X?” it’s plausible that I take that to mean “Do you think you had X (for reasons other than the positive test I just showed you)?”. I imagine quite a bit depends on the wording.
It’s quite natural in such situations to assume the questioner wants new, non-obvious information. Any inference based on the test result they’ve just shown you doesn’t qualify—unless you assume they’re aiming to test your reasoning rather than your health. (Of course, it’d be handy if we lived in a world where questionnaires contained only precisely worded questions that carefully avoided ambiguity, and respondents interpreted them just as precisely...)
I’d want to ask the question before and after showing the test result.
I’ve said elsewhere that my hypothesis for this discrepancy is that people think of “had COVID” in terms of a set of experiences, not in terms of viruses/antibodies inside them. If I had a completely asymptomatic COVID infection, did I “have COVID”? I’d probably say yes, but I think it’s very reasonable to say no. The people who had a negative test and said that they did have COVID are slightly more surprising, but perhaps they contracted some other disease and mistakenly attributed all symptoms to COVID.
I agree with the broader point that people give weird answers to survey questions, and that seemingly immaterial differences in survey methodology can yield wildly different answers. I once streamlined a survey by removing one click per question, and respondents answered three times as many questions on average.