That last bit (which, I confess, I hadn’t actually noticed before) doesn’t say that “wearing sunscreen actually tends to increase risk of skin cancer”.
I agree. I think I read into it a bit on my first reading, when I was composing the post. But I still found the interpretation probable when I reflected on what the authors might have meant.
In any case, based on what you were able to find, I concede that it’s less probable (than the alternatives). The interviewee probably didn’t have any positive information to the effect that getting someone to wear sunscreen causes them to stay out in the sun sufficiently longer for the skin cancer risk to actually go up on net. So my initial wording appears to be unsupported by the data, as you originally claimed.
But I don’t think your interpretation passes the smell test. If whoever wrote that page really believed that the overall effect of wearing sunscreen was to increase the risk of skin cancer (via making you stay out in the sun for longer), would they have said “We recommend sunscreen for skin cancer prevention”?
It’s pretty plausible on my social model.
To use an analogy: telling people to eat less is not a very good weight-loss intervention, at all. (I think. I haven’t done my research on this.) More importantly, I don’t think people think it is. However, people give that advice all the time, because it is true that people would lose weight if they ate less.
My hypothesis: when giving advice, people tend to talk about ideal behavior rather than realistic consequences.
More evidence: when I give someone advice for how to cope with a behavior problem rather than fixing it, I often get pushback like “I should just fix it”, which seems to be offered as an actual argument against my advice. For example, if someone habitually stopped at McDonald’s on the way home from work, I might suggest driving a different way (avoiding that McDonald’s), so that the McDonald’s temptation doesn’t kick in when they drive past it. I might get a response like “but I should just not give in to temptation”. Now, that response is sometimes valid (like if the only other way to drive is significantly longer), but I think I’ve gotten responses like that when there’s no other reason except that the person wants to see themselves as a virtuous person, and any plan which accounts for their un-virtue feels like admitting defeat.
So, if someone believes “putting on sunscreen has a statistical tendency to make people stay out in the sun longer, which is net negative wrt skin cancer” but also believes “all else being equal, putting on sunscreen is net positive wrt skin cancer”, I expect them to give advice based on the latter rather than the former, because when giving advice they tend to model the other person as virtuous enough to overcome the temptation to stay out in the sunlight longer. Anything else might even be seen as insulting to the listener.
(Unless they are the sort of person who geeks out about behavioral economics and such, in which case I expect the opposite.)
I think your modified wording is better, and wonder whether it might be improved further by replacing “correlated” with “associated” which is still technically correct (or might be? the Australian study above seems to disagree) and sounds more like “sunscreen is bad for you”.
I was really tempted to say “associated”, too, but it’s vague! The whole point of the example is to say something precise which is typically interpreted more loosely. Conflating correlation with causation is a pretty classic example of that, so, it seems good. “Associated” would still serve as an example of saying something that the precise person knows is true, but which a less precise person will read as implying something different. High-precision people might end up in this situation, and could still complain “I didn’t say you shouldn’t wear sunscreen” when misinterpreted. But it’s a slightly worse example because it doesn’t make the precise person sound like a precise person, so it’s not overtly illustrating the thing.
I’m not sure what distinction you’re making when you say someone might believe both of
putting on sunscreen has a statistical tendency to make people stay out in the sun longer, which is net negative wrt skin cancer
all else being equal, putting on sunscreen is net positive wrt skin cancer
If the first means only that being out in the sun longer is negative then of course it’s easy to believe both of those, but then “net negative” is entirely the wrong term and no one would describe the situation by saying anything like “wearing sunscreen actually tends to increase risk of skin cancer”.
If the first means that the benefit of wearing sunscreen and the harm of staying in the sun longer combine to make something net negative, then “net negative” is a good term for that and “tends to increase risk” is fine, but then I don’t understand how that doesn’t flatly contradict the second proposition.
What am I missing?
Sorry, here’s another attempt to convey the distinction:
Possible belief #1 (first bullet point):
If we perform the causal intervention of getting someone to put on sunscreen, then (on average) that person will stay out in the sun longer; so much so that the overall incidence of skin cancer would be higher in a randomly selected group which we perform that intervention on, in comparison to a non-intervened group (despite any opposing beneficial effect of sunscreen itself).
I believe this is the same as the second interpretation you offer (the one which is consistent with use of the term “net”).
Possible belief #2 (second bullet point):
If we perform the same causal intervention as in #1, but also hold fixed the time spent in the sun, then the average incidence of skin cancer would be reduced.
This doesn’t flatly contradict the first bullet point, because it’s possible that sunscreen is helpful when we keep the amount of sun exposure fixed, but that the behavior changes of those wearing sunscreen change the overall story.
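To make the compatibility concrete, here’s a toy back-of-the-envelope sketch in Python. Every number in it is invented purely for illustration (the per-hour risks, the hours of exposure, the size of the behavioral effect); the only point is that beliefs #1 and #2 can both come out true at once.

```python
# Toy model with invented numbers: sunscreen lowers the per-hour skin cancer risk,
# but the "put on sunscreen" intervention also (hypothetically) doubles time in the sun.

RISK_PER_HOUR_BARE = 0.010       # hypothetical per-hour risk without sunscreen
RISK_PER_HOUR_SUNSCREEN = 0.006  # hypothetical per-hour risk with sunscreen

HOURS_NO_INTERVENTION = 2.0      # hypothetical exposure if we don't intervene
HOURS_AFTER_INTERVENTION = 4.0   # hypothetical exposure after the intervention,
                                 # including the behavioral effect

# Belief #2: hold time in the sun fixed -> sunscreen reduces risk.
risk_fixed_bare = RISK_PER_HOUR_BARE * HOURS_NO_INTERVENTION            # 0.020
risk_fixed_sunscreen = RISK_PER_HOUR_SUNSCREEN * HOURS_NO_INTERVENTION  # 0.012

# Belief #1: intervene without holding exposure fixed -> net risk goes up.
risk_intervened = RISK_PER_HOUR_SUNSCREEN * HOURS_AFTER_INTERVENTION    # 0.024
risk_non_intervened = RISK_PER_HOUR_BARE * HOURS_NO_INTERVENTION        # 0.020

print(f"exposure held fixed:     {risk_fixed_sunscreen:.3f} < {risk_fixed_bare:.3f}  (sunscreen helps)")
print(f"exposure not held fixed: {risk_intervened:.3f} > {risk_non_intervened:.3f}  (net risk goes up)")
```

With those made-up numbers, the held-fixed comparison favors sunscreen while the unconditioned comparison goes the other way, which is exactly the combination of beliefs above.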
OK, yes: I agree that that is a possible distinction and that someone could believe both those things. And, duh, if I’d read what you wrote more carefully then I would have understood that that was what you meant. (“… because when giving advice they tend to model the other person as virtuous enough to overcome the temptation to stay out in the sunlight longer.”) My apologies.