My intuition is that people get confused about whether they’re measuring the risk to themselves or the risk to society from their Covid decisions.
It seems like a lot of people I know have decided that they’d rather accept the increased risk of serious injury or death than have a substantially reduced quality of life for a year or two. Ok, fine. (Although it’s hard to measure small risks.)
The other problem is that even if a person is accepting the risk for themselves, I’m not sure they’re processing the risk that somebody else gets seriously ill or dies.
It might be that public pressure focusing on the risk to each of us is obscuring some of the risk to other people, or it might just be a collective action problem—if a mayor encourages other people to be vigilant about Covid from his beach vacation in Mexico, well, that does much less damage as long as everybody else follows his advice.
Well maybe, but are you thinking of the fact that (trivially) P(you infect someone else) < P(you are infected)?
But if infected, you may infect several other people, and the number of people who are indirectly infected but wouldn’t have been had you been more careful may be very large.
That’s the point of the post. Given a large number of contacts, P(infecting at least one of them) > P(you are infected).
Let’s illustrate. Suppose P1 = P(you are infected AND (you are asymptomatic OR you are pre-symptomatic)).
Let P2 = P(infecting any given one of your contacts) = P2′ · P1, where P2′ is the probability of infection per contact.
Then P3 = P(infecting at least one of your N contacts) = 1 − (1 − P2)^N, provided none of the N contacts are themselves already infected.
And for P3 > P1 it is always possible to solve for N.
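For concreteness, the quantities defined above can be computed numerically. This is just a sketch of the commenter’s formula as written; the values for P1, P2′, and N are made-up illustrative assumptions, not figures from the thread.

```python
# Sketch of the derivation above, using assumed illustrative values.
P1 = 0.02        # assumed: P(infected AND asymptomatic or pre-symptomatic)
P2_prime = 0.10  # assumed: per-contact probability of transmission
N = 50           # assumed: number of contacts

# P2: probability of infecting any given one of your contacts.
P2 = P2_prime * P1

# P3: probability of infecting at least one of N contacts,
# per the formula 1 - (1 - P2)^N as given in the comment.
P3 = 1 - (1 - P2) ** N

print(f"P1 = {P1:.4f}")
print(f"P2 = {P2:.4f}")
print(f"P3 = {P3:.4f}")
print(f"P3 > P1? {P3 > P1}")
```

With these numbers, P3 comes out near 0.095, which exceeds the assumed P1 of 0.02. Note this simply evaluates the formula as the comment states it; whether treating each contact’s infection as an independent event is the right model is exactly what the reply below takes issue with.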
Sorry for the late reply. I’m assuming you need to be “infected” in order to infect someone else (define “infected” so that this is true). Since being infected is a necessary precondition to infecting someone else,
P(you infect someone else) ≤ P(you are infected),
and it’s clear you can replace “≤” by a strict “<”.
This is basic probability theory. I can’t follow your notation, but I suspect that you are using some different definition of “infected” and/or confusing probabilities with expected values.
I’m not good at expressing it formally, but I was thinking more:
Expected total utility to my friend of going to a bar with her granddaughters > expected total utility to her of staying home,* but the expected utility to society of her going to the bar with her grandkids is negative.
As long as enough other people stay home, on the other hand, the social costs of her going out are not as high as they would be if more people went out.
On the third hand, even if a bunch of people going out increases the cost (to both herself and society) of her going out, watching other people defect makes her feel like a sucker.
*She’s in her late 70s, and my feeling is she’s done the math and figures she may not have that many good years left, so she didn’t want to miss out on one, even if staying home would have increased her life expectancy.