Fairly often I will come across something on LW which is surprising or seems wrong to me, and I will often do something of an epistemic spot check on it.
If I conclude that the statement that I was investigating was incorrect then this will turn into a comment/review/post.
If the statement was correct then I’ve learnt something new.
But it slightly worries me that I never publish these findings. An analogy could be made to publication bias in scientific literature.
I feel like I should write a comment just to say “I checked X and found that it was true”. In addition to avoiding a kind of publication bias I suspect that if something is surprising to me then it will be to some other people and giving independent confirmation seems prosocial.
On the other hand writing on someone’s post “Yes, you are correct about that” comes across as quite arrogant, as if they needed my confirmation before they could be considered correct.
My proposed solution to this is to link to this comment when I do it and hope that this explains why I’m doing it.
I suspect I’m not the only one this happens to, so in the long run it might be beneficial for this to become a community norm, and then I wouldn’t have to link to this.
If anyone has a better solution or thinks I should just not bother with publishing these confirmations then I’d be interested to hear it.
I’m dismayed to hear that you think publicly double-checking someone’s claims might be too arrogant.
I’m more thinking of perceptions.
I should say that I don’t know anyone from LW IRL and am sometimes more worried about accidentally violating norms. Anywhere other than LW I’d be confident that doing this kind of thing would be seen as a social faux-pas.
Obviously I’d like to think that LW would not have this issue. On the other hand I know that I’m communicating with humans so social reactions don’t always work like we’d want them to.
Edit: actually I’m not sure arrogant is the right word—more like just weird to write it up if your findings confirm the original claim; I don’t think I’ve seen this norm practiced in general.
I would love to read more of such double-checking-this-claim by Bucky.
If the only issue is tone, you could write something like: ‘Initially, I was confused/surprised by the core claim you made but reading this, this, and that [or thinking for 15 minutes/further research] made me believe that your position is basically correct’. This looks quite
Isn’t that what the vote up button is for?
Hmm, I don’t think that’s what an upvote generally represents. An upvote is more of a general “I’d like to see more like this” rather than a specific “I researched this point and found it to be correct”.
Oxford / AstraZeneca vaccine effectiveness
(numbers from here and here, numbers inferred are marked with an asterisk)
Some interesting results from the latest vaccine trial. The treatment group was split in two, one of which received 2 full doses of the vaccine, the other received a half dose followed by a full dose (separated by a month in both cases).
In the control group there were 101 COVID infections in ~11,600* participants.
With 2 full doses there were 28* infections in 8,895 participants.
In the half first dose condition there were 2* infections in 2,741 participants.
So does having a low first dose actually improve the immune response?
The best I can figure out, the evidence gives roughly an 8:1 Bayes factor in favour of the “low first dose better” hypothesis vs the “it doesn’t matter either way” hypothesis.
Not sure what a reasonable prior is on this, but I don’t think this makes the “low first dose is better” hypothesis a clear winner on the posterior.
On the other hand the evidence is 14:1 in favour of “it doesn’t matter either way” vs “both full doses is better” so at the moment it probably makes sense to give a half dose first and have more doses in total.
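For concreteness, here is a rough sketch of one way the first of these comparisons could be set up in code. This is not necessarily the exact calculation behind the 8:1 figure above: it only uses the split of the 30 treatment-arm infections, and the uniform prior over that split under the “low first dose better” hypothesis is a modelling choice, so the number it prints won’t match 8:1 exactly.

```python
# Rough sketch of one way to compare "low first dose better" against
# "it doesn't matter", using only the split of treatment-arm infections.
# The uniform prior over the split under H1 is a modelling choice, so the
# Bayes factor printed here need not equal the 8:1 quoted above.
from scipy.stats import beta, binom

full_infections, full_n = 28, 8895   # two full doses (inferred counts)
half_infections, half_n = 2, 2741    # half dose then full dose (inferred counts)

total = full_infections + half_infections
# Share of treatment-arm participants in the half-dose arm: if dosing makes
# no difference, each infection lands in that arm with this probability.
theta0 = half_n / (full_n + half_n)

# H0 ("doesn't matter"): infections split in proportion to arm sizes.
m0 = binom.pmf(half_infections, total, theta0)

# H1 ("low first dose better"): the half-dose arm's share theta is below
# theta0, with a uniform prior on [0, theta0]. Averaging the binomial pmf
# over theta in [0, 1] gives 1/(total+1); restricting to [0, theta0]
# rescales by the Beta(k+1, n-k+1) mass below theta0.
m1 = (1 / theta0) * (1 / (total + 1)) * beta.cdf(theta0, half_infections + 1,
                                                 full_infections + 1)

print(f"Bayes factor, 'low first dose better' vs 'doesn't matter': {m1 / m0:.1f}")
```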
I’ll be interested to see what happens as the data matures—the numbers above are apparently based on protection levels 2 weeks after receiving the second dose.
Derek Lowe writes:
My own wild guess is that perhaps the two-full-dose protocol raised too many antibodies to the adenovirus vector itself, and made the second dose less effective. This has always been a concern with the viral-vector idea.
It sounds like there was a medium-sized prior that the lower first dose would be better—why else would they have tested it?
I was confused as to why they did this too—alternative guesses I had were to increase the number of available doses or to decrease side effect severity.
However the site you link to has been updated with a link to Reuters, who quote AstraZeneca as saying it was an accident—they miscalculated and only noticed when side effects were smaller than predicted.
I guess the lesson being: Never attribute to n-dimensional chess that which can equally be attributed to stupidity.
In my 2020 predictions I mentioned that I found the calibration buckets used on e.g. SSC (50%, 60%, 70%, 80%, 90% and 95%) difficult to work with at the top end, as there is a large ratio between the odds of adjacent buckets (2.25 between 80% and 90%, 2.11 between 90% and 95%). This means that when I want to say 85% both buckets are a decent way off.
I suggested at the time using 50%, 65%, 75%, 85%, 91% and 95% to keep the ratios between buckets fairly similar across the range (maximum 1.89) and to work with relatively nice round numbers.
Alternatively I suggested not having a 50% bucket as answers here don’t help towards measuring calibration and you could further reduce the gaps between buckets without increasing the number of buckets.
At the time I couldn’t come up with nice round percentage values which would keep the ratios similar. The best numbers I got were 57%, 69%, 79%, 87%, 92%, 95% (max ratio of 1.78), which seemed hard to work with as they’re difficult to remember.
An alternative scheme I’ve come up with is not to use percentage values but to use odds. The buckets would be:
4:3
7:3
4:1(=12:3)
7:1
4:⅓(=12:1)
7:⅓(=21:1)
The percentage equivalents are similar to the scheme mentioned previously, with a similar maximum ratio between adjacent buckets. I prefer this as it has a simple pattern to remember and adjacent buckets are easy to compare (e.g. for every 3 times X doesn’t occur, would I expect X to occur 7 or 12 times?).
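As a quick check of that claim, here is a small script (my own verification, not part of the original predictions post) that prints each bucket’s percentage equivalent and the odds ratio to the previous bucket:

```python
# Quick check of the odds buckets: their percentage equivalents and the
# ratio in odds between adjacent buckets.
from fractions import Fraction

# The six buckets expressed as odds (4:3, 7:3, 4:1, 7:1, 12:1, 21:1)
buckets = [Fraction(4, 3), Fraction(7, 3), Fraction(4, 1),
           Fraction(7, 1), Fraction(12, 1), Fraction(21, 1)]

previous = None
for odds in buckets:
    pct = float(odds / (odds + 1)) * 100
    note = f", ratio vs previous bucket {float(odds / previous):.2f}" if previous else ""
    print(f"{odds.numerator}:{odds.denominator} -> {pct:.1f}%{note}")
    previous = odds
```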
I’ve tried this out and found it nice to work with (not initially but after getting used to it) but that may just be a personal thing.
I was always a little underwhelmed by the argument that elections grant a government legitimacy—it feels like it assumes the conclusion.
A thought occurred to me that a stronger argument is that elections are a form of common knowledge building to avoid insurrections.
The key distinction from my previous way of thinking is that it isn’t the element of choice in elections which is important, but that everyone knows how people feel on average. Obviously a normal election achieves both the choice and the knowledge, but you could have knowledge without choice.
For example, suppose people don’t vote on a new government but just indicate anonymously whether they would support an uprising (not necessarily violent) against the current government, and based on the result the government can choose to step down or not. This gives common knowledge without a choice.
I suspect this isn’t an original thought and seems kinda obvious now that I think about it—just a way of looking at it that I hadn’t considered before.
Fully agreed. Elections are part of the ceremony that keeps populations accepting of governance. Things are never completely one or the other, though—in the better polities, voting actually matters as a signal to the government as well.
I think Natural Reasons by Susan Hurley made the same argument (I don’t own a copy so I can’t check).
Looking at how many COVID-19 deaths involve co-morbid conditions.
This report gives an idea of the case fatality rate for various conditions:
CFR was elevated among those with preexisting comorbid conditions—10.5% for cardiovascular disease, 7.3% for diabetes, 6.3% for chronic respiratory disease, 6.0% for hypertension, and 5.6% for cancer.
This report has base rates of each condition in a sample of ~1000 patients.
Hypertension has a base rate of 14.9%, so 0.9% of patients had hypertension and died. At the time of that first report the total CFR was 2.3% so hypertension was present in 39% of deaths.
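To make the arithmetic explicit, here is the hypertension calculation written out (the other conditions follow the same pattern, substituting their own CFRs and base rates):

```python
# The hypertension arithmetic spelled out (other conditions work the same way,
# using their own CFR and base rate from the two reports).
cfr_hypertension = 0.060   # CFR among patients with hypertension (first report)
base_rate = 0.149          # fraction of patients with hypertension (second report)
overall_cfr = 0.023        # overall CFR at the time of the first report

died_with_hypertension = cfr_hypertension * base_rate    # ~0.9% of all patients
share_of_deaths = died_with_hypertension / overall_cfr   # ~39% of deaths

print(f"{died_with_hypertension:.1%} of patients had hypertension and died")
print(f"Hypertension present in {share_of_deaths:.0%} of deaths")
```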
Correspondingly diabetes is present in 23% of deaths.
Cardiovascular disease (grouping 3 conditions in the second paper) is also present in 23% of deaths.
Obviously there’s probably a huge overlap between hypertension and CVD but even accounting for that hypertension is a large factor.
Cancer is present in 5% of deaths.
Some comorbid conditions are not in both papers so I can’t calculate those.
The no comorbidity CFR is listed at 0.9% (39% of deaths). This is much higher than the news reports I’ve heard seem to suggest, although I guess “generally unhealthy” probably doesn’t come up as a comorbid condition in the papers.
I’m not sure if there are better sources for this stuff.
10 months ago there was the coronavirus justified practical advice thread.
This resulted in myself (and many many others) buying a pulse oximeter.
Interested to hear that these are now being provided to people in the UK who have COVID but are not in hospital and who are in a high risk category.
I note that there was some discussion on LW about how useful they were likely to be as people would probably notice difficulty in breathing which usually comes with low oxygen levels. It turns out that with COVID oxygen levels can get low without people noticing—this was mentioned later on LW (April).
I realised something a few weeks back which I feel like I should have realised a long time ago.
The size of the human brain isn’t the thing which makes us smart, rather it is an indicator that we are smart.
A trebling of brain size vs a chimp is impressive but trebling a neural network’s size doesn’t give that much of an improvement in performance.
A more sensible story is that humans started using their brains more usefully (evolutionarily speaking) so it made sense for us to devote more of our resources to bigger brains for the marginal gains that would give.
As I said, I feel like I should have known this for ages. I had a cached thought that humans’ big brains (and other things) cause us to be smarter and had never re-examined the thought. Now I think that the “and other things” is doing almost all of the heavy lifting and the size is more incidental to the process.