Thanks for sharing.
After reading the screenshots (linked in the Google Docs link) of Alice’s conversation with Drew, I mostly don’t agree with the claim that Alice was lying*. For one thing, the December 15 exchange with Drew indicates that he ultimately did not bring her dinner because he didn’t want to get fast food or drive more than 12 minutes away (he did later offer to bring her a salad from the place he ended up going to, which she declined). This is a pretty different picture from the conclusion one would draw from just reading the screenshot embedded in the post, which makes it sound like Alice did receive her requested impossible burger.
I read the linked screenshot as Alice communicating that she is basically out of food she can eat, making a few requests to get food that met her dietary restrictions, encountering social friction, and giving up.
(I also think it normally isn’t an employer’s responsibility to deliver food to their sick employee, but also normally employees don’t live with their employers in a foreign country with no other support network, which to me changes the picture substantially).
*edit: That is, the screenshots don’t convince me that this claim in particular (quoted in the post) was a lie: “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.”
also edit: also see KPier’s more thorough comment here
I haven’t looked into this, no. I’m quite confident that I have COVID; I have all the classic symptoms (fever, cough, shortness of breath). I also re-tested using an antigen test today with a nasal-only swab and got a positive.
As a datapoint, I tested positive on 3 antigen tests of two different kinds and negative on a Cue test in the same hour, on my first day of having COVID. My suspicion is that this was because I swabbed my throat for the antigen tests, but not for the Cue test because I wasn’t sure if saliva worked for Cue. As further supporting evidence, I lightly brushed my throat for antigen test #1 and got an extremely faint line, and then vigorously swabbed it less than an hour later and got a very clear dark line (both on BinaxNOW tests).
Edit: These tests were all performed yesterday, May 2 2022.
If I take a “non-sentient” chicken and cut off its wings, and I watch it as it helplessly tries to fly repeatedly, but is unable to, this strikes me as a form of harm to the chicken and its values even if the chicken is not having a subjective experience of its condition.
I’m curious how you would distinguish between entities that can be harmed in a morally relevant way and entities that cannot. I use subjective experience to make this distinction, but it sounds like you’re using something like—thwarted intentions? telos-violation? I suspect we’d both agree that chickens are morally relevant and (say) pencils are not, and that snapping a pencil in half is not a morally-relevant action. But I’m curious what criterion you’re using to draw that boundary.

One could make similar inquiries into ‘dissociation’. If a person is regularly dissociated and doesn’t feel things very intensely, does it make it more okay to hurt them?
This is an interesting point; will think about it more.
Most people are tested for cancer because they have one or more symptoms consistent with cancer. So the base rate of 1% “for the patient’s age and sex” isn’t the correct prior, because most of the people in the base rate have no symptoms that would provoke a test.
To clarify, the problem that Gigerenzer posed to doctors began with “A 50-year-old woman, no symptoms, participates in a routine mammography screening”. You’re right that if there were symptoms or other reasons to suspect having cancer, that should be factored into the prior. (And routine mammograms are in fact recommended to all women of a certain age in the US.)
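(In case the arithmetic is useful to anyone: here’s a minimal sketch of the screening calculation. The specific numbers, 1% prevalence, 90% sensitivity, and a 9% false-positive rate, are the figures commonly attributed to Gigerenzer’s version of the problem; treat them as my assumptions rather than quotes from the post.)

```python
# Minimal sketch of the mammography screening problem (illustrative numbers).
prior = 0.01                # assumed base rate of cancer for this age/sex group
sensitivity = 0.90          # assumed P(positive test | cancer)
false_positive_rate = 0.09  # assumed P(positive test | no cancer)

p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
p_cancer_given_positive = sensitivity * prior / p_positive
print(f"P(cancer | positive mammogram) ~ {p_cancer_given_positive:.1%}")  # ~9.2%
```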
We really need a computation whose result is a probability.
I agree—it would be ideal to have a way to precisely calculate your prior odds of having COVID. I try to estimate this using microCOVID to sum my risk based on my recent exposure level, the prevalence in my area, and my vaccination status. I don’t know a good way to estimate my prior if I do have symptoms.
My prior would just be a guess, and I don’t see how multiplying a guess by 145x is helpful.
I don’t fully agree with this part, because regardless of whether my prior is a guess or not, I still need to make real-world decisions about when to self-isolate and when to seek medical treatment. If I have a very mild sore throat that might just be allergies, and I stayed home all week, and I test negative on a rapid test, what should I do? What if I test negative on a PCR test three days later? Regardless of whether I’m using Bayes factors, test sensitivity, or just my intuition, I’m still using something to determine at which point it’s safe to go out again. Knowing the Bayes factors for the tests I’ve taken helps that reasoning be slightly more grounded in reality.
Edit: I’ve updated my post to make it clearer that the Gigerenzer problem specified that the test was a routine test on an asymptomatic patient.
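(To make the “multiplying a guess by 145x” step above concrete, here’s a minimal sketch of the odds-form update. The 145 Bayes factor is the figure under discussion; the 1% prior and the 0.3 negative-test Bayes factor are made-up illustrative numbers, not values from the post.)

```python
def update(prior_prob, bayes_factor):
    """Convert a prior probability to odds, multiply by the Bayes factor, convert back."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

prior = 0.01  # guessed prior, e.g. from a microCOVID-style exposure estimate

print(update(prior, 145))  # positive test with a 145x Bayes factor: ~0.59
print(update(prior, 0.3))  # negative test with a hypothetical 0.3 Bayes factor: ~0.003
```

Even with a rough prior, the update at least tells you whether a result should change your behavior or barely move you.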
An earlier draft of this actually mentioned vaccination status, and I only removed it for sentence flow reasons. You’re right that vaccination status (or prior history of COVID) is an important part of your prior estimate, along with prevalence in your area, and your activities/level of exposure. The microCOVID calculator I linked factors in all three of these. I’ve also edited the relevant sentence in the “Using Bayes factors” section to mention vaccination status.
Wow, that is surprising, thanks for sharing. Am I reading correctly that you got no positive NAAT/PCR tests, and only got positives from antigen tests?
I took 13 rapid tests in total, 5 of which were positive, and 4 of these positive tests were from the same brand. 4 out of 5 of the tests of that brand that I have taken were positive.
Would you be up for sharing what brand that was?
I don’t yet know enough about what causes false positives and false negatives in either antigen tests or NAATs to speculate much, but I appreciate this datapoint! (Also, glad you’re feeling well and didn’t develop any symptoms)
Thanks for linking the meta-analysis and the other papers; will read (and possibly update the post afterwards)! I especially appreciate that the meta-analysis includes studies of BinaxNOW, something I’d been looking for.
Sensitivity for Ct < 25: 94%, Ct > 30: 30%. (I’ll be writing more about these results in a bit, but the short version is that this strongly supports the belief that test sensitivity depends strongly on viral load and will be highest during peak infectivity).
Nice, I’d been hearing/reading about using cycle count to determine how much a test’s results track infectiousness, and it’s really nice to see the results so starkly supporting that. Looking forward to your writeup!
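(For anyone following along, here’s a sketch of why that spread in sensitivity matters when you get a negative result: the negative-test Bayes factor is (1 - sensitivity) / specificity, so it swings enormously between those two Ct groups. The 99.5% specificity is an assumed placeholder, not a number from these results.)

```python
def negative_test_bayes_factor(sensitivity, specificity):
    # How much a negative result multiplies your odds of actually being infected.
    return (1 - sensitivity) / specificity

specificity = 0.995  # assumed placeholder value

print(negative_test_bayes_factor(0.94, specificity))  # Ct < 25: ~0.06, a strong update away from COVID
print(negative_test_bayes_factor(0.30, specificity))  # Ct > 30: ~0.70, barely moves the needle
```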
I haven’t had time to read up about Beta distributions and play with the tool you linked, but I just wanted to say that I really appreciate the thorough explanation! I’m really happy that posting about statistics on LessWrong has the predictable consequence of learning more statistics from the commenters :)
Thanks, I was wondering if the answer would be something like this (basically that I should be using a distribution rather than a point estimate, something that @gwillen also mentioned when he reviewed the draft version of this post).
If the sensitivity and specificity are estimated with data from studies with large (>1000) sample sizes it mostly won’t matter.
That’s the case for the antigen test data; the sample sizes are >1000 for each subgroup analyzed (asymptomatic, symptoms developed <1 week ago, symptoms developed >1 week ago).
The sample size for all NAATs was 4351, but the sample sizes for the subgroups of Abbott ID NOW and Cepheid Xpert Xpress were only 812 and 100 respectively. Maybe that’s a small enough sample size that I should be suspicious of the subgroup analyses? (@JBlack mentioned this concern below and pointed out that for the Cepheid test, there were only 29 positive cases total).
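(To put a rough number on why 29 positives feels thin, here’s a sketch of a Beta posterior on the sensitivity. I’m assuming, purely for illustration, that all 29 true positives were detected and using a uniform Beta(1, 1) prior; the point is just how wide the interval stays.)

```python
from scipy.stats import beta

detected, missed = 29, 0                     # hypothetical best case: 29/29 detected
posterior = beta(1 + detected, 1 + missed)   # uniform prior + binomial likelihood

print(posterior.mean())              # ~0.97
print(posterior.interval(0.95))      # roughly (0.88, 1.0): wide for a test deployed at scale
```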
Thanks, I appreciate this explanation!
The other problem is that the positive sample size must have been only 29 people. That’s disturbingly small for a test that may be applied a billion times, and seriously makes me question their validation study that reported it.
Thanks for flagging this. The review’s results table (“Summary of findings 1”) says “100 samples” and “29 SARS-COV-2 cases”; am I correctly interpreting that as 100 patients, of which 29 were found to have COVID? (I think this is what you’re saying too, just want to make sure I’m clear on it)

If I had to pick a single number based only on seeing their end result, I’d go with 96% sensitivity under their study conditions, whatever those were.
Can you say more about how you got 96%?
Yeah, based on the Cochrane paper I’d interpret “one positive result and one negative result” as an overall update towards having COVID. In general, both rapid antigen tests and NAATs are more specific than they are sensitive (more likely to return false negatives than false positives).
Though also see the “Caveats about infectiousness” section, which suggests that NAATs have a much higher false positive rate for detecting infectiousness than they do for detecting illness. I don’t have numbers for this, unfortunately, so I’m not sure if 1 positive NAAT + 1 negative NAAT is overall an update in favor or away from infectiousness.
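(A quick sketch of why the positive result “wins” under that assumption: the positive-test Bayes factor, sensitivity / (1 - specificity), is much larger than the negative-test factor, (1 - sensitivity) / specificity, is small. The 70% sensitivity and 99.5% specificity below are placeholder values I made up, not numbers from the Cochrane review.)

```python
sensitivity, specificity = 0.70, 0.995  # made-up illustrative values

bf_positive = sensitivity / (1 - specificity)  # ~140
bf_negative = (1 - sensitivity) / specificity  # ~0.30

# Treating the two results as independent, the combined update is their product:
print(bf_positive * bf_negative)  # ~42: still a substantial update towards having COVID
```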
I’m not super sure; I wrote about this a little in the section “What if you take multiple tests?”:
If you get a false negative because you have a low viral load, or because you have an unusual genetic variant of COVID that’s less likely to be amplified by PCR*, presumably that will cause correlated failures across multiple tests. My guess is that each additional test gives you a less-significant update than the first one.
*This scenario is just speculation; I’m not actually sure what the main causes of false negatives are for PCR tests.

But that’s just a guess. I’d love to hear from anyone who has a more detailed understanding of what causes failures in NAATs and antigen tests.
Naively, I’d expect that if the test fails due to low viral load, that would probably cause correlated failures across all tests taken on the same day. Waiting a few days between tests is probably a good idea, especially if you were likely to be in the early-infection stage (and so likely low viral load) during your first test. The instructions for the BinaxNOW rapid antigen test say that if you get a negative result, you shouldn’t repeat the test until 3 days later.
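(One crude way to operationalize “each additional test gives a less-significant update”: instead of multiplying in the full negative-test Bayes factor for every repeat test, discount later tests taken under similar conditions. The discount scheme below is entirely made up; it’s only meant to show the shape of the adjustment, not to be a calibrated model.)

```python
def combined_negative_update(bf_negative, n_tests, discount=0.5):
    """Compare naive independence with a crude correlation discount where each
    later test counts for `discount` as much log-odds evidence as the one before."""
    independent = bf_negative ** n_tests
    effective_tests = sum(discount ** i for i in range(n_tests))  # 1 + 0.5 + 0.25 + ...
    correlated = bf_negative ** effective_tests
    return independent, correlated

# Three negative tests with a hypothetical per-test Bayes factor of 0.3:
print(combined_negative_update(0.3, 3))  # independence: ~0.027; discounted: ~0.12
```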
How much should you update on a COVID test result?
Thank you, I really appreciate this response. I did guess that this was probably how you and others (like Anna, whose comments have been very measured) felt, but it is really reassuring to have it explicitly verbally confirmed, and not just have to trust that it’s probably true.
I think your comments in this thread have been brusque/pushy in a way that’s hurting the conversation (others feel free to chime in if that seems wrong to them).
I mentioned in a different comment that I’ve appreciated some of farp’s comments here for pushing back against what I see as a missing mood in this conversation (acknowledgment that the events described in Zoe’s account are horrifying, as well as reassurance that people in leadership positions are taking the allegations seriously and might take some actions in response). I also appreciate Ruby’s statement that we shouldn’t pressure or judge people who might have something relevant to say.
The unitofcaring post on mediators and advocates seems relevant here. I interpret farp (edit: not necessarily in the parent comment, but in various other comments in this thread) as saying that they’d like to see more advocacy in this thread instead of just mediation. I am not someone who has any personal experiences to share about Leverage, but if I imagine how I’d personally feel if I did, I think I agree.
To do that investigation and postmortem, we can’t skip to sentencing (forgive me if that’s not your intention, but it reads a bit to me that that’s what you want to be happening), nor would it be epistemically virtuous or just to do so.
I super agree with this, but also want to note that I feel appreciation for farp’s comments here. The conversation on this page feels to me like it has a missing mood: I found myself looking for comments that said something like “wow, this account is really horrifying and tragic; we’re taking these claims really seriously, and are investigating what actions we should take in response”. Maybe everyone thinks that that’s obvious, and so instead is emphasizing the part where we’re committed to due process and careful thinking and avoiding mob dynamics. But I think it’s still worth stating explicitly, especially from those in leadership positions in the community. I found myself relieved just reading Ruby’s response here that “everyone in this thread is deeply concerned”.
Re-reading your post, it looks like you’re mostly objecting to people feigning ignorance when a word they don’t like comes up, which I agree is an annoying thing to do. I’m curious about whether you also object to people saying things like:
“Incel is a horrible word; it conflates ‘men who are sad about not having any sex’ with ‘misogynistic and violent men’. I worry that its popularity will influence people to be more hostile towards any man who complains about romantic loneliness.”
“I dislike rationalists’ usage of ‘defect’; it seems to have broadened to the point of meaning ‘any behavior I dislike or think is wrong’. I wish we’d all just agree to taboo that word and specify exactly what we’re objecting to instead.”
What if the thing you’re trying to say is “I think the categorization scheme implied by your use of <word> is wrong, and will cause you to make wrong predictions?” This was the first thing that came to mind when I read your example about “chemicals”—I objected to my dad’s use of “chemicals” a few years ago, and it led to us discussing how that term conflates “has a scary-sounding name” with “has any evidence of being harmful at all”. My dad previously thought that willowbark extract might be healthier/less harmful than aspirin, despite them both having the same active ingredient (salicylic acid).
I agree that people don’t always want to debate about whether they’re making a category error; if someone says they want to avoid food with chemicals and I object to their categorization scheme and they say “I’m not interested in debating that, please respect my food preferences so we can finish this shopping trip”, then I should definitely drop the issue. But are your preferences that I shouldn’t even bring it up?
(I might be typical minding somewhat here; I’ve gotten a lot of mileage from various rationalist friends asking me to taboo certain words in discussion, which forces me to think more carefully and often causes me to notice distinctions that I was eliding. So I like the tool of striking words from my vocabulary!)
Minor quibble, but:
Fish and shrimp are animals; did you mean “mammals”? Or something else?