My rough guess is that there’s a 75% probability of effectively full immunity, and a 90% probability of severity reduction. This is a pretty well tested and understood vaccine mechanism, and the goal isn’t “perfect immunity” so much as “prime the immune system so it doesn’t spend a week guessing about what antibodies it needs to combat the virus effectively”.
As to why established companies don’t do it, I believe it’s partially logistics, and largely red tape. Logistics first (though it should be noted that at least some of these could likely be tackled with a bit of effort):
Shots are well understood and easy; people are used to them, people know how to give them, etc. Nasal spray is irritating and makes you want to blow your nose, which washes out a lot of it and reduces effectiveness.
You need multiple of these annoying doses in the nose, staggered a few days apart, to generate a ‘good’ response.
This particular nanoparticle vaccine doesn’t have a long shelf life due to peptide degradation. Peptides don’t last forever, and while they’re more stable than the RNA vaccines, you’d have to ship them frozen as well.
Nanoparticle vaccines in general suffer from particle aggregation over time. The particles will gradually aggregate in solution, and if you freeze the solution they aggregate faster; freezing changes the size distribution pretty dramatically. That said, I don’t know how much that impacts effectiveness, and it doesn’t seem to be an extensively researched topic; I only found one paper discussing it.
Because it doesn’t have a long shelf life, it has to be mixed in-house, then distributed, preferably within a small number of days. I plan to mix a new batch every week for my prime and boosters. On the plus side, mixing it can actually be done at home with pretty cheap and ordinary tools; it just takes time.
Red tape, on the other hand, is a huge problem and flat-out intractable. Unlike injections, companies need to get safety approvals and testing done for pretty much all the components and the delivery mechanism, not just the active ingredients (the peptides). Then come separate approvals and testing for each individual set of peptides, via a process that operates closer to the scale of decades than months and costs billions of dollars. Any modification to the peptide set can be expected to restart the process from scratch. Then, companies also need to set up or reconfigure their legal strategy to protect themselves, and ensure that things are sufficiently balanced business-wise that they don’t go bankrupt.
Using ancient, primitive “stab people in the arm” technologies isn’t great, but it likely erases more than half of the regulatory burden, because “we’ve been doing that for a century and we know it’s only somewhat dangerous”, whereas new technologies like “snort some nanoparticles” are Scary and Dangerous and New and Have Not Yet Been Approved By Appropriately Serious People Being Serious.
Never mind that every time you play with a pet, breathe in part of a cloud of dust, or smell your SO’s hair, you’re inhaling more foreign peptides than are present in a vaccine dose.
This is a very in-depth explanation of some of the constraints affecting pharmaceutical companies that (mostly) don’t apply to individuals, and is useful as an object-level explanation for those interested. I’m glad this comment was written, and I upvoted accordingly.
Having said that, I would also like to point out that a detailed explanation of the constraints shouldn’t be needed to address the argument in the grandparent comment, which simply reads:
Why are established pharmaceutical companies spending billions on research and using complex mRNA vaccines when simply creating some peptides and adding it to a solution works just as well?
This question inherently assumes that the situation with commercial vaccine-makers is efficient with respect to easy, do-it-yourself interventions, and the key point I want to make is that this assumption is unjustified even if you don’t happen to have access to a handy list of bullet points detailing the ways in which companies and individuals differ on this front. (Eliezer wrote a whole book on this at one point; I’ll quote a relevant section below.)
My wife has a severe case of Seasonal Affective Disorder. As of 2014, she’d tried sitting in front of a little lightbox for an hour per day, and it hadn’t worked. SAD’s effects were crippling enough for it to be worth our time to consider extreme options, like her spending time in South America during the winter months. And indeed, vacationing in Chile and receiving more exposure to actual sunlight did work, where lightboxes failed.
From my perspective, the obvious next thought was: “Empirically, dinky little lightboxes don’t work. Empirically, the Sun does work. Next step: more light. Fill our house with more lumens than lightboxes provide.” In short order, I had strung up sixty-five 60W-equivalent LED bulbs in the living room, and another sixty-five in her bedroom.
Ah, but should I assume that my civilization is being opportunistic about seeking out ways to cure SAD, and that if putting up 130 LED light bulbs often worked when lightboxes failed, doctors would already know about that? Should the fact that putting up 130 light bulbs isn’t a well-known next step after lightboxes convince me that my bright idea is probably not a good idea, because if it were, everyone would already be doing it? Should I conclude from my inability to find any published studies on the Internet testing this question that there is some fatal flaw in my plan that I’m just not seeing?
We might call this argument “Chesterton’s Absence of a Fence.” The thought being: I shouldn’t build a fence here, because if it were a good idea to have a fence here, someone would already have built it. The underlying question here is: How strongly should I expect that this extremely common medical problem has been thoroughly considered by my civilization, and that there’s nothing new, effective, and unconventional that I can personally improvise?
Eyeballing this question, my off-the-cuff answer—based mostly on the impressions related to me by every friend of mine who has ever dealt with medicine on a research level—is that I wouldn’t necessarily expect any medical researcher ever to have done a formal experiment on the first thought that popped into my mind for treating this extremely common depressive syndrome. Nor would I strongly expect the intervention, if initial tests found it to be effective, to have received enough attention that I could Google it.
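For a rough sense of the scale involved in the excerpt’s “more lumens” reasoning, here’s a back-of-the-envelope sketch. The bulb output and room size are illustrative assumptions on my part, not figures from the source:

```python
# Rough illumination arithmetic for the "fill the house with lumens" plan.
# lumens_per_bulb and room_area_m2 are assumed values for illustration.

bulbs = 65                # per room, as stated in the excerpt
lumens_per_bulb = 800     # typical 60W-equivalent LED bulb (assumption)
room_area_m2 = 20         # assumed room size

total_lumens = bulbs * lumens_per_bulb
avg_lux = total_lumens / room_area_m2   # lux = lumens per square metre
print(f"{total_lumens:,} lm total; ~{avg_lux:,.0f} lux averaged over the room")
```

That’s on the order of 50,000 lumens and a few thousand lux everywhere in the room, all day, versus a lightbox delivering its rated lux only at close range for an hour a day. The peak lux is lower, but the total daily light dose is dramatically larger.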
The grandparent comment is more or less an exact example of this species of argument, and is the first of its kind that I can recall seeing “in the wild”. I think examples of this kind of thinking are all over the place, but it’s rare to find a case where somebody explicitly deploys an argument of this type in such a direct, obvious way. So I wanted to draw attention to this, with further emphasis on the idea that such arguments are not valid in general.
The prevalence of this kind of thinking is why (I claim) at-home, do-it-yourself interventions are so uncommon, and why this particular intervention went largely unnoticed even among the rationalist community. It’s a failure mode that’s easy to slip into, so I think it’s important to point these things out explicitly and push back against them when they’re spotted (which is the reason I wrote this comment).
IMPORTANT NOTE: This should be obvious enough to anyone who read Inadequate Equilibria, but one thing I’m not saying here is that you should just trust random advice you find online. You should obviously perform an object-level evaluation of the advice, and put substantial effort into investigating potential risks; such an assessment might very well require multiple days’ or weeks’ worth of work, and end up including such things as the bulleted list in the parent comment. The point is that once you’ve performed that assessment, it serves no further purpose to question yourself based only on the fact that others aren’t doing the thing you’re doing; this is what Eliezer would call wasted motion, and it’s unproductive at best and harmful at worst. If you find yourself thinking along these lines, you should stop, in particular if you find yourself saying things like this (emphasis mine):
That being said, I’m extremely skeptical that this will work, my belief is that there’s a 1-2% chance here that you’ve effectively immunized yourself from COVID.
You cannot get enough Bayesian evidence from the fact that [insert company here] isn’t doing [insert intervention here] to reduce your probability of an intervention being effective all the way down to 1-2%. That 1-2% figure almost certainly didn’t come from any attempt at a numerical assessment; rather, it came purely from an abstract intuition that “stuff that isn’t officially endorsed doesn’t work”. This is the kind of thinking that (I assert) should be noticed and stamped out.
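To make that concrete, here is a minimal odds-form sketch of how strong the “no company is doing this” evidence would have to be. The 30% prior is an assumed placeholder, not a figure from any of the comments:

```python
# Odds-form Bayes: posterior_odds = prior_odds * likelihood_ratio.
# The prior below is an illustrative assumption, not a measured figure.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update a probability by a likelihood ratio, P(E | works) / P(E | fails)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.30    # assumed pre-evidence probability that the DIY vaccine works
target = 0.015  # the claimed 1-2% posterior

# Likelihood ratio the observation "no company sells this" must carry:
required_lr = (target / (1 - target)) / (prior / (1 - prior))
print(f"required likelihood ratio: {required_lr:.3f} (~{1 / required_lr:.0f}:1 against)")
print(f"sanity check: posterior = {posterior(prior, required_lr):.3f}")
```

In other words, “no company sells this” would have to be roughly 28 times more likely in the world where the vaccine fails than in the world where it works. Given the regulatory and logistical story in the parent comment, that ratio is hard to defend.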
While I generally agree with the concept, I’m going to push back a little here. I read the 1-2% chance as less being about “why aren’t companies doing it” and more about lack of information.
My initial reaction to seeing it was that it came from a combination of factors along the lines of:
“there’s a lot of fraud out there, and by default my prior for things like this being valid is very low”
“factoring in that a couple of lesswrongers seem to think it’s ok only pushes my estimate up into the handful-of-percent range”
“but there’s also evidence against, in that we don’t see any commercial products based on this, which pushes my estimate down to 1-2%”
I think this is a pretty reasonable place to start from.
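That sequence of updates can be made explicit. Here is a toy odds-form reconstruction in which both likelihood ratios are assumed placeholders (chosen so that the numbers happen to land in the stated ranges):

```python
# Toy reconstruction of the three-step update described above, in odds form.
# Both likelihood ratios are assumptions chosen for illustration.

def update(p: float, likelihood_ratio: float) -> float:
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

p = 0.01             # low base rate: most claims in this reference class are bogus
p = update(p, 5.0)   # a few lesswrongers vouching: assumed 5:1 evidence in favor
print(f"after endorsement:    {p:.3f}")   # ~0.048 -- "handful of percent"
p = update(p, 0.3)   # no commercial product: assumed 3:1 evidence against
print(f"after market absence: {p:.3f}")   # ~0.015 -- the 1-2% figure
```

Whether that last step really deserves anything like 3:1 against is exactly what the replies below dispute.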
Should I conclude from my inability to find any published studies on the Internet testing this question that there is some fatal flaw in my plan that I’m just not seeing?
I don’t understand the argument about SAD. A simple Google search shows thousands of articles addressing this very solution. The first Google result I found is a paper from 1984 with 2,758 citations: https://jamanetwork.com/journals/jamapsychiatry/article-abstract/493246
We report our preliminary attempts to modify these depressions by manipulating environmental lighting conditions. We have recently reported reversing depression in one patient with SAD by modifying his environmental lighting
...
The following light treatment was administered. … (1) bright, white full-spectrum fluorescent light (approximately 2,500 lux at 90 cm)
But taking a step back, the “Chesterton’s Absence of a Fence” argument doesn’t apply here because the circumstances are very different. The entire world is desperately looking for a way to stop COVID. If SAD suddenly occurred out of nowhere and affected the entire economy, you would be sure that bright lights would be one of the first things to be tested.
Dentin addresses the 1-2% claim pretty well, so I won’t repeat it.
The solution in the paper you link is literally the solution Eliezer described trying, and not working:
As of 2014, she’d tried sitting in front of a little lightbox for an hour per day, and it hadn’t worked.
(Note that the “little lightbox” in question was very likely one of these, which you may notice mostly have ratings of 10,000 lux rather than the 2,500 cited in the paper. So: significantly brighter, and despite that, it didn’t work.)
It does sound like you misunderstood, in other words. That light exposure is an effective treatment for SAD is indeed known; this is why Eliezer tried light boxes to begin with. The point of that excerpt is that this “known solution” did not work for his wife, and the obvious next step of scaling up the amount of light used was not investigated anywhere in the clinical literature.
But taking a step back, the “Chesterton’s Absence of a Fence” argument doesn’t apply here because the circumstances are very different. The entire world is desperately looking for a way to stop COVID. If SAD suddenly occurred out of nowhere and affected the entire economy, you would be sure that bright lights would be one of the first things to be tested.
This is simply a (slightly) disguised variation of your original argument. Absent strong reasons to expect to see efficiency, you should not expect to see efficiency. The “entire world desperately looking for a way to stop COVID” led to bungled vaccine distribution, delayed production, supply shortages, the list goes on and on. Empirically, we do not observe anything close to efficiency in this market, and this should be obvious even without the aid of Dentin’s list of bullet points (though naturally those bullet points are very helpful).
(Question: did seeing those bullet points cause you to update at all in the direction of this working, or are you sticking with your 1-2% prior? The latter seems fairly indefensible from an epistemic standpoint, I think.)
Not only is the argument above flawed, it’s also special pleading with respect to COVID. Here is the analogue of your argument with respect to SAD:
Around 7% of the population has severe Seasonal Affective Disorder, and another 20% or so has weak Seasonal Affective Disorder. Around 50% of tested cases respond to standard lightboxes. So if the intervention of stringing up a hundred LED bulbs actually worked, it could provide a major improvement to the lives of 3% of the US population, costing on the order of $1000 each (without economies of scale). Many of those 9 million US citizens would be rich enough to afford that as a treatment for major winter depression. If you could prove that your system worked, you could create a company to sell SAD-grade lighting systems and have a large market.
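A quick check of the arithmetic in that quoted passage (the population figure is a round assumption on my part; the percentages and price are from the quote):

```python
# Sanity-checking the quoted market estimate for SAD-grade lighting systems.
# us_population is an assumed round figure; the other numbers are from the quote.

us_population = 300e6
severe_sad = 0.07             # share of population with severe SAD (per quote)
lightbox_failure_rate = 0.5   # share of tested cases lightboxes don't help (per quote)
price_per_system = 1000       # order-of-magnitude cost per install (per quote)

addressable = severe_sad * lightbox_failure_rate   # ~3.5%; the quote rounds to 3%
people = us_population * addressable
print(f"~{addressable:.1%} of the population, ~{people / 1e6:.1f}M people, "
      f"~${people * price_per_system / 1e9:.0f}B potential market")
```

That lines up with the quote’s “3%” and “9 million” after rounding, and the implied market is on the order of $10 billion, which makes “people simply don’t care about SAD” untenable as an explanation for the fence’s absence.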
SAD is not an uncommon disorder. In terms of QALYs lost, it’s… probably not directly comparable with COVID, but it’s at the very least in the same ballpark—certainly to the point where “people want to stop COVID, but they don’t care about SAD” is clearly false.
And yet, in point of fact, there are no papers describing the unspeakably obvious intervention of “if your lights don’t seem to be working, use more lights”, nor are there any companies predicated on this idea. If Eliezer had followed your reasoning to its conclusion, he might not have bothered testing more light… except that his background assumptions did not imply the (again, fairly indefensible, in my view) heuristic that “if no one else is doing it, the only possible explanation is that it must not work, else people are forgoing free money”. And as a result, he did try the intervention, and it worked, and (we can assume) his wife’s quality of life was improved significantly as a result.
If there’s an argument that (a) applies in full generality to anything other people haven’t done before, and (b) if applied, would regularly lead people to forgo testing out their ideas (and not due to any object-level concerns, either, e.g. maybe it’s a risky idea to test), then I assert that that argument is bad and harmful, and that you should stop reasoning in this manner.
You can buy nasal sprays over-the-counter, while I can’t think of a single injectable medicine that you can buy legally without a prescription. I don’t think the “stab people in the arm” argument is very strong.
Would you like to make a friendly wager? (Either Dentin, or johnswentworth, or anyone else making their own vaccine.) We can do 50/50, since it’s in between our estimates. If you have two positive back-to-back antibody tests within 2 months, you win (assuming you don’t actually contract COVID, which I trust you to be honest about). If not, I win. To start off with, I’m willing to put down $100, but happy to go up or down.
I wouldn’t take 50/50. I do think it’s much more likely than that to induce mucosal antibodies, but not blood antibodies. I would take 3:1 odds.
My estimate for whether or not I would test positive on a blood test was only about 50%, since blood isn’t the primary place that the response is generated. I’m already betting a substantial amount of money (peptide purchases and equipment) that this will be helpful, and I see no reason to throw an additional $50 on a break-even bet here.
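For concreteness, the expected value of the proposed bet under the stated numbers, with stakes following the $100 proposal above:

```python
# Expected value of the wager from the vaccine-maker's perspective.
# Amounts follow the $100 even-odds proposal; 3:1 is the counter-offer above.

def expected_value(p_win: float, win_amount: float, lose_amount: float) -> float:
    return p_win * win_amount - (1 - p_win) * lose_amount

stake = 100
# At a ~50% chance of testing positive, even odds are exactly break-even:
print(f"p=0.5, even odds: EV = ${expected_value(0.5, stake, stake):.0f}")
# At 3:1 odds (risk $100 to win $300), the same 50% estimate is clearly positive:
print(f"p=0.5, 3:1 odds:  EV = ${expected_value(0.5, 3 * stake, stake):.0f}")
```

Hence “break-even”: at even odds and a 50% estimate, the bet adds variance without adding expected value, while the 3:1 counter-offer would be worth taking.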
I would, however, be happy to commit to sharing results, whether they be positive or negative.
… and now it occurs to me that if LessWrong had a ‘public precommitments’ feature, I would totally use it.