Let me suggest a concrete example: the existential risk of asteroid impacts. It is pretty easy to estimate the distribution of time till the next impact big enough to kill all humans. Astronomy is pretty well understood, so it is pretty easy to estimate the cost of searching the sky for dangerous objects. If you imagine this as an ongoing project, there is the problem of building lasting organizations. In the unlikely event that you find an object that will strike in a year, or in 30, there is the more difficult problem of estimating the chance it will be dealt with.
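A rough sketch of that first estimate (every number here is an illustrative assumption, e.g. the oft-quoted ~1 extinction-level impact per 100 million years, not a vetted figure): under a Poisson model the waiting time is exponential, so the arithmetic is trivial.

```python
import math

# Assumed (illustrative) rate of extinction-level impacts: ~1 per 100 Myr.
rate_per_year = 1 / 100_000_000

# Under a Poisson model, the waiting time to the next impact is exponential,
# with mean equal to the reciprocal of the rate.
mean_wait_years = 1 / rate_per_year  # 100 million years

# Probability of at least one such impact in the next century:
p_next_century = 1 - math.exp(-rate_per_year * 100)
print(f"P(impact within 100 years) ≈ {p_next_century:.1e}")  # ≈ 1e-6
```

The hard parts are exactly the ones named above: the rate itself, the cost of the search, and the chance a detected object gets dealt with.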
It would be good to see your take on this example, partly to clarify this article and partly to isolate some objections from others.
This was, in fact, the first example I ever brought to Holden. IMHO he never really engaged with it, but he did find it interesting and maintained correspondence, which brought him to the FAI point. (All this was long before I was formally involved with SIAI.)
Astronomy is pretty well understood, so it is pretty easy to estimate the cost of searching the sky for dangerous objects.
Sort of. The possibility of mirror matter objects makes this pretty difficult. There’s even a reasonable-if-implausible paper arguing that a mirror object caused the Tunguska event, and many other allegedly anomalous impacts over the last century. There are a lot of astronomical reasons to take this idea seriously, e.g. IIRC three times too many moon craters. There are quite a few solid-looking academic papers on the subject, though a lot of them are by a single guy, Foot. My refined impression was p=.05 for mirror matter existing in a way that’s decision-theoretically significant (e.g. mirror meteors), lower than my original impression because mirror matter in general has weirdly little academic interest. But so do a lot of interesting things.
Yes, you should compute the danger multiple ways, counting asteroids, craters, and extinction events. If there are 3x too many craters, then it may be that 2⁄3 of impacts are caused by objects that we can’t detect. Giving up on solving the whole or even most of the problem may sound bad, but it just reduces the expected value by a factor of 3, which is pretty small in this context.
I studied particle physics for a couple of decades, and I would not worry much about “mirror matter objects”. Mirror matter is just one of many possibilities that physicists have dreamt up: there’s no good evidence that it exists. Yes, maybe every known particle has an unseen “mirror partner” that only interacts gravitationally with the stuff we see. Should we worry about this? If so, we should also worry about CERN creating black holes or strangelets—more theoretical possibilities not backed up by any good evidence. True, mirror matter is one of many speculative hypotheses that people have invoked to explain some peculiarities of the Tunguska event, but I’d say a comet is a lot more plausible.
Asteroid collisions, on the other hand, are known to have happened and to have caused devastating effects. NASA currently rates the chances of the asteroid Apophis colliding with the Earth in 2036 at 4.3 out of a million. They estimate that the energy of such a collision would be comparable with a 510-megatonne thermonuclear bomb. This is ten times larger than the largest bomb actually exploded, the Tsar Bomba. The Tsar Bomba, in turn, was ten times larger than all the explosives used in World War II.
On the bright side, even if it hits us, Apophis will probably just cause local damage. The asteroid that hit the Earth at Chicxulub and killed off the dinosaurs released an energy comparable to a 240,000-megatonne bomb. That’s the kind of thing that really ruins everyone’s day.
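Those ratios check out with simple arithmetic (taking the figures as quoted above; I haven’t independently verified them):

```python
apophis_mt = 510         # quoted energy estimate for an Apophis impact, megatonnes
tsar_bomba_mt = 50       # largest bomb actually exploded
chicxulub_mt = 240_000   # figure quoted above for the dinosaur-killer
p_impact = 4.3e-6        # quoted 2036 collision probability

print(apophis_mt / tsar_bomba_mt)   # 10.2: "ten times larger"
print(chicxulub_mt / apophis_mt)    # ~470x Apophis
print(p_impact * apophis_mt)        # expected yield ~0.002 Mt, i.e. ~2 kt
```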
Mirror matter is indeed very speculative, but surely not less than 4.3 out of a million speculative, no? Mirror matter is significantly more worrisome than Apophis. I have no idea whether it’s more or less worrisome than the entire set of normal-matter Apophis-like risks; does anyone have a link to a good (non-alarmist) analysis of impact risks for the next century? Snippets of Global Catastrophic Risks seem to indicate that they’re not a big concern relatively speaking.
My initial impression is that the low interaction rate with ordinary matter would make me think this would not be a good explanation for anomalous impacts. But I obviously haven’t examined this in anywhere near enough detail.
Astronomy is pretty well understood, so it is pretty easy to estimate the cost of searching the sky for dangerous objects.
Sort of. The possibility of mirror matter objects makes this pretty difficult. There’s even a reasonable-if-implausible paper arguing that a mirror object caused the Tunguska event, and many other allegedly anomalous impacts over the last century. There are a lot of astronomical reasons to take this idea seriously, e.g. IIRC three times too many moon craters.
Reality check: mirror matter has a gravitational signature—so we know some 99% of non-stellar matter in the solar system is not mirror matter—or we would see its grav-sig. So: we can ignore it with only a minor error.
Reading the Wikipedia article, I don’t really see how mirror matter would be dangerous. It describes mirror matter as being about as dangerous as neutrinos or something:
Mirror matter, if it exists, would have to be very weakly interacting with ordinary matter. This is because the forces between mirror particles are mediated by mirror bosons. With the exception of the graviton, none of the known bosons can be identical to their mirror partners. The only way mirror matter can interact with ordinary matter via forces other than gravity is via so-called kinetic mixing of mirror bosons with ordinary bosons or via the exchange of Holdom particles.[10] These interactions can only be very weak. Mirror particles have therefore been suggested as candidates for the inferred dark matter in the universe
Read the papers, Wikipedia is Wikipedia. Kinetic mixing can be strong. The paper on Tunguska is really quite explanatory. (Sorry, I don’t mean to be brusque, I’m just allergic to LW at the moment.) ETA: http://arxiv.org/abs/astro-ph/0309330 is the most cited one I think. ETA2 (after gwern replied): Most cited paper about mirror matter implications, not about Tunguska. See here for Tunguska: http://arxiv.org/abs/hep-ph/0107132
The part on Tunguska doesn’t really explain it, though; it simply assumes a mirror matter object could do that, and then spends more time on how mirror matter explains the lack of observed fragments and on how remaining mirror matter could be detected. The one relevant line seems to be:
“If this impact event is due to a (pure) mirror matter body, it should not have slowed down as rapidly in the atmosphere as an ordinary matter body (for ϵ ∼ 10⁻⁸–10⁻⁹ as suggested by DAMA/NaI results, the air molecules typically pass through the body losing only a relatively small fraction of their momentum).”
It must be explained elsewhere, or the implications of ‘ϵ ∼ 10⁻⁸–10⁻⁹’ must be obvious to a physicist. How annoying...
Abstract: Mirror matter is predicted to exist if parity (i.e. left-right symmetry) is a symmetry of nature. Remarkably mirror matter is capable of simply explaining a large number of contemporary puzzles in astrophysics and particle physics including: explanation of the MACHO gravitational microlensing events, the existence of close-in extrasolar gas giant planets, apparently ‘isolated’ planets, the solar, atmospheric and LSND neutrino anomalies, the orthopositronium lifetime anomaly and perhaps even gamma ray bursts. One fascinating possibility is that our solar system contains small mirror matter space bodies (asteroid or comet sized objects), which are too small to be revealed from their gravitational effects but nevertheless have explosive implications when they collide with the Earth. We examine the possibility that the 1908 Tunguska explosion in Siberia was the result of the collision of a mirror matter space body with the Earth. We point out that if this catastrophic event and many other similar smaller events are manifestations of the mirror world then these impact sites should be a good place to start digging for mirror matter. Mirror matter could potentially be extracted & purified using a centrifuge and have many useful industrial applications.
OK, I think that explains that—Wikipedia is making the first assumption identified below, rather than the other one that he prefers:
“If the only force connecting mirror matter with ordinary matter is gravity, then the consequences would be minimal. The mirror SB would simply pass through the Earth and nobody would know about it unless it was so heavy as to gravitationally affect the motion of the Earth.

However if there is photon—mirror photon kinetic mixing as suggested by the orthopositronium vacuum cavity experiment, then the mirror nuclei (with Z′ mirror protons) will effectively have a small ordinary electric charge ϵZ′e. This means that the nuclei of the mirror atoms of the SB will undergo Rutherford scattering off the nuclei of the atmospheric nitrogen and oxygen atoms. In addition ionizing interactions can occur which can ionize both the mirror atoms of the space body and also the atmospheric atoms. The net effect is that the kinetic energy of the SB is transformed into light and heat (both ordinary and mirror varieties) and a component is also converted to the atmosphere in the form of a shockwave, as the forward momentum of the SB is transferred to the air which passes through or near the SB.

What happens to the mirror matter SB as it plummets towards the Earth’s surface depends on a number of factors such as its initial velocity, size, chemical composition and angle of trajectory. Of course all these uncertainties occur for an ordinary matter SB too. Interestingly it turns out that for the value of the kinetic mixing suggested by the orthopositronium experiment, ϵ ≈ 10⁻⁶, the air resistance of a mirror SB in the atmosphere is roughly the same as an ordinary SB assuming the same trajectory, velocity, mass, size and shape (and that it remains intact). This occurs because the air molecules will lose their relative forward momentum (with respect to the SB) within the SB itself because of the Rutherford scattering of the ordinary and mirror nuclei as we will show in a moment. (Of course the atmospheric atoms still have random thermal motion.) This will lead to a drag force of roughly the same size as that on an ordinary matter SB, implying an energy loss rate ….

The above calculation shows that the rate of energy loss of the SB in the atmosphere depends on its size and density. If we assume a density of ρ_SB ≃ 1 g/cm³, which is approximately valid for a mirror SB made of cometary material (such as mirror ices of water, methane and/or ammonia), then the body will lose most of its kinetic energy in the atmosphere provided that it is less than roughly 5 meters in diameter. Of course things are complicated because the SB will undergo mass loss (ablation) and also potentially fragment into smaller pieces and of course potentially melt & vaporize. Thus even a very large body (e.g. R ∼ 100 meters as estimated for the Tunguska explosion) can lose its kinetic energy in the atmosphere if it fragments into small pieces.

...Returning to the most interesting case of large photon—mirror photon kinetic mixing, ϵ ≃ 10⁻⁶, which is indicated by the orthopositronium experiment, our earlier calculation suggests that most of the kinetic energy of a mirror matter SB is released in the atmosphere like an ordinary matter SB if it is not too big (∼ 5 meters) or fragments into small objects. It seems to be an interesting candidate to explain the 1908 Tunguska explosion (as well as smaller similar events as we will discuss in a moment).”
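For scale (my illustrative numbers, not the paper’s): even the ~5 meter body discussed in the excerpt carries enormous kinetic energy, which is why it matters so much where that energy gets deposited.

```python
import math

# Illustrative figures: a 5 m diameter icy body (density ~1 g/cm^3,
# as assumed in the excerpt) entering the atmosphere at ~30 km/s.
radius_m = 2.5
density_kg_m3 = 1000.0
speed_m_s = 30_000.0

volume_m3 = (4 / 3) * math.pi * radius_m ** 3   # ~65 m^3
mass_kg = density_kg_m3 * volume_m3             # ~6.5e4 kg
energy_j = 0.5 * mass_kg * speed_m_s ** 2       # ~2.9e13 J

kt_tnt = energy_j / 4.184e12                    # 1 kt TNT = 4.184e12 J
print(f"kinetic energy ≈ {kt_tnt:.0f} kt TNT")  # ≈ 7 kt
```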
OK, I think that explains that—Wikipedia is making the first assumption identified below
No, Wikipedia mentions kinetic mixing and then says that if it exists it must be weak; Wikipedia doesn’t say it wouldn’t exist (the evidence suggests it would exist). The Wikipedia article is just wrong. (ETA: I mean, it is just wrong to assume that it’s weak.) (Unless I’m misinterpreting what you mean by “the first assumption identified below”?)
What I meant was that both the paper and Wikipedia regard kinetic mixing as weak and relatively unimportant; then they differ about the next effect, the one that would be strong and would matter to Tunguska.
Let me suggest a concrete example: the existential risk of asteroid impacts. [...]
Sure, so according to the Bayesian adjustment framework described in the article, in principle the thing to do would be to create a 95% confidence interval as to the impact of an asteroid strike prevention effort, use this to obtain a variance for the distribution of impact associated with the effort, and then perform the Bayesian regression. As you comment, some of the numbers going into the cost-effectiveness calculation are tightly constrained in value on account of being well understood, and others are not. The bulk of the variance would come from the numbers which are not tightly constrained.
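A minimal sketch of that regression, assuming (my simplification, not necessarily the article’s exact model) that both the prior over cost-effectiveness and the estimate are normal, so the posterior mean is a precision-weighted average:

```python
def bayes_adjust(prior_mean, prior_var, est_mean, est_var):
    """Conjugate normal-normal update: high-variance estimates get
    regressed most of the way back toward the prior."""
    precision = 1 / prior_var + 1 / est_var
    post_mean = (prior_mean / prior_var + est_mean / est_var) / precision
    return post_mean, 1 / precision

# Illustrative numbers only: an intervention estimated at 10x the prior mean,
# but with a variance 9x the prior's, keeps little of its apparent edge.
mean, var = bayes_adjust(prior_mean=1.0, prior_var=1.0, est_mean=10.0, est_var=9.0)
print(mean, var)  # roughly (1.9, 0.9)
```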
But as Holden suggests in the final section of the post, titled “Generalizing the Bayesian approach”, a purely formal analysis should probably be augmented by heuristics.
Saying “Yes, I can apply this framework to concrete examples,” does not actually make anything more concrete.
Did Holden ever do the calculation or endorse someone else’s calculation? What heuristic did he use to reject the calculation? “Never pursue a small chance of a large effect”? “Weird charities don’t work”?
If you calculate that this is ineffective or use heuristics to reject the calculation, I’d like to see this explicitly. Which heuristics?
Did Holden ever do the calculation or endorse someone else’s calculation? What heuristic did he use to reject the calculation? “Never pursue a small chance of a large effect”? “Weird charities don’t work”?
Which calculation are you referring to? In order to do a calculation one needs to have in mind a specific intervention, not just “asteroid risk prevention” as a cause.
Before worrying about specific interventions, you can compute an idealized version as in, say, the Copenhagen Consensus. There are existing asteroid detection programs. I don’t know if any of them take donations, but this does allow assessments of realistic organizations. At some level of cost-effectiveness, you have to consider other interventions, like starting your own organization or promoting the cause. Not having a list of interventions is no excuse for not computing the value of intervening.
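As a sketch of what such an idealized computation looks like (every input below is a made-up placeholder; the point is the structure, not the numbers):

```python
# All inputs are illustrative assumptions, to be replaced with real estimates.
annual_extinction_prob = 1e-8    # chance per year of a civilization-ending impact
fraction_detectable = 2 / 3      # cf. the crater discussion above
p_dealt_with = 0.5               # chance a detected object is actually deflected
population = 8e9
program_cost_per_year = 50e6     # dollars

expected_lives_per_year = (annual_extinction_prob * fraction_detectable
                           * p_dealt_with * population)
cost_per_expected_life = program_cost_per_year / expected_lives_per_year
print(f"${cost_per_expected_life:,.0f} per expected life saved")  # $1,875,000
```

Note this counts only present lives, not future generations, which is a big understatement in the x-risk framing; even so, the structure of the calculation is clear.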
I would guess that it’s fairly straightforward to compute the cost-effectiveness of an asteroid strike reduction program to within an order of magnitude in either direction.
The situation becomes much more complicated with assessing the cost-effectiveness of something like a “Friendly AI program” where the relevant issues are so much more murky than the issues relevant to asteroid strike prevention.
GiveWell is funded by a committed base of donors. It’s not clear to me that these donors are sufficiently interested in x-risk reduction so that they would fund GiveWell if GiveWell were to focus on finding x-risk reduction charities.
I think that it’s sensible for GiveWell to have started by investigating the cause of international health. This has allowed them to gain experience, credibility and empirical feedback which has strengthened the organization.
Despite the above three points I share your feeling that at present it would be desirable for GiveWell to put more time into studying x-risks and x-risk reduction charities. I think that they’re now sufficiently established so that at the margin they could do more x-risk related research while simultaneously satisfying their existing constituents.
Concerning the issue of asteroid strike risk in particular, it presently looks to me as though there are likely x-risk reduction efforts which are more cost effective; largely because it seems as though people are already taking care of the asteroid strike issue. See Hellman’s article on nuclear risk and this article from Pan-STARRS (HT wallowinmaya). I’m currently investigating the issue of x-risk precipitated by nuclear war & what organizations are working on nuclear nonproliferation.
Sure, but my comment is not about what GiveWell or anyone should do in general, but about the context of this article: Holden is engaging with x-risk and trying to clarify disagreement, so let’s not worry about whether or when he should (he has made many other comments about it over the years). I think it would be better to do so concretely, rather than claiming that vague abstract principles lead to unspecified disagreements with unnamed people. I think he would better convey the principles by applying them. I’m not asking for 300 hours of asteroid research, just as much time as it took to write this article. I could be wrong, but I think even a very sloppy treatment of asteroids would be useful.
The article has relevance to thinking about effective philanthropy independently of whether one is considering x-risk reduction charities. I doubt that it was written exclusively with x-risk in mind.
I can’t speak for Holden here but I would guess that to the extent that he wrote the article with x-risk in mind, he did so to present a detailed account of an important relevant point which he can refer to in the future so as to streamline subsequent discussions without sacrificing detail and clarity.
So he could have written a concrete account of the disagreement with Deworm the World. The only concrete section was on BeerAdvocate and that was the only useful section. Pointing to the other sections in the future is a sacrifice of detail and clarity.
This seems so vague and abstract.
True, true. I was partially just looking for an excuse to bring up mirror matter.
Mirror matter is indeed very speculative, but surely not less than 4.3 out of a million speculative, no? Mirror matter is significantly more worrisome than Apophis. I have no idea whether it’s more or less worrisome than the entire set of normal-matter Apophis-like risks; does anyone have a link to a good (non-alarmist) analysis of impact risks for the next century? Snippets of Global Catastrophic Risks seem to indicate that they’re not a big concern relatively speaking.
ETA: lgkglgjag anthropics messes up everything
By “mirror matter”, I assume you mean what is more commonly known as “anti-matter”?
No, mirror matter, what you get if parity isn’t actually broken: http://scholar.google.com/scholar?hl=en&q=mirror+matter&btnG=Search&as_sdt=0%2C5&as_ylo=&as_vis=0 http://en.wikipedia.org/wiki/Mirror_matter
Huh. Glad I asked.
My initial impression is that the low interaction rate with ordinary matter would make me think this would not be a good explanation for anomalous impacts. But I obviously haven’t examined this in anywhere near enough detail.
See elsewhere in the thread. E.g. http://arxiv.org/abs/hep-ph/0107132
I did see those replies. Thanks.
Reality check: mirror matter has a gravitational signature—so we know some 99% of non-stellar matter in the solar system is not mirror matter—or we would see its grav-sig. So: we can ignore it with only a minor error.
Dark matter.
There evidently aren’t many “clumps” of that in the solar system—so we don’t have to worry very much about hypothetical collisions with it.
It must be explained elsewhere or the implications of ‘ǫ ∼ 10−8 − 10−9’ be obvious to a physicist. How annoying...
Here you go: http://arxiv.org/abs/hep-ph/0107132
There’s a paper dedicated to Tunguska specifically that has tons of details, I’ll try to find it again. I’ll reply to your comment again once I do.