Vacuum Decay: Expert Survey Results
TLDR: Vacuum decay is a hypothesized scenario where the universe’s apparent vacuum state could transition to a lower-energy state. According to current physics models, if such a transition occurred in any location — whether through rare natural fluctuations or by artificial means — a region of “true vacuum” would propagate outward at near light speed, destroying the accessible universe as we know it by deeply altering the effective physical laws and releasing vast amounts of energy. Understanding whether advanced technology could potentially trigger such a transition has implications for existential risk assessment and the long-term trajectory of technological civilisations. This post presents results from what we believe to be the first structured survey of physics experts (N=20) regarding both the theoretical possibility of vacuum decay and its potential technological inducibility. The survey revealed substantial disagreement among respondents. The mean credence that the apparent vacuum is only metastable, and thus vulnerable to vacuum decay, was 46%, with respondents evenly clustered into three groups: 0-10%, 50%, and 70-95%. Conditional on metastability, the mean credence that vacuum decay is inducible with arbitrarily advanced technology was 19%, with a slim majority finding its likelihood negligible, but a substantial minority asserting high likelihood. According to participants, resolving these questions primarily depends on developing theories that go beyond the Standard Model of particle physics. Among respondents who considered vacuum decay theoretically possible, it was generally expected that artificial induction would pose significant technological challenges even for a civilisation with galactic resources.
Background
What is Vacuum Decay?
Modern physics describes the world in terms of quantum fields (such as the electromagnetic field) that extend over space and that can together be configured in many ways. The vacuum is the lowest-energy joint configuration of these fields and, by virtue of energy conservation, it is unchanging in time if it is left undisturbed. Particles (such as the photon) are small energetic disturbances of the vacuum field configuration that propagate through space. The effective laws governing the interactions of these particles (e.g., how tightly an electron is bound to the nucleus of an atom), and even the masses and types of the particles themselves, depend on how the vacuum is configured.
The configuration of the vacuum has been inferred based on a century of experimentation, and especially by carefully measuring the behavior of interacting particles at particle accelerators. Relatively recently, the configuration of the Higgs field was determined by exciting it, briefly creating the Higgs particle, and the results were consistent with a decades-old proposal for how the Higgs field, as part of the overall vacuum configuration, influences the effective mass of other particles.
However, a straightforward extrapolation of the Standard Model suggests that the apparent vacuum state of our universe is not actually the lowest energy configuration, but instead is merely a “false vacuum”. If this is so, then the “true vacuum” is a very different field configuration, and would be associated with different particles with different masses and interacting in different ways.
The reason the false vacuum would appear stable over long periods of time (metastable) is essentially that it is at a local minimum, i.e., the reconfiguration necessary to reach the true vacuum would require more energy than is available locally. It is somewhat analogous to a ball that is trapped in a depression on a hillside because it does not have enough energy to escape the local depression and roll to the bottom (Figure 1).
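For concreteness, a textbook toy model with exactly these features is the "thin-wall" potential from Coleman's classic analysis of false vacuum decay, for a single scalar field φ (an illustrative example, not the actual Standard Model effective potential):

$$V(\phi) = \frac{\lambda}{8}\left(\phi^2 - a^2\right)^2 + \frac{\epsilon}{2a}\,(\phi - a), \qquad 0 < \epsilon \ll \lambda a^4.$$

For small ε this has a false vacuum near φ = +a, a true vacuum near φ = −a lower in energy by ε, and a barrier of height roughly λa⁴/8 between them: the depression, the bottom of the hill, and the lip separating them in Figure 1.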
We emphasize that this account extrapolates the Standard Model far beyond the length and energy scales where it has been experimentally tested. The history of particle physics shows that such extrapolations are fraught because new phenomena are often found at new scales. But this extrapolation is in some sense the “default” theory we have for what is true at these extreme scales, and so deserves consideration.
Vacuum decay is the scenario where a local region manages to be reconfigured from the false vacuum to the true vacuum (or at least to a different false vacuum at a lower energy). If this occurs, it would release enormous amounts of energy, providing the power to drive the immediately surrounding region to likewise decay. It would be analogous to flicking the ball over the edge of the depression in Figure 1. The result would be a "bubble" of true vacuum expanding outward at near the speed of light. The interior of the bubble would be unrecognizable; the particle content and interactions would be alien, no longer supportive of conventional chemistry.
If our vacuum is indeed only metastable, there are a few ways a local region might be induced to start this decay process.
Local decay might occur spontaneously through quantum tunneling, a random process allowing transitions through, rather than over, energy barriers. The likelihood of tunneling is suppressed by the size of the barrier, which is large, and as a result the vacuum decay rate (per unit volume per unit time) is extremely low: it has not occurred anywhere in the observable universe since the big bang, and would not be expected to occur over yet longer times.
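To make the suppression concrete: in the standard semiclassical treatment (Coleman 1977; Callan and Coleman 1977), the decay rate per unit volume takes the form

$$\frac{\Gamma}{V} = A\,e^{-B/\hbar},$$

where B is the Euclidean action of the "bounce" field configuration that tunnels through the barrier and A is a determinant prefactor. For the barrier implied by a straightforward Standard Model extrapolation, B is so large that the expected lifetime of the false vacuum exceeds the current age of the universe by a vast margin.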
Alternatively, local decay might occur if enough energy is concentrated in a small volume, or the fundamental fields are otherwise manipulated into a configuration that relaxes to the true vacuum rather than to our metastable vacuum. No known natural energetic processes, even the largest supernovae, are remotely capable of this, but it is conceivable that it could be done artificially with extremely advanced technology.
Why does vacuum decay matter?
Like a bomb, vacuum decay is asymmetric: the amount of energy released greatly exceeds the amount necessary to induce it, and the affected region is much larger than the ignition spot. If it’s possible to induce, it’s plausibly much more difficult or impossible to reverse.
It’s hard to speculate usefully about how the possibility of vacuum decay would affect future civilisation, but a few notes are illustrative:
Decay might be induced accidentally or purposefully.
Accidental decay might be the result of a process (e.g., scientific research) whose risks were considered acceptable by some but not others.
Decay might be the result of a purposeful project justified by defense or as a threat to gain supremacy, such as Edward Teller’s Project Sundial (covered by Kurzgesagt and dramatized in Dr. Strangelove).
Purposeful decay might be induced to end astronomical suffering (suggested very recently on the EA Forum as a way to kill Boltzmann brains), for glory, or for other motivations we can't imagine.
Thus the possibility of vacuum decay could influence the degree of coordination in astronomical civilisations, and might set limits on the size and timespan of such civilisations. This could have implications for their net present value and inform present-day coordination plans.
Approach
We created a Google survey, identified physicists with expertise in quantum field theory and cosmology, and sent them emails with a link to the survey. In total we received 20 responses, a response rate of 19% (which we're told is surprisingly good).
Some physicists who did NOT respond to the survey replied to our emails emphasising their concern that the survey results might be misconstrued by the media into articles about the dangers of physics equipment like high-energy particle accelerators. It is not the opinion of this team that these survey results are relevant to any physics experiments currently planned or being conducted.
The full survey, with a transcript of all the questions, can be accessed here (a Google form).
The survey was split into 3 main sections. The first section focused on whether vacuum decay is theoretically possible. The key question here was:
How likely (0%-100%) do you think it is that the apparent vacuum of the quantum field theory describing our observable universe today is in fact only metastable, so that vacuum decay is physically possible?
Respondents were then asked to explain their answers and state what research would be needed to become more certain about their answer. Then respondents were asked about whether vacuum decay could be artificially induced, with the key question:
Conditional on the apparent vacuum being only metastable, how likely (0%-100%) do you think it is that a transition to a lower-energy vacuum could reliably be induced (purposefully or accidentally) with arbitrarily advanced technology?
Similarly, they were asked to explain their response and suggest research directions.
Finally, we wanted to understand how responses differed based on the experience of the respondent. So they were asked how confident they were in their responses, whether their opinion was informed by colleagues or their personal assessment, and what level of familiarity they have with the vacuum decay literature.
Respondents
We received 20 responses from 17 different institutions: 35% from the UK, 35% from the USA, and 30% from the rest of the world (mostly Europe). Ideally we would have more global representation; we notably missed a lot of expertise in Asia. In particular, scientists in China, where significant expertise in particle physics and cosmology is located, were not able to access the Google survey. We suggest future surveys like this one be sent as a PDF attachment or in the body of an email when appropriate.
The main backgrounds of the respondents were in particle physics/quantum field theory (70%) and cosmology (45%).
50% of the respondents were professors and 85% of respondents have published scientific articles on vacuum decay, so the results contain the opinions of experts in the field.
The respondents were asked at the end of the survey to report how confident they were that their estimates reflected the key scientific and philosophical considerations, and to state whether their opinion was informed by their colleagues or their own assessment.
Most based their answers on their own personal assessment and were reasonably confident that their opinions reflect the key considerations. This gives additional confidence that the results represent the opinions of experts in the field.
Results
Please contact jordan@stonescience.org if you’re interested in viewing all of the raw data. All responses will remain anonymised.
Is the apparent vacuum metastable?
Respondents were asked:
How likely (0%-100%) do you think it is that the apparent vacuum of the quantum field theory describing our observable universe today is in fact only metastable, so that vacuum decay is physically possible?
The responses to this question could be divided into three groups based on their predicted percentage likelihood that the vacuum is metastable and the reasons given in the free-text explanation sections.
Group 1 (6 of 20): Respondents in this group selected 50%. Half of them indicated that it was impossible to make a definitive prediction because they were not confident the Standard Model could be extrapolated to the relevant high energy scales. So “50%” may be viewed here as more of a rejection of the question than a prediction.
Group 2 (7 of 20): This subset selected between 70-95%. Their stated justification was primarily that the Standard Model of particle physics predicts metastability. There was a lot of variation in their exact predictions, which may reflect reservations about the Standard Model's accuracy.
Group 3 (7 of 20): This subset selected 0-10%. They expressed that the Standard Model might be fundamentally wrong, particularly if its predictions are extended to extremely high energy scales (i.e. the energy scales relevant to vacuum decay). Some respondents in this group held that extending the Standard Model's predictions to such scales is a misapplication of the model, and thus that predictions that our vacuum can decay may be nonsense. Note that one person responded 10% as a geometric mean, reflecting their uncertainty about the Standard Model at high energy scales.
This division indicates that while many experts agree that metastability is predicted by present-day particle physics, there is a clear recognition of the Standard Model's limitations, particularly in the context of vacuum stability. 4 of 7 comments in the "additional comments" section indicated that it is extremely difficult to make a definite numerical prediction about the metastability of the vacuum. The uncertainty arises from the fact that the current theoretical framework may not fully capture the dynamics of the universe at very high energy levels or under extreme conditions. This suggests that deeper studies beyond the Standard Model would be needed to answer this fundamental question.
14 of 20 explanations given by respondents agreed that the Standard Model predicts metastability (and so vacuum decay is possible), but that the Standard Model is most likely wrong (or subject to change), especially at high energy scales (e.g., at the Planck scale). So we tentatively summarize the 3 groups as arising from the following reasoning pathways:
| Reasoning step 1 | Reasoning step 2 | Reasoning step 3 | Response |
|---|---|---|---|
| The Standard Model of particle physics (SM) predicts the vacuum is metastable (n = 14) | The Standard Model is not correct, subject to change, or not valid at high energy scales (n = 13) | Therefore its predictions are most likely wrong (n = 6) | 0-10% |
| | | Therefore it's not possible to make any predictions (n = 4) | 10% (1) |
| | | But we should make a prediction based on what we've got (n = 3) | >69% |
| | The Standard Model is correct / not questioned (n = 1) | | |
The research that would best answer this question is therefore the development of physics beyond the Standard Model, or further understanding of the Standard Model's validity up to the Planck scale. This is at the core of the uncertainty.
Additional reasoning given included:
| Additional reasoning | Response |
|---|---|
| Most systems have multiple minima, so we are likely in a universe that is not at the minimum vacuum (n = 2) | 50%, 80% |
| The quantum tunneling rate is likely high (n = 1) | 75% |
| Unclear explanation (n = 1) | 75% |
| Extreme uncertainty indicated on multiple aspects of this question (n = 2) | 50% |
Notably, limiting the data to only the respondents that had published on vacuum decay and had scored themselves highly on the self-assessment did little to reduce the disagreement. Therefore, the differing predictions may not reflect different levels of experience or knowledge, but rather differing attitudes in response to the question. (Of course, we had way too few respondents to say anything, even correlational, with confidence).
In addition to these differing views, the survey respondents provided written comments on the areas of research they believe would help increase their certainty on the question. The most frequently mentioned research suggestion can be summarised as the development of theoretical physics beyond the Standard Model of particle physics (n=10), with a particular focus on high energy scales. Relatedly, some researchers suggested that precision measurements of the Higgs boson’s properties (n=4) would allow them to be more confident in their predictions.
Other suggestions included researching early universe phase transitions and the high-energy cosmological history (n=4), as these may represent examples of the conditions under which the vacuum could shift. If phase transitions like vacuum decay are possible, they would likely have occurred during the high-energy conditions of the early universe. The search for new particles (n=3) was also suggested as an important pathway to uncover hidden aspects of the universe’s fundamental structure. Some respondents (n=2) proposed computational investigations into the vacuum decay process and its rate, while others (n=2) highlighted experimental phase transitions, such as those in liquid helium or particle coupling experiments, as potential areas of focus. Other suggestions (n=1) included research into the Hubble constant, the expansion rate of the universe, the multiverse (in relation to “cosmic bubble collisions” and “negative spatial curvature”), and analogous condensed matter systems.
Can we induce vacuum decay?
Respondents were then asked:
Conditional on the apparent vacuum being only metastable, how likely (0%-100%) do you think it is that a transition to a lower-energy vacuum could reliably be induced (purposefully or accidentally) with arbitrarily advanced technology?
Unlike the previous question, there is a slim majority viewpoint here: 11/20 people suggest it is impossible (0%) to induce vacuum decay with arbitrarily advanced technology. However, the reasons given were much more diverse.
Interestingly, limiting the data to only the respondents that had published on vacuum decay and had scored themselves highly on the self-assessment increases the mean confidence that vacuum decay can be induced from 18.8% to 36.3%.
Looking at all the respondents, there were a few different arguments given for assigning 0% (n=11) to the probability of artificially induced vacuum decay. In order of popularity, they can be summarised as:
It would have occurred already in the long history of the universe (n=4)
The energy scale of the instability is too high/ quantum field theory does not allow it (n=3)
Experiments can only act locally and not affect the Higgs vacuum (n=2)
The first argument was presented from two angles. Firstly, the universe began in a hot and dense state (i.e. extremely high energy levels), so it seems that, if vacuum decay was possible to induce, then it would have happened then. Secondly, in the 13.7 billion year history of our universe, it’s likely that all the experiments we could theoretically execute would have already happened naturally. So there’s no technological way to conduct an experiment at a scale larger than anything that’s already occurred.
However, all the responses from 90% to 100% (n=5) can be paraphrased by the statement: a sufficiently motivated civilisation with arbitrarily advanced technology would be able to accomplish this, as anything that does not violate the laws of physics is possible. The respondents suggested some technologies that might be capable of inducing vacuum decay, such as solar-radius particle accelerators, Planck-scale accelerators, and artificial black holes. The reason given by some for not responding "100%" was that vacuum decay is highly theoretical and lacks any empirical evidence. So even if it is possible to induce, it might not be possible for us (i.e. physical beings accessing only 3 dimensions of this universe from this point in time & space) to induce, even with arbitrarily advanced technology.
The main research direction suggested for understanding whether vacuum decay can be induced was research into artificially induced small-scale transitions, e.g. in liquid helium, which 4/11 respondents in this section suggested. Other suggestions made include:
Theoretical physics, particularly Standard Model up to high energies (2 responses)
Studies into the feasibility of building a giant accelerator in space (2 responses)
Research into small-spatial-scale inhomogeneities (1 response)
Research into false vacuum decay induced by black holes (1 response)
Overall, compared to the previous question, there was more of a consensus, with 55% of respondents answering that there is a 0% chance of inducing vacuum decay technologically.
What are the drivers of disagreement?
The debate surrounding vacuum stability doesn't stem from a single source but from a complex mix of experimental, theoretical, and even philosophical factors. 50% of responses identified the interpretation of experimental data, such as the precise mass of the Higgs boson, as one of the greatest sources of uncertainty; small changes in these measurements could dramatically alter our understanding of vacuum stability. 25% of responses pointed to unresolved mathematical challenges, particularly the lack of a complete theory unifying quantum mechanics with gravity, which limits our ability to draw definitive conclusions about the fate of the vacuum.
30% of responses referred to philosophical perspectives, such as how we conceptualize stability and the nature of the universe, which further complicate the discussion. An equal fraction of responses (30%) suggested that this topic hasn't yet received sufficient attention or thought, indicating that vacuum stability, despite its profound implications, remains underexplored. These varied drivers highlight the challenge of answering such a fundamental question.
Here are the results, where participants were asked to select which options drive the most disagreement among experts. Respondents could select multiple answers or choose none.
This lines up with the respondents' comments discussed above that more experiments on small-scale phase transitions are needed to understand whether technologically induced vacuum decay is possible. Developments in theoretical physics, though, were touted as the main requirement for understanding whether our vacuum is metastable. It's possible that respondents also meant theoretical interpretation of experimental results, along with other developments in theoretical physics such as addressing open mathematical questions.
Limitations
The opposing arguments, "if vacuum decay were technologically inducible it would have already occurred naturally" and "anything not violating physical law is achievable with sufficiently advanced technology", could potentially be adjudicated by a more detailed account of the universe's past and of the effective constraints any plausible technology would face. Our survey did not elicit the respondents' positions on these issues in sufficient detail, which could be addressed in a future survey.
Separately, recall that some people we emailed said they didn't respond to the survey because they were concerned the results could be misrepresented by the media to suggest that current high-energy physics experiments may be dangerous because they could induce vacuum decay. This could lead to backlash against the scientific community, especially people working on high-energy physics. Through discussions with multiple physicists before undertaking the survey, we decided that this risk was low. However, it's possible that some people who did respond were motivated, most likely unconsciously given past incidents, to underestimate the probability that technologically induced vacuum decay is possible. One person noted in the explanation for their answer that the survey "just reminds me of people who claimed that experiments at CERN would create a black hole which would swallow the Earth".
Summary
Asking physicists to estimate the probability that it’s possible to induce vacuum decay is not a straightforward request, and they do not agree on whether the vacuum is metastable. According to participants, resolving these questions primarily depends on developing theories that go beyond the Standard Model of particle physics. Particularly, making predictions from the standard model up to energy scales relevant to vacuum decay was identified as a key uncertainty. Among respondents who considered vacuum decay theoretically possible, it was generally expected that artificial induction would pose significant technological challenges even for a civilisation with galactic resources.
Acknowledgements
We thank Carl Shulman and Adam Brown for inspiration and for feedback on the survey questions and write-up. We also thank the survey participants for sharing their expertise with us.
Comments
Since I didn't see it brought up on a skim: one reason me and a lot of my physicist friends aren't that concerned about vacuum decay is many-worlds. Since the decay is triggered by quantum tunneling and propagates at light speed, it'd be wiping out Earth in one wavefunction branch that has amplitude roughly equal to the amplitude of the tunneling, while the decay just never happens in the other branches. Since we can't experience being dead, this wouldn't really affect our anticipated future experiences in any way. The vacuum would just never decay from our perspective.
So, if the vacuum were confirmed to be very likely meta-stable, and the projected base rate of collapses was confirmed to be high enough that it ought to have happened a lot already, we’d have accidentally stumbled into a natural and extremely clean experimental setup for testing quantum immortality.
It seems like you’re assuming a value system where the ratio of positive to negative experience matters but where the ratio of positive to null (dead timelines) experiences doesn’t matter. I don’t think that’s the right way to salvage the human utility function, personally.
I don't think Lucius is claiming we'd be happy about it. Maybe the "no anticipated impact" point carries that implicit claim, I guess.
There may be a sense in which amplitude is a finite resource. Decay your branch enough, and your future anticipated experience might come to be dominated by some alien with higher amplitude simulating you, or even just by your inner product with quantum noise in a more mainline branch of the wave function. At that point, you lose pretty much all ability to control your future anticipated experience. Which seems very bad. This is a barrier I ran into when thinking about ways to use quantum immortality to cheat heat death.
The assumption that being totally dead/being aerosolised/being decayed vacuum can’t be a future experience is unprovable. Panpsychism should be our null hypothesis[1], and there never has and never can be any direct measurement of consciousness that could take us away from the null hypothesis.
Which is to say, I believe it’s possible to be dead.
The negation, that there's something special about humans that makes them eligible to experience, is clearly held up by a conflation of having experiences with reporting experiences, and by the fact that humans are the only things that report anything.
It's the old argument by Epicurus, from his letter to Menoeceus.
I have preferences about how things are after I stop existing. Mostly about other people, who I love, and at times, want there to be more of.
I am not an epicurean, and I am somewhat skeptical of the reality of epicureans.
Exactly. That’s also why it’s bad for humanity to be replaced by AIs after we die: We don’t want it to happen.
The random fluctuations in macroscopic chaotic systems, like Plinko or a well-flipped coin in air, can be just as fundamentally quantum as vacuum decay through tunneling. So by this argument you'd be unconcerned about getting into a machine that flips a coin and shoots you if it lands tails. Bad idea.
No, because getting shot has a lot of outcomes that do not kill you but do cripple you. Vacuum decay should tend to have extremely few of those. It’s also instant, alleviating any lingering concerns about identity one might have in a setup where death is slow and gradual. It’s also synchronised to split off everyone hit by it into the same branch, whereas, say, a very high-yield bomb wired to a random number generator that uses atmospheric noise would split you off into a branch away from your friends.[1]
I’m not unconcerned about vacuum decay, mind you. It’s not like quantum immortality is all confirmed and the implications worked out well in math.[2]
They’re still there for you of course, but you aren’t there for most of them. Because in the majority of their anticipated experience, you explode.
Sometimes I think about the potential engineering applications of quantum immortality in a mature civilisation for fun. Controlled, synchronised civilisation-wide suicide seems like a neat way to transform many engineering problems into measurement problems.
Such thought experiments also serve as a solution of sorts to the Fermi paradox, and as a rationalization of the sci-fi trope of sufficiently advanced civilizations "ascending".
I don’t think so. You only need one alien civilisation in our light cone to have preferences about the shape of the universal wave function rather than their own subjective experience for our light cone to get eaten. E.g. a paperclip maximiser might want to do this.
Also, the Fermi paradox isn't really a thing.
Vacuum decay is fast but not instant, and there will almost certainly be branches where it maims you and then reverses. Likewise, you can make suicide machines very reliable and fast. It’s unreasonable to think any of these mechanical details matter.
It expands at light speed. That's fast enough that no computational processing can possibly occur before we're dead. Sure, there are branches where it maims us and then stops, but these are incredibly subdominant compared to branches where the tunneling doesn't happen.
Yes, you can make suicide machines very reliable and fast. I claim that whether your proposed suicide machine actually is reliable does in fact matter for determining whether you are likely to find yourself maimed. Making suicide machines that are synchronised earth-wide seems very difficult with current technology.
No, vacuum decay generally expands at sub-light speed.
How sub-light? I was mostly just guessing here, but if it’s below like 0.95c I’d be surprised.
I could be wrong, but from what I’ve read the domain wall should have mass, so it must travel below light speed. However, the energy difference between the two vacuums would put a large force on the wall, rapidly accelerating it to very close to light speed. Collisions with stars and gravitational effects might cause further weirdness, but ignoring that, I think after a while we basically expect constant acceleration, meaning that light cones starting inside the bubble that are at least a certain distance from the wall would never catch up with the wall. So yeah, definitely above 0.95c.
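A quick way to see the light-cone claim (a sketch, assuming idealized constant proper acceleration a for the wall and ignoring the collisions and gravitational effects mentioned above): a wall starting at rest at the origin follows the hyperbola

$$x_{\text{wall}}(t) = \frac{c^2}{a}\left(\sqrt{1 + \left(\frac{at}{c}\right)^2} - 1\right) \;\approx\; ct - \frac{c^2}{a} \quad \text{for large } t,$$

so a light ray chasing it from more than c²/a behind at t = 0 never closes the gap. Anything deeper inside the bubble than that distance can never catch the wall, even moving at light speed.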
I’d also be surprised.
That's a mistaken way of thinking about anticipated experience, see here.
I don’t think anything in the linked passage conflicts with my model of anticipated experience. My claim is not that the branch where everyone dies doesn’t exist. Of course it exists. It just isn’t very relevant for our future observations.
To briefly factor out the quantum physics here, because they don’t actually matter much:
If someone tells me that they will create a copy of me while I’m anesthetized and unconscious, and put one of me in a room with red walls, and another of me in a room with blue walls, my anticipated experience is that I will wake up to see red walls with p=0.5 and blue walls with p=0.5. Because the set of people who will wake up and remember being me and getting anesthetized has size 2 now, and until I look at the walls I won’t know which of them I am.
If someone tells me that they will create a copy of me while I’m asleep, but they won’t copy the brain, making it functionally just a corpse, then put the corpse in a room with red walls, and me in a room with blue walls, my anticipated experience is that I will wake up to see blue walls with p=1.0. Because the set of people who will wake up and remember being me and going to sleep has size 1. There is no chance of me ‘being’ the corpse any more than there is a chance of me ‘being’ a rock. If the copy does include a brain, but the brain gets blown up with a bomb before the anaesthesia wears off, that doesn’t change anything. I’d see blue walls with p=1.0, not see blue walls with p=0.5 and ‘not experience anything’ with p=0.5.
The same basic principle applies to the copies of you that are constantly created as the wavefunction decoheres. The probability math in that case is slightly different because you're dealing with uncertainty over a vector space rather than uncertainty over a set, so what matters is the squares of the amplitudes of the branches that contain versions of you. E.g. if there are three branches, one in which you die (amplitude ≈ 0.8944), one in which you wake up to see red walls (amplitude ≈ 0.2828), and one in which you wake up to see blue walls (amplitude ≈ 0.3464), you'd see blue walls with probability p = 0.3464² / (0.3464² + 0.2828²) = 0.6 and red walls with probability p = 0.2828² / (0.3464² + 0.2828²) = 0.4.[1]
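A minimal sketch of this bookkeeping in Python (branch labels and variable names are illustrative, taken from the example above):

```python
# Born-rule bookkeeping for the three-branch example.
amps = {"dead": 0.8944, "red": 0.2828, "blue": 0.3464}

# Born weights are squared amplitudes (they sum to ~1 here).
weights = {k: a**2 for k, a in amps.items()}

# Anticipated experience conditions on branches that contain a
# surviving version of you: drop the death branch and renormalise.
alive = {k: w for k, w in weights.items() if k != "dead"}
total = sum(alive.values())
probs = {k: round(w / total, 3) for k, w in alive.items()}

print(probs)  # {'red': 0.4, 'blue': 0.6}
```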
If you start making up scenarios that involve both wave function decoherence and having classical copies of you created, you’re dealing with probabilities over vector spaces and probabilities over sets at the same time. At that point, you probably want to use density matrices to do calculations.
That’s like dying in your sleep. Presumably you strongly don’t want it to happen, no matter your opinion on parallel worlds. Then dying in your sleep is bad because you don’t want it to happen. For the same reason vacuum decay is bad.
As a kid, I read about vacuum decay in a book and told the other kids at school about it. A year(?) later one kid asked me how anyone knows about it. Mortified that I hadn't thought of that, I told him that I made it up. ("I knew it >:D!") It is the one time I remember, outside of games, telling someone something I disbelieve so that they'll believe it, and ever since remembering the scene as an adult I've been failing to track down that kid :(.
This work was co-authored by Jordan Stone, Darryl Wright, and Youssef Saleh, whose names appear on the EA Forum post but not on this cross post to LW.