I mostly agree with you, but we may disagree on the implausibility of exotic physics. Do you consider all explanations which require “exotic physics” to be less plausible than any explanation that does not? If you are willing to entertain “exotic physics”, then are there many ideas involving exotic physics that you find more plausible than Catastrophe Engines?
In the domain of exotic physics, I find Catastrophe Engines relatively plausible, since known physics already contains analogues of similar phenomena: nuclear chain reactions, for example. It is natural to think that a stronger method of energy production would carry even greater risks, and the inherent uncertainty of quantum physics implies that no amount of engineering can ever fully eliminate the risk of any machine. Note that my explanation holds no matter how small the risk lambda actually is (though I implicitly assumed that the universe has an infinite lifetime: for my explanation to work, the expected life of a Catastrophe Engine has to be at most on the same order as the lifetime of the universe).
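To make the lifetime condition concrete, here is a minimal sketch, with purely illustrative numbers of my own choosing: an engine with a constant per-year meltdown probability lambda has expected lifetime 1/lambda, and the argument only needs that to be at most of the same order as the age of the universe.

```python
def expected_engine_life(lam_per_year):
    """Expected lifetime (years) of an engine with constant hazard rate.

    With a constant per-year meltdown probability lam, the lifetime is
    geometrically distributed with mean 1/lam.
    """
    return 1.0 / lam_per_year

universe_age_years = 1.38e10  # rough current age of the universe, in years
lam = 1e-10                   # an assumed per-year meltdown risk (illustrative)

# Expected life ~1e10 years: comparable to the universe's lifetime, so a
# risk this small is still consistent with the argument in the text.
print(expected_engine_life(lam))
```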
It is also worth noting that there are many variants of the Catastrophe Engine hypothesis that have the same consequences but which you might find more or less plausible. Perhaps these Engines don’t have “meltdowns”, but they necessarily experience some kind of interference from other nearby Engines that prevents them from being built too close to each other. You could suppose that the best Matrioshka Brains produce chaotic gravity waves that would interfere with other nearby Brains, for instance.
Personally, I find explanations that require implausible alien psychology to be less plausible than explanations that require unknown physics. I expect most higher civilizations to be indifferent to our existence unless we pose a substantial threat, and I expect a sizable fraction of higher civilizations to value expansion. Perhaps you have less confidence in our understanding of evolutionary biology than in our understanding of physics, hence our disagreement.
For the sake of discussion, here is my subjective ranking of explanations by plausibility:
1. There are visible signs of other civilizations; we just haven’t looked hard enough.
2. Most expansionist civilizations develop near-light-speed colonization, making it very unlikely for us to exist in the interval between when such a civilization becomes visible and when our planet is colonized.
3. We happen to be the first technologically advanced civilization in our visible universe.
4. Most artifacts are invisible due to engineering considerations (e.g. the most efficient structures are made out of low-density nanofibers, or dark matter).
5. Colonization is much, much more difficult than we anticipated.
6. Defensively motivated “berserkers”: higher civilizations have delicate artifacts that could actually be harmed by much less advanced spacefaring species, so new spacefaring species are routinely neutralized. It still needs to be explained why most of the universe hasn’t been obviously manipulated (hence “Catastrophe Engines” or a similar hypothesis), and also why we still exist, since it would presumably be very cheap to neutralize our civilization.
7. Some “great filters” lie ahead of us, such as nuclear war. Extremely implausible, because you would also have to explain why no species could manage to evolve better cooperation skills.
8. “Galactic zoo” hypotheses and other explanations which require most higher civilizations to NOT be expansionist. Extremely implausible, because many accidentally created strong AIs would be expansionist.
I ignore the hypothesis that “we are in a simulation” because it doesn’t actually help explain why we would be the only species in the simulation.
EDIT: Modified the order
Disclaimer: I am lazy and could have done more research myself.
I’m looking for work on what I call “realist decision theory.” (A loaded term, admittedly.) To explain realist decision theory, I’ll contrast it with naive decision theory. My explanation is brief, since my main objective at this point is fishing for answers rather than presenting my own ideas.
Naive Decision Theory
1. Assumes that individuals make decisions individually, without need for group coordination.
2. Assumes individuals are perfect consequentialists: their utility function is only a function of the final outcome.
3. Assumes that individuals have utility functions which do not change with time or experience.
4. Assumes that the experience of learning new information has neutral or positive utility.
Hence a naive decision protocol might be:
1. A person decides whether to take action A or action B.
2. An oracle tells the person the possible scenarios that could result from action A or action B, with probability weightings.
3. The person subconsciously assigns a utility to each scenario; this utility function is fixed. The person chooses action A or B based on which action maximizes expected utility.
As a consequence of the above assumptions, the person’s decision is the same regardless of the order of presentation of the different actions.
Note: we assume physical determinism, so the person’s decision is even known in advance to the oracle. But we suppose the oracle can perfectly forecast counterfactuals; to emphasize this point, we might call it a “counterfactual oracle” from now on.
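The naive decision protocol above can be sketched in a few lines. The oracle is stubbed out as a fixed table of (probability, scenario) pairs per action, and the utility function is fixed and depends only on the final scenario; all names and numbers are illustrative assumptions, not part of the original argument.

```python
def naive_decide(actions, oracle, utility):
    """Pick the action maximizing expected utility over oracle scenarios."""
    def expected_utility(action):
        return sum(p * utility(scenario) for p, scenario in oracle[action])
    # Order of presentation is irrelevant: max() depends only on the values,
    # matching the consequence noted in the text.
    return max(actions, key=expected_utility)

# A stubbed "counterfactual oracle": probability-weighted scenarios per action.
oracle = {
    "A": [(0.5, "good"), (0.5, "bad")],   # EU = 0.5*10 + 0.5*(-5) = 2.5
    "B": [(0.9, "okay"), (0.1, "bad")],   # EU = 0.9*4  + 0.1*(-5) = 3.1
}
utility = {"good": 10, "okay": 4, "bad": -5}.get  # fixed utility function

print(naive_decide(["A", "B"], oracle, utility))  # same answer as ["B", "A"]
```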
It should be no surprise that the above model of utility is extremely unrealistic; I am aware of experiments demonstrating non-transitive preferences, for instance. Realist decision theory contrasts with naive decision theory in several ways.
Realist Decision Theory
1. Acknowledges that decisions are not made individually but jointly with others.
2. Acknowledges that in a group context, actions have a utility in and of themselves (signalling) separate from the utility of the resulting scenarios.
3. Acknowledges that an individual’s utility function changes with experience.
4. Acknowledges that learning new information constitutes a form of experience, which may itself have positive or negative utility.
Relaxing any one of the four assumptions radically complicates the decision theory. Relax only conditions 1 and 2, and game theory becomes necessary. Relax only 3 and 4, so that for all practical purposes only one individual exists in the world: then points 3 and 4 imply that the order in which a counterfactual oracle presents the relevant information to the individual affects the individual’s final decision. Furthermore, an ethically implemented decision procedure would allow the individual to choose which pieces of information to learn, so there is no guarantee that the individual will even end up learning all the information relevant to the decision, even if time is not a limitation.
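A toy illustration of points 3 and 4: if learning is itself an experience that shifts the utility function, the order in which the oracle reveals facts can flip the final decision. Everything here (the decay rule, the numbers) is a made-up assumption purely for illustration.

```python
def decide_after_learning(facts_in_order):
    """Each learned fact nudges the agent's utility weights toward one
    action; the size of the nudge decays with position, so earlier
    experiences matter more (an assumed, illustrative dynamic)."""
    weights = {"A": 0.0, "B": 0.0}
    for position, (favoured_action, strength) in enumerate(facts_in_order):
        weights[favoured_action] += strength / (1 + position)
    return max(weights, key=weights.get)

facts = [("A", 1.0), ("B", 1.2)]
print(decide_after_learning(facts))        # hearing the pro-A fact first...
print(decide_after_learning(facts[::-1]))  # ...vs. the pro-B fact first
```

Unlike `naive_decide` above, the result depends on presentation order: the same two facts yield different decisions when reversed.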
It would be great to know which papers have considered relaxing the assumptions of a “naive” decision theory in the way I have outlined.