What are the existential risks for a multi-galaxy super-civilization? Or even a multi-stellar civilization expanding outward at some fraction of light speed? I don’t see how life can be exterminated once it has spread that far. “liberate much of the energy in the black hole at the center of our galaxy in a giant explosion” does not make sense, since a black hole is not considered a store of energy that can be liberated.
If you are speculating about new physics that hasn’t been discovered yet, then the “subjective-time exponential” and risk per century seem irrelevant (we can just assume that all of physics will be discovered sooner or later), and a more pertinent question might be how much of physics is as yet undiscovered, and what the likelihood is that some new physics will allow a galaxy/universe killer to be built.
I argue that the amount of physics left to be discovered is finite, and therefore the likelihood that a galaxy/universe killer can be built in the future does not approach arbitrarily close to 1 as time goes to infinity.
Speaking of new physics, there was the discovery that stars are other suns rather than tiny holes in the celestial sphere… and in the future there’s the possibility of discovering practically attainable interstellar travel. Discoveries in physics can have very different effects.
And if we’re to talk of limitless new and amazing physics, there may be superbombs, and there may be infinite subjective time within a finite volume of spacetime, or something of that sort.
I don’t see how life can be exterminated once it has spread that far.
You may be right. It takes a long time to become a multi-galaxy super-civilization. Our galaxy is about 100,000 light-years across, and the nearest large galaxy, Andromeda, is about 2.5 million light-years away. We might make it in time. It depends a lot on how far time-compression goes, and on how correlated apocalypses are.
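For a rough sense of scale (a back-of-the-envelope sketch; the 0.1c cruise speed is an arbitrary assumption, and relativistic time dilation is ignored):

```python
# Coordinate-frame travel times at a fixed fraction of light speed.
def travel_time_years(distance_ly: float, fraction_of_c: float) -> float:
    """Years needed to cross `distance_ly` light-years at `fraction_of_c` * c."""
    return distance_ly / fraction_of_c

print(travel_time_years(100_000, 0.1))    # across the Milky Way: 1,000,000 years
print(travel_time_years(2_500_000, 0.1))  # to Andromeda: 25,000,000 years
```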
“liberate much of the energy in the black hole at the center of our galaxy in a giant explosion” does not make sense, since a black hole is not considered a store of energy that can be liberated.
Wrong. Google ‘black hole explosions’.
I argue that the amount of physics left to be discovered is finite, and therefore the likelihood that a galaxy/universe killer can be built in the future does not approach arbitrarily close to 1 as time goes to infinity.
That’s my hope as well.
None of the results indicate a possibility that the “energy in the black hole at the center of our galaxy” can be liberated in a giant explosion.
The first result is a 1974 paper by Stephen Hawking predicting that black holes emit black-body radiation at a temperature inversely proportional to their mass. For large black holes this temperature is close to absolute zero, making them more useful as entropy dumps than as energy sources.
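To put numbers on that, here is a quick sketch using the standard Hawking temperature formula T = ħc³ / (8πGMk_B); the ~4 million solar masses for Sagittarius A* is the commonly quoted figure:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30   # solar mass, kg

def hawking_temperature_K(mass_kg: float) -> float:
    """Black-body temperature of a black hole; note it falls as 1/mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature_K(M_SUN))        # ~6e-8 K for a solar-mass hole
print(hawking_temperature_K(4e6 * M_SUN))  # ~1.5e-14 K for Sagittarius A*
```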
On the other hand, if you could simultaneously convert a lot of ordinary matter into numerous tiny black holes, they would all instantly evaporate and have the effect of a single great explosion, so that’s one risk to be worried about.
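The scale of that risk follows from E = mc² together with the standard evaporation-time estimate t ≈ 5120πG²M³ / (ħc⁴); the 1 kg mass below is purely illustrative:

```python
import math

hbar = 1.055e-34  # J*s
c = 2.998e8       # m/s
G = 6.674e-11     # m^3 kg^-1 s^-2

def evaporation_time_s(mass_kg: float) -> float:
    """Hawking evaporation time; tiny holes go almost instantly."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

def blast_energy_J(mass_kg: float) -> float:
    """Total energy released on complete evaporation, E = m c^2."""
    return mass_kg * c**2

# A 1 kg hole evaporates in ~1e-16 s, releasing ~9e16 J
# (roughly 20 megatons of TNT).
print(evaporation_time_s(1.0), blast_energy_J(1.0))
```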
None of the results indicate a possibility that the “energy in the black hole at the center of our galaxy” can be liberated in a giant explosion.
You’re right about that. But they do indicate that the energy in smaller black holes can be liberated in giant explosions. And they indicate that black holes could be used as energy sources. So when you said, “a black hole is not considered a store of energy that can be liberated,” that was wrong; or at least it was wrong if you meant “a black hole is not considered a store of energy.” And that was what I said was wrong.
Why are you continuing to talk about this one particular hypothetical risk?
I asked for a list of possible risks, and nobody has given any other answer...
Still, the question of whether one particular risk is real has almost no bearing on the total existential risk.
That’s only true if there are lots of different existential risks besides this particular one. The fact that no one has answered my question with a list of such risks seems to argue against that. I also argued earlier that the amount of physics left to be discovered is finite, so the number of such risks is finite.
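One way to sketch the shape of that argument (the independence assumption and the probabilities q_i are illustrative, not established):

```latex
% With only finitely many undiscovered pieces of physics, where piece i enables
% a galaxy/universe killer with probability q_i < 1 (assumed independent), the
% total chance stays bounded away from 1 even as time goes to infinity:
\[
  P(\text{killer ever possible}) \;\le\; 1 - \prod_{i=1}^{n} (1 - q_i) \;<\; 1.
\]
% Contrast an unbounded stream of fresh risks, each with per-century
% probability at least p > 0, where survival probability vanishes:
\[
  \lim_{t \to \infty} (1 - p)^{t} = 0.
\]
```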
More generally, I guess it boils down to cognitive strategies. I like to start from specific examples, build intuitions, find similarities, then proceed to generalize. I program like this too. If I have to write two procedures that I know will end up sharing a lot of code, I will write one complete procedure first, then factor out the common code as I write the second one, instead of writing the common function first. I suppose this seems like a waste of time to someone used to working directly on the general/abstract issue.
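A toy illustration of that workflow (hypothetical names; a sketch, not anyone’s actual code): the shared step is written inline in the first procedure, then extracted while writing the second.

```python
# First pass: one complete, concrete procedure, shared logic written inline.
def report_mean(xs: list[float]) -> str:
    clean = [x for x in xs if x == x]  # drop NaNs (NaN != NaN)
    return f"mean={sum(clean) / len(clean):.2f}"

# Second pass: writing the second procedure reveals the common code,
# which only now gets factored out into a helper.
def _drop_nans(xs: list[float]) -> list[float]:
    return [x for x in xs if x == x]

def report_max(xs: list[float]) -> str:
    return f"max={max(_drop_nans(xs)):.2f}"
```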
Well, you know my specific example of a risk. Even if you know all about physics, that is, the rules of the game, you can still lose to an opponent that can figure out a winning strategy.
Examples are good when you can confidently say something about them, and here their very existence was in question. But there are so many ways to sidestep a mere physical threat that it doesn’t seem a good choice. An explosion is just something that happens to the local region, in a lawful physical way. You could cook up some dynamic redundancy to preserve computation in case of an explosion.
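For instance (a minimal sketch, assuming catastrophes hit replicas independently), the chance of losing a computation replicated across N sites falls off geometrically:

```python
# If each site is independently destroyed with probability p per epoch,
# the computation is lost only when every replica is hit in the same epoch.
def loss_probability(p_local: float, n_replicas: int) -> float:
    return p_local ** n_replicas

print(loss_probability(0.01, 1))  # 0.01
print(loss_probability(0.01, 3))  # 1e-06
```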