Yes, I read the essay. It doesn’t make the conclusion any more inevitable. It isn’t inevitable at all; rather, this is speculation on your part, and unsubstantiated speculation for which I can see no sensible basis.
It would be more helpful if you explained why each of the many reasons I gave is insensible.
When arguing about the future, the imaginable is not all there is. You essentially gave several imaginable futures (some in which risks continue to arise, and others in which they do not) and did some handwaving about which class you considered likely to be larger. There are three ways to dispute this: to dispute your handwaving (e.g., you treat compression of subjective time as a conclusive argument, as if it were inevitable); to propose classes of future you did not consider (e.g., technology continues to increase, but some immutable law of the universe means there are only a finite number of apocalyptic technologies); or to maintain that there are large classes of future which cannot possibly be imagined, because they do not fall clearly into any categories we are likely to define in the present. If you use the last form of dispute, arguing about probability is just arguing about which uninformative prior to use.
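To make that last point concrete, here is a minimal sketch (my own toy model and numbers, not from the essay) of how two standard “uninformative” priors disagree about the chance that the next technological era is apocalyptic, given the same evidence:

```python
# Beta-Bernoulli model: put a Beta(a, b) prior on the per-era
# probability that an apocalyptic technology appears.  After
# observing n eras with no apocalypse, the posterior predictive
# probability of disaster in the next era is a / (a + b + n).

def p_next_apocalypse(n_safe_eras: int, a: float, b: float) -> float:
    """P(disaster next era) after n safe eras, under a Beta(a, b) prior."""
    return a / (a + b + n_safe_eras)

n = 10  # hypothetical: ten transformative eras survived so far
print(p_next_apocalypse(n, 1.0, 1.0))  # uniform prior (Laplace): ~0.083
print(p_next_apocalypse(n, 0.5, 0.5))  # Jeffreys prior:          ~0.045
```

Both priors are conventionally “uninformative”, yet one answer is nearly double the other; that is the sense in which the dispute reduces to a choice of prior.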
I’m not pretending this is an airtight case. If you previously assumed that existential threats converge to zero as rationality increases, or that rationality is always the best policy, or that rationality means expectation maximization, and you now question one of those things, then you’ve gotten something out of it.
homung suggests that there may be immutable laws of the universe that mean there are only a finite number of apocalyptic technologies. Note that even if the probability of such technological limits is small, for Phil’s argument to work either that probability would have to be infinitesimal, or some of the doomsday devices would have to remain threatening even after the various attack/defense strategies reach a very mature level of development. All of the probabilities involved look finite to me.
No; that probability concerns a property of the universe, so it is a one-shot trial. It only has to come out false once, out of one trial.
So your thesis is not that rationality dooms civilization, but only that as far as we know, it might. I get it now.
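For what it’s worth, the structure of this exchange fits in a few lines of arithmetic. A rough sketch (my framing and numbers, not from the thread): if every future era carries a fixed, independent disaster probability, survival tends to zero; but any finite probability that the universe simply runs out of apocalyptic technologies puts a floor under long-run survival.

```python
def survival_if_risks_never_end(p: float, n: int) -> float:
    """Survival over n eras, each carrying independent disaster probability p."""
    return (1.0 - p) ** n

def long_run_survival_floor(q: float, p: float, k: int) -> float:
    """Lower bound on survival if, with probability q, only the first k
    eras carry risk (apocalyptic technologies run out after k eras)."""
    return q * (1.0 - p) ** k

# Hypothetical numbers: 1% disaster risk per era.
print(survival_if_risks_never_end(0.01, 10_000))  # ~2e-44: near-certain doom
print(long_run_survival_floor(0.1, 0.01, 50))     # ~0.06: doom far from certain
```

This is why the objection above needs the finite-technologies probability to be infinitesimal: any non-vanishing value of q blocks “doom is inevitable”, leaving only “doom is possible”.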
You talk as if you have presented a credible case—but you really haven’t.
Instead there is a fantasy about making black holes explode (references, please!), another fantasy about subjective time compression outstripping expansion, and a story about disasters triggering other disasters, which is true, but which falls a long way short of a credible argument that civilisation is likely to be wiped out once it has spread out a bit.
You have me there. We have not yet successfully detonated a black hole.
Small black holes are expected to eventually explode via Hawking radiation. Large black holes are expected to take longer than the expected life of the universe to evaporate to that point.
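For scale, a back-of-the-envelope sketch using the standard Hawking evaporation formula (the constants and numbers are mine, not from the thread):

```python
import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.055e-34   # reduced Planck constant, J s
C     = 2.998e8     # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
YEAR  = 3.156e7     # seconds per year

def evaporation_time_years(mass_kg: float) -> float:
    """Hawking evaporation time, t = 5120 * pi * G^2 * M^3 / (hbar * c^4)."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

print(f"{evaporation_time_years(M_SUN):.1e} years")  # ~2.1e67 years
# compare: roughly 1.4e10 years have elapsed since the Big Bang
```

So, absent new physics, even the smallest stellar-mass black hole outlasts the current age of the universe by dozens of orders of magnitude.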
Anyway, I’m not a physicist. It’s just a handwavy example that maybe there is some technology with solar-scale or galaxy-scale destructive power. When all the humans lived on one island, they didn’t imagine they could one day destroy the Earth.
Then the example is pointless. A weapon powerful enough to cause galaxy-wide extinction is a very big if. It’s unlikely one could exist, simply because of the massive distances between stars (see the rough numbers sketched after this reply).
Also, if you base your argument (or part of it, anyway) on such an event, it is equally fair to ask “what if not?”. And in the case of “if not” (which I imagine to be far more likely), the argument must end there.
Therefore, it is reasonable to assume that yes, we could outrun our own destructive tendencies.
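Here are the rough numbers behind the distance point above (a sketch using textbook figures, not from the thread): even an effect propagating at light speed needs millennia to reach the nearest star and on the order of 10^5 years to cross the galaxy.

```python
# Travel time at light speed, in years, equals the distance in light-years.
distances_ly = {
    "nearest star (Proxima Centauri)": 4.24,
    "across the Milky Way": 100_000,
    "to the Andromeda galaxy": 2_500_000,
}

for destination, d_ly in distances_ly.items():
    print(f"{destination}: {d_ly:,} ly -> at least {d_ly:,} years at light speed")
```

On those timescales, nothing wipes out a dispersed civilisation in a single stroke.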
As for the one-island analogy: at that point in our evolution we had no firm grasp on what “world” even meant, let alone a basic understanding of scale. Now we do. We also have a basic understanding of the universe, and a method for increasing that understanding (the ability to postulate theories, run experiments, and collect evidence). When all humans (more likely an ancestor species) were confined to a single geographic location, none of these things even existed as concepts. There are a few more problems with this comparison, but I’ll leave them alone for now, as nothing would be gained by drawing them out.
I wasn’t asking for references supporting the idea that we had detonated a black hole. It’s an incredible weapon, which seems to have a low probability of existing—based on what we know about physics. The black hole at the center of our galaxy is not going to go away any time soon.
Bizarre future speculations which defy the known laws of physics don’t add much to your case.