We could also add a-risks: the risk that human civilization will destroy alien life and alien civilizations. For example, an LHC-triggered false vacuum catastrophe or UFAI could dangerously affect the entire visible universe, killing an unknown number of alien civilizations or preventing them from ever existing.
Preventing risks to alien life is one of the main motivations behind the sterilization of Mars rovers and the deliberate plunging of Galileo and Cassini into Jupiter and Saturn at the end of their missions.
The flip side of this idea is “cosmic rescue missions” (a term coined by David Pearce), which refers to the hypothetical scenario in which human civilization helps to reduce the suffering of sentient extraterrestrials (in the original context, it referred to the use of technology to abolish suffering). Of course, this is more relevant for simple animal-like aliens and less so for advanced civilizations, which would presumably have already either implemented such technology or decided to reject it. Brian Tomasik argues that cosmic rescue missions are unlikely.
Also, there’s an argument that humanity conquering alien civilizations would only be considered bad if you assume that either (1) we have non-universalist-consequentialist reasons to believe that preventing alien civilizations from existing is bad, or (2) the alien civilization would produce greater universalist-consequentialist value than human civilization would with the same resources. If (2) is the case, then humanity should actually be willing to sacrifice itself to let the aliens take over (as in the “utility monster” thought experiment), assuming that universalist consequentialism is true. If neither (1) nor (2) holds, then human civilization would have greater value than the ET civilization. Seth Baum’s paper on universalist ethics and alien encounters goes into greater detail.
Thanks for the links. My thought was that we might assign higher negative utility to those x-risks that could also become a-risks, that is, the LHC and AI.
If you know the Russian science fiction of the Strugatsky brothers, it contains the idea of “Progressors”: people who are embedded in other civilizations to help them develop more quickly. In the end, the main character concluded that such actions violate the right of any civilization to determine its own way, and he returned to Earth to search for and stop possible alien Progressors here.
Oh, in those cases, the considerations I mentioned don’t apply. But I still thought they were worth mentioning.
In Star Trek, the Federation has a “Prime Directive” against interfering with the development of alien civilizations.
The main role of which is to figure in this recurring dialogue:
-- Captain, but the Prime Directive!
-- Screw it, we’re going in.
Iain Banks has similar themes in his books—e.g. Inversions. And generally speaking, in the Culture universe, the Special Circumstances are a meddlesome bunch.