The (short) case for predicting what Aliens value
The case
Most of what we care about in the future – i.e., (dis)value – comes, in expectation, from futures where humanity develops artificial general intelligence (AGI) and colonizes many other stars (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016).
Hanson (2021) and Cook (2022) estimate that we should expect to eventually “meet” (grabby) alien AGIs/civilizations – just AGIs, from here on – if humanity expands, and that our corner of the universe will eventually be colonized by aliens if humanity doesn’t expand.
This raises the following three crucial questions:
What would happen if and when our respective AGIs meet? Values handshakes (i.e., cooperation) or conflict, and of what forms?
Do we have good reasons to think the scenario where our corner of the universe is colonized by humanity is better than the one where it is colonized by aliens? Should we update on the importance of reducing existential risks?[1]
Given that aliens might fill our corner of the universe with things we (dis)value, does humanity have an (inter-civilizational) comparative advantage in focusing on something the grabby aliens will neglect?
The answers to these three questions depend heavily on the values we expect the grabby aliens our AGI will meet to have. For instance, if we expect grabby alien AGIs to care more about suffering than our AGI does, then an AGI conflict that generates significant suffering is relatively unlikely, and the importance of reducing X-risks depends on whether you prefer the aliens' degree of concern for suffering or our AGI's.
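To make the second question concrete, here is a minimal toy sketch of the expected-value comparison it points at. All names and numbers below are illustrative assumptions of mine, not estimates from the literature; the only input taken from above is that, per Hanson (2021) and Cook (2022), our corner gets colonized by aliens anyway if humanity doesn't expand.

```python
# Toy expected-value sketch (illustrative assumptions only).
# v_human: value, by our lights, of our corner of the universe if colonized by our AGI.
# v_alien: value, by our lights, of that same corner if colonized by grabby alien AGIs.
# p_alien: probability that aliens eventually colonize our corner if humanity doesn't
#          expand (close to 1 under the Hanson/Cook estimates cited above).

def value_of_humanity_expanding(v_human: float, v_alien: float, p_alien: float = 1.0) -> float:
    """Expected gain from our corner being colonized by humanity rather than left to aliens."""
    counterfactual = p_alien * v_alien  # what our corner ends up as if we don't expand
    return v_human - counterfactual

# If aliens would do nearly as well by our lights, reducing X-risks matters less than
# under the naive "humanity vs. an empty universe" comparison...
print(f"{value_of_humanity_expanding(v_human=1.0, v_alien=0.8):.2f}")   # 0.20
# ...and if they would fill our corner with disvalue, it matters more.
print(f"{value_of_humanity_expanding(v_human=1.0, v_alien=-0.5):.2f}")  # 1.50
```

The sign and magnitude of v_alien, which is exactly what Alien Values Research would try to pin down, does most of the work in this comparison.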
Therefore, figuring out what aliens value (or Alien Values[2] Research) appears quite important,[3] although absolutely no one is working on it[4] as far as I know.
Is it because it isn’t tractable? Although I see why it might seem so, I don’t think it is. First, thinking about the values of grabby aliens doesn’t strike me as harder than modeling their spread (see, e.g., Hanson 2021 and Cook 2022 for work on the latter). My EA Forum sequence What values will control the Future? is an example of how simple observations and reasoning can significantly narrow down the range of values we should expect grabby aliens to have. Second, there seems to be – outside of the Effective Altruism sphere – a whole field of research focused on the evolution of aliens (most of which I’m not yet familiar with), and it already offers some quite interesting takeaways (see, e.g., Kershenbaum 2020; Todd and Miller 2017). Although the moral preferences of aliens are by no means its focus so far, this is evidence that figuring things out about aliens is feasible, and there may even be potential for making Alien Values Research part of these researchers’ alien-related agendas.
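As one illustration of the kind of simple reasoning I have in mind for the first point, here is a toy simulation of a selection effect, loosely in the spirit of Hanson (1998) on selection during interstellar colonization (listed in the Appendix). Everything in it is a made-up modeling assumption; only the qualitative point matters.

```python
import random

# Toy selection-effect simulation: grabby civilizations are not a random sample of
# all civilizations, because how much a civilization values expansion affects how
# likely its colonization front is to ever reach us.
random.seed(0)

# Each civilization gets an "expansionism" trait drawn uniformly from [0, 1].
civs = [random.random() for _ in range(100_000)]

# Crudely assume the chance that a civilization's expansion front reaches our corner
# of the universe is proportional to its expansionism.
met = [c for c in civs if random.random() < c]

print(f"mean expansionism of civilizations that arise:    {sum(civs) / len(civs):.2f}")  # ~0.50
print(f"mean expansionism of civilizations we would meet: {sum(met) / len(met):.2f}")    # ~0.67
```

This obviously doesn’t tell us what grabby aliens value in any detail, but it shows how a basic observation (we only meet the civilizations that expand) already shifts the distribution of values we should expect to encounter.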
Acknowledgment
Thanks to Elias Schmied for their helpful comments on a draft. All assumptions/claims/omissions are my own.
Appendix: Relevant work
(This list is not exhaustive.[5] It is roughly ranked in decreasing order of relevance.)
Robin Hanson (1998) Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization
on selection effects during space colonization
Charlie Guttman (2022) Alien Counterfactuals (and comments)
on the importance/tractability of this topic for assessing the case for reducing X-risks.
The first paragraph of What if human colonization is more humane than ET colonization? in Tomasik (2013)
Whether (post-)humans colonizing space is good or bad, space colonization by other agents seems worse in Brauner and Grosse-Haltz (2018)
Cosmic rescues and comments in DiGiovanni (2021)
argues against Brauner and Grosse-Haltz’s (2018) claim.
Anders Sandberg (2022) Game Theory with Aliens on the Largest Scales
some successful cooperation stories between civilizations with orthogonal values
Rational Animations (2022) Could a single alien message destroy us?
a bargaining game scenario between different civilizations that turns straight into conflict before any form of actual bargaining takes place
Non-causal motivations for thinking about the values of aliens and a few thoughts on how to do it.
Andrew Critch (2023) Acausal normalcy
Caspar Oesterheld (2017) Multiverse-wide Cooperation via Correlated Decision Making (section 3.3 and 3.4)
A few relevant questions in Michael Aird’s (2020) Crucial questions for longtermists
’How “bad” would the future be, if an existential catastrophe occurs? How does this differ between different existential catastrophes?
How likely is future evolution of moral agents or patients on Earth, conditional on (various different types of) existential catastrophe? How valuable would that future be?
How likely is it that our observable universe contains extraterrestrial intelligence (ETI)? How valuable would a future influenced by them rather than us be?’
Resources on modeling the spread of grabby aliens.
Robin Hanson et al. (2021) If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare
Tristan Cook (2022) Replicating and extending the grabby aliens model (and references)
Resources that don’t focus on the values of aliens but on relevant evolutionary dynamics.
Arik Kershenbaum (2020) The Zoologist’s Guide to the Galaxy. What Animals on Earth Reveal about Aliens – and Ourselves (and references)
Peter M. Todd & Geoffrey F. Miller (2017) The Evolutionary Psychology of Extraterrestrial Intelligence: Are There Universal Adaptations in Search, Aversion, and Signaling? (and references)
See also the Appendix in Buhler (2023).
Footnotes
[1] Charlie Guttman (2022) and Michael Aird (2020) ask questions very similar to this second one.
[2] “Alien values” here literally means “the values of aliens”, not “values that look alien to us” as in this confusing LessWrong tag.
[3] Besides helping us answer the above questions, it might also give us useful insights regarding the future of human evolution and what our successors might value (see Buhler 2023). Robin Hanson makes a similar point around the beginning of this interview.
[4] The Appendix lists a few pieces that raised relevant considerations, however.
[5] And this is more because of my limited knowledge than due to an intent to keep this list short, so please send me other potentially relevant resources!