I tried to avoid bloating this post; Habermacher (2020) contains a bit more detail on the proposed chain AI → popularity/plausibility of illusionism → heightened egoism, and makes a few more links to the literature. It also provides (a bit more wildly) speculations about related resolutions of the Fermi paradox (no claim that these are really pertinent; call them musings rather than speculations if you want):
One is largely in line with what @green_leaf suggests (and largely with Alenian’s fate in the story): with the illusionism that comes with our development of, and knowledge about, advanced AI, we kill ourselves (or bomb ourselves back to the stone age) even before we can build smarter-than-us, independently evolving AGI.
A second one: without the illusion (and the related upholding of altruism), we cannot even develop high intelligence without becoming too lethal to one another to sustain peaceful co-living & collaboration. Hence, advanced intelligence is even less likely than one might otherwise think, as more ‘basic’ creatures that become more intelligent (without such an illusion) cannot collaborate so well; they are too dangerous to each other!
There is some link here to the claim in parts of evolutionary biology that broad (non-kin) altruism is itself, in many environments, not evolutionarily stable; but perhaps independently of that, one can ask: what happens to a species that had generally altruistic instincts, but which evolves to be highly dominated by an abstract mind that is able to put all sorts of instincts into question, and that might then also put its own altruistic instinct into perspective, unless there is something very special directly telling its abstract mind that kindness is important...
Afaik, in most species an individual cannot effortlessly kill a peer (?); humans (spears etc.) arguably can. Without genuine mutual kindness, i.e. in a tribe of rather psychopathic peers, it would often have been particularly unpleasant to fall asleep as a human.
Admittedly, this entire theory would help resolve the Fermi paradox mostly on a rather abstract level: conditional on the observation that we did evolve to be intelligent in due time despite the point made here, the probability of advanced intelligence evolving on other planets need not be affected by this reflection.