I know this may come across as sociopathically cold and calculating, but given that post-singularity civilisation could be at least thirty orders of magnitude larger than current civilisation, I don’t really think short term EA makes sense. I’m surprised that the EA and existential risk efforts seem to be correlated, since logically it seems to me that they should be anti-correlated.
And if the response is that future civilisation is ‘far’ in the overcoming bias sense, well, so are starving children in Africa.
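One way to see where a figure like ‘thirty orders of magnitude’ could come from is a back-of-the-envelope energy comparison in the style of Bostrom’s astronomical-waste argument. The sketch below is only illustrative: the star count, per-star luminosity, and current global power use are my own order-of-magnitude assumptions, not numbers taken from this thread.

```python
import math

# Rough, hedged sketch: compare the energy throughput of a civilisation
# harvesting most stars in the observable universe with humanity's today.
# Every constant here is an order-of-magnitude assumption, not a thread value.

stars_in_observable_universe = 1e22   # assumed: ~10^22 stars (estimates range up to 10^24)
avg_stellar_power_watts = 4e26        # assumed: roughly one solar luminosity per star
current_human_power_watts = 2e13      # assumed: ~20 TW of global human energy use

future_power = stars_in_observable_universe * avg_stellar_power_watts
ratio = future_power / current_human_power_watts

print(f"Energy-throughput ratio: ~10^{round(math.log10(ratio))}")
# Roughly 10^35 -- comfortably 'at least thirty orders of magnitude',
# before counting any gains from more efficient computation.
```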
It doesn’t come across as sociopathically cold and calculating to me, though it may to others. Some people who have never encountered effective altruism or Less Wrong might think you sociopathic, but most people aren’t reflective enough to work out whether they care more about the overwhelming magnitude of future civilizations or about starving children far away, so what most others signal and believe about their own values doesn’t cash out in consequences much different from yours. The capacity to care about so many faraway people seems difficult to maintain all the time, mostly because carrying that much empathy at the forefront of your mind constantly would be overwhelming. Saying so about particular real people might seem sociopathic no matter who says it.
Anyway, it confused me at first why existential risk reduction is correlated with effective altruism. Effective altruism is a common banner which promotes values shared by existential risk reduction and Less Wrong, such as reflective thinking, evidence-based evaluation, and far-mode preferences for helping others across time and space. I think the x-risk reduction community chooses to go along with effective altruism because it puts them in a strong enough position to attract more capital: financial capital, human capital, relevant expertise, etc.
While x-risk may only get a small slice of the pie that is effective altruism, as effective altruism grows, so does the absolute amount of support x-risk reduction receives. Also, the common impression is that effective altruists are talented and reflective folk to begin with, so if one can convert their concern for poverty reduction and global health into concern for existential risk reduction, it helps. Further, cause areas which would otherwise be at odds with each other accept one another within effective altruism because they all gain from cooperation. For example, such efforts are coordinated by the Centre for Effective Altruism, which leads to everyone under the ‘EA’ banner receiving more attention.
Meanwhile, the existential risk reduction community doesn’t look worse by associating with effective altruism, even if it will always be a smaller part of it than poverty reduction. It’s not as though associating with effective altruism costs the cause of x-risk reduction so much that it ends up a smaller or weaker movement. Aside from the coverage of the Future of Humanity Institute’s publications like Superintelligence by Nick Bostrom (and its consequences, like Elon Musk’s support), effective altruism might be boosting the profile of x-risk more than anything else.
The attitude you express towards short-term effective altruism, given the magnitude and importance of post-Singularity civilization, is one I’ve seen expressed by people, some from Less Wrong, within or adjacent to the effective altruist community. I think these disagreements and sentiments don’t surface much in central or mainstream coverage of effective altruism because they would look bad and be confusing to the public.
Proponents of both have the same attitude of “this is a thing that people occasionally give lip service to, that we’re going to follow to a more logical conclusion and actually act on”.
This just strikes me as another Pascal’s mugging.
Disagree, because the probability of this happening is significant. I would rate it as >80%, conditional on us not destroying ourselves.
I’d say well over 80%. The probability of the whole of humanity deciding to stop technological development, and actually successfully coordinating this, is minimal. Even if the human mind cannot be run on a classical computer, we would still tile the universe with quantum computronium.
You people sound awfully sure about the far-off future. How well do you think an educated Egyptian from, say, 2000 BC would have fared at predicting the future path of his society?
Was there any noticeable technological progress back in 2000 BC?
Looking at science fiction from the 19th century, aerial warfare, armoured land warfare, and space exploration were all predicted. The details were all wrong, and I doubt we can predict the details of the future with any great accuracy. But the general theme of humanity expanding across the universe seems a safe extrapolation, even if I don’t know whether the starships will be beam riders, ramscoops, wormhole navigators, Alcubierre drives, or some other technology that has not yet been conceived.
Shitloads. Empires rose and fell as they obsoleted each other’s military technologies, architecture evolved tremendously, crop plants diversified and became more nutritious, extractive farming techniques gave way to ones that preserved the fertility of the soil rather than strip-mining it, new naval technology was partially responsible for the Late Bronze Age collapse… (yes, I’m aware these examples skew towards 1000 BC)
What makes you think that in 4000 years people will think there was noticeable technological progress in the 21st century?
Actually, no, if the limit of the speed of light holds, either there won’t be much expansion or the result of the expansion won’t be very human.
Fairly well for the next 3000 years since not a lot changed.
And yet I feel you don’t want to follow that example of success :-P
Well, for starters, his descendants would no longer be ruled by someone (purporting to be) a living incarnation of the sun god, something he would no doubt consider extremely shocking.
So we have gone from worshiping the sun god to worshiping the son of god.
Nice pun. Now do you have a serious response?
The life of a typical Egyptian didn’t much change from 2000 BC to 1000 AD. And for most of this time the leaders claimed to have a strong connection or endorsement from the divine. An educated Egyptian living in 2000 BC would be aware of the diversity of religion in the world and would probably expect that over the next 3000 years religious practices would change in form in his country.
Are you joking?
No, the life of the average human didn’t much change from 2000 BC to 1000 AD.
If not for the Fermi paradox, I would agree.
Good point! I would have thought the Great Filter probably lies in our past, most likely at the origin of life or perhaps of multicellular life, but the Fermi paradox is still evidence against space colonisation.
It’s also unfortunately a distinctly uninformative piece of evidence about anything but space colonization and exponential expansion. All it tells us is that nothing self-replicates across the galaxy to a scale we could see in sheer infrared emissions or truly ridiculous levels of active attempts to be visible. There are so many orders of magnitude and divergent possibilities of things that could exist that we simply wouldn’t know about right now given the observations we have made.