Occasionally I find myself nostalgic for the old, optimistic transhumanism of which e.g. this 2006 article is a good example. After some people argued that radical life extension would increase our population too much, the author countered that oh, that’s not an issue, here are some calculations showing that our planet could support a population of 100 billion with ease!
In those days, the ethos seemed to be something like… first, let’s apply a straightforward engineering approach to eliminating aging, so that nobody who’s alive needs to worry about dying from old age. Then let’s get nanotechnology and molecular manufacturing to eliminate scarcity and environmental problems. Then let’s re-engineer the biosphere and human psychology for maximum well-being, such as by using genetic engineering to eliminate suffering and/or making it a violation of the laws of physics to try to harm or coerce someone.
So something like “let’s fix the most urgent pressing problems and stabilize the world, then let’s turn into a utopia”. X-risk was on the radar, but the prevailing mindset seemed to be something like “oh, x-risk? yeah, we need to get to that too”.
That whole mindset used to feel really nice. Alas, these days it feels like it was mostly wishful thinking. I haven’t really seen that spirit in a long time; the thing that passes for optimism these days is “Moloch hasn’t entirely won (yet)”. If “overpopulation? no problem!” felt like a prototypical article to pick from the Old Optimistic Era, then Today’s Era feels more described by Inadequate Equilibria and a post saying “if you can afford it, consider quitting your job now so that you can help create aligned AI before someone else creates unaligned AI and kills us all”.
Today’s philosophy seems more like “let’s try to ensure that things won’t be quite as horrible as they are today, and if we work really hard and put all of our effort into it, there’s a chance that maybe we and all of our children won’t die.” Most of the world-saving energy seems to have gone into effective altruism, where people work on issues like making the US prison system suck less or distributing bednets to fight malaria. (Causes that I thoroughly support, to be clear, but also ones where the level of ambition seems quite a bit lower than in “let’s make it a violation of the laws of physics to try to harm people”.)
I can’t exactly complain about this. Litany of Tarski and all: if the Old Optimistic Era was hopelessly naive and over-optimistic, then I wish to believe that it was hopelessly naive and over-optimistic, and believe in the more realistic predictions instead. And it’s not clear that the old optimism ever actually achieved much of anything in the way of its grandiose goals, whereas more “grounded” organizations such as GiveWell have achieved quite a lot.
But it still feels like there’s something valuable that we’ve lost.
For what it’s worth, I get the sense that the Oxford EA research community is pretty optimistic about the future, but generally seems to believe that the risks are simply the more pragmatic thing to pay attention to.
Anders Sandberg is doing work on the potential of humans (or related entities) expanding through the universe. The phrase “Cosmic Endowment” comes up here and there. Stuart Armstrong recently created a calendar for the year 12020.
I personally have a very hard time imagining exactly what things will be like post-AGI, or what we could come up with now that would make them better, conditional on things going well. It seems like future research could figure a lot of those details out. But I’m in some ways incredibly optimistic about the future. This model gives a very positive result, though not a very specific one.
I think my personal view is something like, “Things seem super high-EV. In many ways, we as a species seem to be in a highly opportune position. Let’s generally try to be as careful as possible to make sure we don’t mess up.”
Note that high-EV does not mean high-probability. It could be that we have a 0.1% chance of surviving, as a species, but if we do, there would be many orders of magnitude net benefit. I use this not because I believe we have a 0.1% chance, but rather because I think it’s a pretty reasonable lower bound.
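To make the expected-value intuition above concrete, here is a toy calculation. The numbers other than the 0.1% figure are purely illustrative assumptions of mine, not estimates from the text; “many orders of magnitude” is stood in for by an arbitrary factor of a billion:

```python
# Toy expected-value calculation for the "low probability, huge payoff" argument.
# Only p_survive comes from the text; the other numbers are illustrative assumptions.

p_survive = 0.001          # the 0.1% lower-bound survival chance mentioned above
baseline_value = 1.0       # value of the status-quo future, in arbitrary units
utopia_multiplier = 1e9    # stand-in for "many orders of magnitude" more value

ev_good_outcome = p_survive * baseline_value * utopia_multiplier
print(ev_good_outcome)     # a million times the baseline, despite the 0.1% odds
```

The point of the sketch is just that when the payoff is large enough, even a very small survival probability leaves the expected value far above the status quo, which is why high-EV and high-probability can come apart.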
I think that although the new outlook is more pessimistic, it is also more uncertain. So, yes, maybe we will become extinct, but maybe we will build a utopia.
It likely reflects a broader, general trend towards pessimism in our culture. Futurism was similarly pessimistic in the 1970s, and turned more generally optimistic in the 1980s. Right now we’re in a pessimistic period, but if the broader zeitgeist turns optimistic again, we can probably expect futurism to follow.