[Question] Pondering how good or bad things will be in the AGI future

Yesterday I heard a podcast where someone said he hoped AGI would be developed in his lifetime. This confused me, and I realized that it might be useful—at least for me—to write down this confusion.

Suppose that, for some reason (different history, different natural laws, whatever), LLMs had never been invented, the AI winter had lasted forever, and AGI were simply impossible. Progress would still have been possible in this hypothetical world, just without anything resembling what is called AI today or in the real-world future.

Such a world seems enjoyable. It is plausible that technological and political progress might eventually get it to fulfil all the Sustainable Development Goals. Yes, death would still exist (though people might live much longer than they currently do). Yes, existential risks to humanity would still exist, although they might be smaller and hopefully kept in check. Yes, sadness and other bad feelings would still exist. Mental health might fare very well in the long term (though perhaps poorly in the short term, due to smartphones or whatever). Overall, if I had to choose between living in the 2010s and not living at all, I think the 2010s were the much better choice, as were the 2000s and the 1990s (at least for the average person in my area). And the hypothetical 2010s (or hypothetical 2024) without AGI could still develop into something better.

But what about the actual future?

It seems very likely that AI progress will continue. Median respondents to the 2023 Expert Survey on Progress in AI “put 5% or more on advanced AI leading to human extinction or similar, and a third to a half of participants gave 10% or more”. Some people seem to think that the extinction event expected with those roughly 5% in the AI-catastrophe case would be very fast, perhaps too fast for people to even realize what is happening. I do not know why that should be the case; a protracted and very unpleasant catastrophe seems at least as likely (conditional on extinction happening at all). So those 5% do not seem negligible.[1]

Well, at least in 19 out of 20 possible worlds everything goes extremely well, because then we have a benevolent AGI, right?

That’s not clear, because an AGI future seems hard to imagine in the first place. It seems so hard to imagine that, while I’ve read a lot about what could go wrong, I haven’t yet found a concrete scenario for a future with AGI that strikes me as both likely and promising.

Sure, it seems that everybody should look forward to a world without suffering, but when I read such scenarios, they do not feel like real possibilities; they feel like fantasies. A fantasy does not have to obey real-world constraints, and those constraints include not only physical limitations but also all the details of how people find meaning, how they interact, and how they feel as they spend their days.

It is unclear how we would spend our days in the AGI future, it is not guaranteed that “no one is left behind”, and it seems impossible to prepare. AI companies do not offer a clear vision of where we are heading, and journalists are not asking them, because they seem to assume that creating AGI is just a normal way of making money.

Do I hope that AGI will be developed during my lifetime? No, and maybe you feel similarly reluctant, but nobody is asking for your permission anyway. So if you can say something to make the other 95% of the probability mass look good, I would of course appreciate it. How do you prepare? What do you expect your typical day to be like in 2050?

  1. ^

    Of course, there are more extinction risks than just AI. In 2020, Toby Ord estimated “a 1 in 6 total risk of existential catastrophe occurring in the next century”.