I agree that people will care more if their decisions clearly matter in producing that future.
This isn’t easy to apply to the AGI situation, because which actions will help which outcomes is quite unclear and vigorously argued. Serious thinkers argue both for trying to slow down (PauseAI) and for defensive acceleration (Buterin, Aschenbrenner, etc.). And it’s further complicated by the fact that many of us think accelerating will probably produce a better world for a few years, and then shortly after that, humanity is dead or sadly obsolete. This pits short-term concerns directly against long-term ones.
I very much agree that helping people imagine either a very good or a very bad future will cause them to care more about it. I think that’s been established pretty thoroughly in the decision-making empirical literature.
Here I’m reluctant to say more than “futures so good they’re difficult to imagine,” since my actual predictions sound like batshit-crazy scifi to most people right now. Sometimes I say things like people won’t have to work and global warming will be easy to solve; then people fret about what they’d do with their time if they didn’t have to work. I’ve also tried talking about dramatic health extension, to which people question how much longer they’d want to live anyway (except old people, who never do; ironically, they’re exactly the ones who probably won’t benefit from AGI-designed life extension).
Those are all specific points of agreement with your take that really good outcomes are hard for modern humans to conceive.
I agree that describing good futures is worth some more careful thinking.
One thought is that it might be easier for most folks to imagine a possible dystopian outcome, in which humans aren’t wiped out but are made obsolete and simply starve when they can’t compete with AI on wages for any job. I don’t think that’s the likeliest catastrophe, but it seems possible and might be a good point of focus.
Serious thinkers argue for both trying to slow down (PauseAI), and for defensive acceleration (Buterin, Aschenbrenner, etc)
Yeah, I’m in both camps. We should do our absolute best to slow down how quickly we approach building agents, and one way is leveraging AI that doesn’t rely on being agentic. It offers a way to do something like global compute monitoring, and it could also relieve some of the short-term incentives to build agents by offering a safer avenue to the same benefits. Insofar as a global moratorium stopping all large model research is feasible, we should probably just do that.
then people fret about what they’d do with their time if they didn’t have to work
It feels like there’s a missing genre of slice-of-life stories about people living in utopias. Arguably there are some works in related genres, though they might be weird to use for convincing people.
One thought is that it might be easier for most folks to imagine a possible dystopian outcome
The tale could have two topias: one where it was the best of times, another where it was the worst of times, with the distance to either made more palpable by the contrast, the differences following from different decisions made at the outset, and possibly many of the same characters. This seems like a sensible thing for somebody to do; I can point to being personally better calibrated from thinking along those lines.
All good points.