The problem with that, and with many arguments for caution, is that people usually barely care about possibilities even twenty years out.
It seems better to ask what people would do if they had more tangible options, such that they could reach a reflective equilibrium which explicitly endorses particular tradeoffs. People mostly end up not caring about possibilities twenty years out because they don't see how their current options constrain what happens in twenty years. This points to not treating their surface preferences as central insofar as those preferences don't follow from a reflective equilibrium informed by knowledge of all their available options. If one knows one's principal can't get that opportunity, one still has a responsibility to act on what the principal's preferences would point to given more of the context.
Most people don’t care that much about logical consistency
They would care more about logical consistency if they knew more about its implications.
If we're asking people to imagine a big empty future full of vague possibility, it's not surprising that they're ambivalent about long-termism. Describe an actual hard-for-humans-to-conceive-of-in-the-first-place utopia and how it depends on their coordination; show them the joy and depth of each life that follows, the way things like going on an adventure are taken to a transcendent level, and the preferences they already had will plausibly lead them to adopt a more long-termist stance. On the surface, how much people care tracks how tangible the options are.
The problem is demonstrating that good outcomes are gated by what we do, and that those good outcomes are actually really good in a way hard for modern humans to conceive.
All good points. I agree that people will care more if their decisions clearly matter in producing that future.
This isn't easy to apply to the AGI situation, because what actions will help which outcomes is quite unclear and vigorously argued. Serious thinkers argue for both trying to slow down (PauseAI), and for defensive acceleration (Buterin, Aschenbrenner, etc). And it's further complicated in that many of us think that accelerating will probably produce a better world in a few years, and then, shortly after that, humanity is dead or sadly obsolete. This pits short-term concerns directly against long-term ones.
I very much agree that helping people imagine either a very good or a very bad future will cause them to care more about it. I think that’s been established pretty thoroughly in the decision-making empirical literature.
Here I'm reluctant to say more than "futures so good they're difficult to imagine," since my actual predictions sound like batshit-crazy scifi to most people right now. Sometimes I say things like people won't have to work and global warming will be easy to solve; then people fret about what they'd do with their time if they didn't have to work. I've also tried talking about dramatic health extension, to which people question how much longer they'd want to live anyway (except old people, who never do; ironically, they're exactly the ones who probably won't benefit from AGI-designed life extension).
Those are all specific points in agreement with your take that really good outcomes are hard for modern humans to conceive.
I agree that describing good futures is worth some more careful thinking.
One thought is that it might be easier for most folks to imagine a possible dystopian outcome, in which humans aren't wiped out but made obsolete, and simply starve to death when they can't compete with AI on wages for any job. I don't think that's the likeliest catastrophe, but it seems possible and might be a good point of focus.
Serious thinkers argue for both trying to slow down (PauseAI), and for defensive acceleration (Buterin, Aschenbrenner, etc)
Yeah, I'm in both camps. We should do our absolute best to slow down how quickly we approach building agents, and one way is leveraging AI that doesn't rely on being agentic. It offers a way to do something like global compute monitoring, and could also relieve the short-term incentives that building agents would otherwise satisfy, by offering a safer avenue to the same ends. Insofar as a global moratorium stopping all large-model research is feasible, we should probably just do that.
then people fret about what they’d do with their time if they didn’t have to work
It feels like there's a missing genre of slice-of-life stories about people living in utopias. Arguably there are some works in adjacent genres, though they might be weird to use for convincing people.
One thought is that it might be easier for most folks to imagine a possible dystopian outcome
The tale could have two topias: one where it was the best of times, another where it was the worst of times, the distance to either made more palpable by the contrast, with the differences following from different decisions made at the outset, and possibly using many of the same characters. This seems like a sensible thing for somebody to do; I can point to being personally better calibrated from thinking along those lines.