On a meta level, I think there’s a difference in “model style” between your comment, some of which seems to treat future advances as a grab-bag of desirable things, and our post, which tries to talk more about the general “gears” that might drive the future world and its goodness. There will be a real shift in how progress happens when humans are no longer in the loop, as we argue in this section. Coordination costs going down will be important for the entire economy, as we argue here (though we don’t discuss things as galaxy-brained as e.g. Wei Dai’s related post). The question of whether humans are happy self-actualising without unbounded adversity cuts across every specific cool thing that we might get to do in the glorious transhumanist utopia.
Thinking about the general gears here matters. First, because they’re, well, general (e.g. if humans were not happy self-actualising without unbounded adversity, suddenly the entire glorious transhumanist utopia seems less promising). Second, because I expect that incentives, feedback loops, resources, etc. will continue mattering. The world today is much wealthier and better off than before industrialisation, but the incentives / economics / politics / structures of the industrial world let you predict its effects better than if you just modelled it as “everything gets better” (even though that actually is a very good 3-word summary). Of course, all the things that directly make industrialisation good really are a grab-bag list of desirable things (antibiotics! birth control! LessWrong!). But there’s structure behind that which is good to understand (mechanisation! economies of scale! science!). A lot of our post is meant to have the vibe of “here are some structural considerations, with near-future examples”, and less “here is the list of concrete things we’ll end up with”. Honestly, a lot of the reason we didn’t do the latter more is because it’s hard.
Your last paragraph, though, is very much in this more gears-level-y style, and a good point. It reminds me of Eliezer Yudkowsky’s recent mini-essay on scarcity.