I think I agree with all of this.
(Except maybe I’d emphasise the command economy possibility slightly less. And compared to what I understand of your ranking, I’d rank competition between different AGIs/AGI-using factions as a relatively more important factor in determining what happens, and values put into AGIs as a relatively less important factor. I think these are both downstream of you expecting slightly-to-somewhat more singleton-like scenarios than I do?)
EDIT: see here for more detail on my take on Daniel’s takes.
Overall, the main point I’d emphasize from my post is: AI-caused shifts in the incentives/leverage of human vs. non-human factors of production, and this mattering because the interests of power will become less aligned with humans while power simultaneously becomes more entrenched and effective. I’m not really interested in whether someone should save money for AGI or not. I think starting off with “money won’t matter post-AGI” was probably a confusing and misleading move on my part.
OK, cool, thanks for clarifying. Seems we were talking past each other then, if you weren’t trying to defend the strategy of saving money to spend after AGI. Cheers!
I see the command economy point as downstream of a broader trend: as technology accelerates, negative public externalities will increasingly scale and present irreversible threats (x-risks, but also more mundane pollution, errant bio-engineered plagues, etc.). If we condition on our continued existence, there must have been some solution to this, which would look like either greater government intervention (a command economy) or a radical upgrade to the coordination mechanisms in our capitalist system. Relevant to your power-entrenchment claim: both of these outcomes involve curtailing the power exerted by private individuals with large piles of capital.
(Note: there are certainly other possible reasons to expect a command economy, and I don’t know which were particularly compelling to Daniel.)