Glad you enjoyed it!
Could you elaborate on your last paragraph? Presuming a state overrides its economic incentives (ie establishes a robust post-AGI welfare system), I’d like to see how you think the selection pressures would take hold.
For what it’s worth, I don’t think “utopian communism” and/or a world without human agency are good outcomes. I concur with Rudolf entirely here: those outcomes lack agency, which has so far been a core part of the human experience. I want dynamism to exist, though I’m still working out if/how I think we could achieve that. I’ll save that for a future post.
My claim is that the incentives AGI creates are quite similar to those behind the resource curse, not that AGI would literally behave like a resource. But:
My default is that powerful actors will do their best to build systems that do what they ask them to do (ie they will not pursue aligning systems with human values).
The field points towards this: alignment efforts are primarily focused on controlling systems. I don’t think this is inherently a bad thing, but it results in the incentives I’m concerned about. I’ve not seen great work on defining human values, creating a value set a system could follow, and ensuring the system follows it in a way that couldn’t be overridden by its creators. Anthropic’s Constitutional AI may be a counter-example.
The incentives point towards this as well. A system that is aligned to refuse efforts that could lead to resource/power/capital concentration would be difficult to sell to the corporations most likely to pursue that concentration.
These definitions (here, here, and here) are roughly what I am describing as intent alignment.