I continue to think that, in worlds where we robustly survive, money is largely going to be obsolete. The terminal values of the kind of (handshake of) utility functions we can expect probably aren't maximized by maintaining current allocations of wealth and the institutions-that-care-about-that-wealth. The use for money/investment/resources is making sure we get utopia in the first place, by slowing capabilities and solving alignment (and thus also plausibly purchasing shares of the LDT utility function handshake), not being rich in utopia. (maybe see also 1, 2)
What if we survive without building a global utility maximizer god?
I think it’s exceedingly unlikely (<1%) that we robustly prevent anyone from {building an AI that kills everyone} without an aligned sovereign.