what i mean is that despite the fundamental scarcity of negentropy-until-heat-death, aligned superintelligent AI will be able to better allocate resources than any human-designed system. i expect that people will still be able to “play at money” if they want, but pre-singularity allocations of wealth/connections are unlikely to be relevant to what maximizes nice-things utility.
it’s entirely useless to enter the post-AGI era with either wealth or wealthy connections. in fact, it’s a waste not to have spent that wealth on increasing-the-probability-that-AGI-goes-well while money was still meaningful.
aligned superintelligent AI will be able to better allocate resources than any human-designed system.
Sure, but allocate to what end? Somebody gets to decide the goal, and you get more say if you have money than if you don’t. Same as in all of history, really.
As a concrete example, if you want to do something with the GPT-4 API, it costs money. When someday there’s an AGI API, it’ll cost money too.
the GPT-4 API has not taken over the world. there is a singular-point-in-time at which some AI will take over everything with a particular utility function and, if AI goes well, create utopia.
Sure, but allocate to what end?
whatever utility function it’s been launched with. which isn’t particularly representative of who currently has money. it’s not somebody who decides resource-allocation-in-the-post-singularity-future, it’s some-utility-function, and the utility function is picked by whoever built the thing, and they’re unlikely to type a utility function saying “people should have control over the future proportional to their current allocation of wealth”. they’re a lot more likely to type something like “make a world that people would describe as good under CEV”.
It’s true that if the transition to the AGI era involves some sort of 1917-Russian-revolution-esque teardown of existing forms of social organization to impose a utopian ideology, pre-existing property isn’t going to help much.
Unless you’re all-in on such a scenario, though, it’s still worth preparing for other scenarios too. And I don’t think it makes sense to be all-in on a scenario that many people (including me) would consider to be a bad outcome.