Space has resources people don’t own. The earth’s mantle, tens of kilometres down, potentially has resources people don’t own. More to the point, maybe: I don’t think humans will be able to continue enforcing laws barring a hostile takeover in the way you seem to think.
Imagine we find out that aliens are headed for earth and will arrive in a few years. Just from the light emissions their probes and expanding civilisation give off, we can infer that they’re obviously more technologically mature than us, probably already engineered themselves to be much smarter than us, and can basically do whatever they want with the atoms that make up our solar system; there’s nothing we can do about it. We don’t know what they want yet, though. Maybe they’re friendly?
I think guessing that the aliens will be friendly and will share human morality to some extent is already a pretty specific guess about their minds to be making, and is more likely false than not. But guessing that they don’t care about human preferences or well-being but do care about human legal structures, that they won’t help you or gift you things at all, won’t disassemble you and your property for their atoms[1], but will try to buy atoms from those to whom the atoms belong according to human legal records, now that strikes me as a really, really, really specific guess to be making, and one that is very likely false.
Superintelligent AGIs don’t start out with giant space infrastructure, but qualitatively, I think they’d very quickly overshadow the collective power of humanity in a similar manner. They can see paths through the future to accomplish their goals much better than we can, routing around our attempts to oppose them. The force that backs up our laws does not bind them. If you somehow managed to align them, they might want to follow some of our laws, because they care about them. But if someone managed to make them care about the legal system, they probably also managed to make them care about your well-being. Few humans, I think, when choosing what to align their AGI with, would pick values that care about the rule of law but not at all about other humans’ welfare. That’s not a kind of value system that shows up in humans much.
So in that scenario, you don’t need a legal claim to part of the pre-existing economy to benefit from the superintelligences’ labours. They will gift some of their labour to you. Say the current value of the world economy is x, owned by humans roughly in proportion to how much money they have, and two years after superintelligence the value of the economy is 101x, with ca. 99x of the new surplus owned by aligned superintelligences[2] because they created most of that value, and ca. x owned by rich humans who sold the superintelligences valuable resources and infrastructure to get the new industrial base started faster[3]. The superintelligences will then probably distribute their gains among humans according to some system that either treats conscious minds pretty equally, or follows the idiosyncratic preferences of the faction that aligned them, not according to how large a fraction of the total economy those humans owned two years ago. So someone who started out with much more money than you two years ago doesn’t have much more money in expectation now than you do.
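To make the arithmetic above concrete, here is a minimal sketch in Python. The x / 99x / 101x split and the “treat conscious minds pretty equally” gift rule come from the scenario above; the specific dollar figures, the population count, and the simplification that everyone keeps their old holdings are illustrative assumptions of mine, not claims from the comment.

```python
# Rough illustration of the scenario above (all specific numbers are assumed):
# everyone keeps what they already owned, and the superintelligences' ~99x of
# newly created wealth is gifted out equally across all humans.

WORLD_WEALTH = 500e12          # assumed pre-AGI world wealth ("x"), in dollars
POPULATION = 8e9               # assumed number of humans
GIFT_POOL = 99 * WORLD_WEALTH  # the ~99x of new surplus that gets gifted away

gift_per_person = GIFT_POOL / POPULATION

def post_singularity_wealth(starting_wealth: float) -> float:
    """Old holdings retained, plus an equal share of the gifted surplus."""
    return starting_wealth + gift_per_person

you = post_singularity_wealth(10_000)       # assumed modest savings
rich = post_singularity_wealth(10_000_000)  # someone starting with 1000x more

print(f"gift per person: ${gift_per_person:,.0f}")  # ~ $6.2 million
print(f"you:  ${you:,.0f}")                         # ~ $6.2 million
print(f"rich: ${rich:,.0f}")                        # ~ $16 million
```

Under those made-up numbers, a 1000x head start in wealth shrinks to roughly a 2–3x difference, because the gift, not the pre-existing claim, dominates almost everyone’s final wealth.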
You can’t just demand a super high share percentage from the superintelligence in return for that startup capital. It’s got all the resource owners in the world as potential bargaining partners to compete with you. And really, the only reason it wouldn’t be steering the future into a deal where you get almost nothing, or just stealing all your stuff, is that it wants to be nice to you. Decision-theoretically, this is a handout with extra steps, not a negotiation between equals.
A question in my head is what range of fixed points is possible in terms of different numeric (“monetary”) economic mechanisms and contracts. Seems to me those are a kind of AI component that has been in use since before computers.
[1] For their conserved quantum numbers, really.
[2] Or owned by whomever the superintelligences take orders from.