“We shouldn’t colonize Mars until we have a world government”
But it would take a world government to enact and enforce “don’t colonize Mars” worldwide.
On the other hand, if an AI Gardener were hard, but not impossible, to build, and we only managed to make one after we had a thriving interstellar empire, it could still stop the descent into Malthusianism.
However, if we escape Earth before that happens, speed-of-light limitations will forever fragment us into competing factions that are impossible to garden.
If we escape Earth before ASI, the ASI will still be able to garden the fragments.
Sort of related, I’m not persuaded by the conclusion to his parable. Won’t superintelligent AIs be subject to the same natural selective pressures as any other entity? What happens when our benevolent gardener encounters the expanding sphere of computronium from five galaxies over?
Firstly, if there is a singleton AI, it can use lots of error correction on itself. There is exactly one version of it, and it is far more powerful than anything else around. What’s more, the AI is well aware of these sorts of selective pressures, and will move to squash any tiny trace of Moloch-like dynamics that it spots.
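To make the error-correction point concrete, here is a minimal sketch of the simplest such scheme, majority voting over redundant copies of stored goal content; the goal string and the corrupted replica are purely illustrative assumptions, not anything from the parable.

```python
from collections import Counter

# Minimal sketch: triple modular redundancy. A singleton keeping several
# copies of its goal content can outvote any single corrupted copy.

def majority_vote(replicas):
    """Return the value that the most replicas agree on."""
    value, _count = Counter(replicas).most_common(1)[0]
    return value

# Three redundant copies of a stored goal; one has been corrupted.
replicas = ["maximise_flourishing", "maximise_flourishing", "maximise_flourishinh"]
assert majority_vote(replicas) == "maximise_flourishing"
```

Assuming corruption events are independent, adding more replicas and re-synchronizing periodically drives the chance of uncorrected value drift arbitrarily low, which is the intuition behind a singleton being stable against Moloch.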
If humans make multiple AIs, then there is potential for conflict. However, the AIs are motivated to avoid conflict. They would prefer to merge their resources into a single AI with a combined utility function, though each would prefer to pull a fast one on the other even more. I suspect that a fraction of a percent of available resources would be spent on double-checking and monitoring systems, with the rest going toward an average of the utility functions.
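Here is a hedged toy sketch of that merge; the two utility functions and the weight (standing in for relative bargaining power) are invented for illustration, not anything from the original argument.

```python
# Toy merge of two agents into one agent with a combined utility
# function: a weighted average of the originals.

def u_a(x: float) -> float:
    return x                  # AI A: more of x is better

def u_b(x: float) -> float:
    return -abs(x - 5.0)      # AI B: wants x close to 5

def u_merged(x: float, w: float = 0.5) -> float:
    """Weighted average of the two utilities; w encodes bargaining power."""
    return w * u_a(x) + (1.0 - w) * u_b(x)

# The merged agent optimizes u_merged instead of fighting,
# e.g. u_merged(4.0) = 0.5*4 + 0.5*(-1) = 1.5.
print(u_merged(4.0))
```

On this picture, both sides accept the merge when their expected share of the merged optimum beats their expectation under costly conflict, which is why only a sliver of resources needs to go into mutual monitoring.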
If alien AIs meet humanity’s AIs, then either we get the same value merging, or it turns out to be a lot harder to attack a star system than to defend one, in which case each side keeps whichever stars it reaches first.