Dath Ilan vs. Sid Meier’s Alpha Centauri: Pareto Improvements
Epistemic status: Rambly; probably unimportant; just putting out there an idea that’s stuck with me. Small dath-ilan-verse spoilers throughout, as well as spoilers for Sid Meier’s Alpha Centauri (1999), in case you’re meaning to get around to that.
The idea that’s most struck me reading Yudkowsky et al.’s dath ilani fiction is that dath ilan is puzzled by war. Theirs isn’t a moral puzzlement; they’re puzzled at how ostensibly intelligent actors could fail to notice that everyone could do strictly better by avoiding fighting altogether and instead signing enforceable treaties that divide up the resources that would have been spent or destroyed in the fighting.
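To make that claim concrete, here’s a toy sketch in Python. Every number in it is invented for illustration (the pool size, the win probability, the war costs), but the structure is the standard one: so long as fighting burns resources, any enforceable split of the pool near the expected outcome of war leaves both sides strictly better off than actually fighting.

```python
# Toy numbers (entirely made up): two rival factions contest a
# resource pool worth 100.
POOL = 100
P_A_WINS = 0.6         # faction A's chance of winning an all-out war
WAR_COST_EACH = 20     # resources each side burns by fighting

# Expected payoffs if they fight it out:
war_a = P_A_WINS * POOL - WAR_COST_EACH          # 0.6 * 100 - 20 = 40
war_b = (1 - P_A_WINS) * POOL - WAR_COST_EACH    # 0.4 * 100 - 20 = 20

# An enforceable treaty that splits the pool in proportion to the
# odds of winning, with nothing burned:
treaty_a = P_A_WINS * POOL          # 60
treaty_b = (1 - P_A_WINS) * POOL    # 40

# Both sides strictly prefer the treaty: a Pareto improvement.
assert treaty_a > war_a and treaty_b > war_b
print(f"fight:  A gets {war_a:.0f}, B gets {war_b:.0f}")
print(f"treaty: A gets {treaty_a:.0f}, B gets {treaty_b:.0f}")
```

Nothing hangs on the particular split rule; the war costs open up a whole range of treaties that both sides prefer to fighting, which is exactly what dath ilan finds it baffling that we fail to exploit.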
This … isn’t usually something that figures into our vision of the science-fiction future. Take Sid Meier’s Alpha Centauri (SMAC), a game whose universe absolutely fires my imagination. It’s a 4X title, meaning that it’s mostly about waging and winning ideological space war on the hardscrabble space frontier. As the human factions on Planet’s surface acquire ever more transformative technology ever faster, utterly transforming that world in just a couple hundred years as their Singularity dawns … they put all that technology to use blowing each other to shreds. And this is totally par for the course, for hard sci-fi and for visions of the human future generally. War is a central piece of the human condition; it always has been. We don’t really picture that changing as we get modestly superintelligent AI. Millenarian ideologies that preach the end of war … preach it because of how things will be once they have completely won the final war, not because of game theory that can reach across rival ideologies. The idea that intelligent actors who fundamentally disagree with one another’s moral outlooks will predictably stop fighting at some intelligence threshold not all that far above IQ 100, because fighting isn’t Pareto optimal … totally blindsides my stereotype of the science-fiction future. The future can be totally free of war, not because any single team has taken over the world, but because we got a little smarter about bargaining.
Let’s jump back to SMAC:
This is the game text that appears as the human factions on Planet approach their singularity. Because the first faction to kick off its singularity will have an outsized influence on the utility function inherited by the resulting superintelligence, each faction wages late-game war with horrifyingly powerful weapons to keep the others from getting there first. The opportunity to make everything much better … creates a destructive race to that opportunity, waged with antimatter bombs and more exotic horrors.
I bet that when dath ilan kicks off their singularity, they implement their CEV in such a way that no single group has an incentive to race to the end to be sure its values aren’t squelched if someone else gets there first. That whole final fight over the future can be avoided, since the overall value-pie is about to grow enormously: everyone can have more of what they want in the end, if we’re smart enough to think through our binding contracts now.
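The race dynamic yields to the same arithmetic as the war above. Another deliberately toy sketch, with every number below invented: even a perfectly symmetric race, where each faction wins half the time, is strictly worse for everyone than a binding agreement to weight all factions’ values into the CEV up front, because racing both risks catastrophe and zeroes out the loser.

```python
# Toy model (all numbers invented) of a winner-take-all singularity
# race versus a binding agreement to merge values up front.
FUTURE_VALUE = 1000.0   # value of the post-singularity pie, arbitrary units
P_WIN = 0.5             # symmetric factions: each wins the race half the time
P_CATASTROPHE = 0.3     # chance the late-game war wrecks the pie entirely

# Racing: the winner imposes its utility function, the loser gets
# nothing, and sometimes the antimatter bombs destroy the prize itself.
race_ev = (1 - P_CATASTROPHE) * P_WIN * FUTURE_VALUE   # 0.7 * 0.5 * 1000 = 350

# Treaty: weight both factions' values into the CEV in advance, so
# nobody gains anything by arriving first and nothing gets destroyed.
treaty_ev = P_WIN * FUTURE_VALUE                       # an even 500 each

assert treaty_ev > race_ev   # racing is Pareto-dominated
print(f"race:   {race_ev:.0f} expected value per faction")
print(f"treaty: {treaty_ev:.0f} per faction")
```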
Moral of the story: beware of pattern-matching the future you’re hoping for to either more of the past or to familiar fictional examples. Smarter, more agentic actors behave differently.