Couldn’t that have the effect of dramatically accelerating human technological progress, without sufficiently increasing the quality of government or the state of AI safety?
You aren’t bringing democracy or other significantly improved governmental forms to the world. In the end it’s just another empire. It might last a few thousand years if you’re really lucky.
Hmm I don’t share this intuition. I think a possible crux is answering the following question:
Relative to possible historical trajectories, is our current trajectory unusually likely or unlikely to navigate existential risk well?
I claim that unless you have good outside-view or inside-view reasons to believe otherwise, you should basically assume our current trajectory is ~50th percentile of possible worlds. (One possible reason to think we're better than average is anthropic survivorship bias, but I don't find it plausible since I'm not aware of any extinction-level near misses.)
With the 50th-percentile baseline in mind, I think that a culture that is broadly:
- consequentialist
- longtermist
- unified under a one-world government (so lower potential for race dynamics)
- permissive of privacy violations for the greater good
- prone to long reflection and careful tradeoffs
- in possession of ancient texts explicitly warning of the dangers of apocalypse, with a strong ingrained belief that the end of the world is, in fact, bad
- in possession of specific scenarios (from those ancient texts) anticipating particular anthropogenic risks (dangers of intelligent golems, widespread disease, etc.)
seems to just have a significantly better shot at avoiding accidental existential catastrophe than our current timeline. For example, you can imagine them spending percentage points of their economy on mitigating existential risks, the best scholars of their generation taking differential technological progress seriously, bureaucracies willing to delay dangerous technologies, etc.
Does this seem right to you? If not, approximately what percentile would you place our current trajectory at?
___
In that case, I think what you’ve done is essentially risk two thousand years of humans living their lives on Earth, balancing this against the gamble that a Mohist empire offers a somewhat saner and more stable environment in which to navigate technological risks.
This seems like a bad bargain to me.
Moral uncertainty aside, sacrificing two thousand years of near-subsistence existence for billions of humans seems like a fair price to pay for even a percentage-point higher chance of achieving utopia for many orders of magnitude more sentient beings over billions of years (or of avoiding S-risks, etc.). And right now I think that (conditional on a success large enough to change the technological curve) this plan will increase the odds of an existential win by multiple percentage points.
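To make the scale of that tradeoff concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (the average ancient-world population, the size and duration of the hypothetical future, the one-percentage-point shift) is an illustrative assumption made up for the sketch, not a figure from this discussion.

```python
# Back-of-the-envelope expected-value comparison for the tradeoff above.
# All numbers are made-up illustrative assumptions, not claims from the dialogue.

# Cost: ~2,000 years of near-subsistence existence for the people alive
# in the ancient world at any given time.
years_risked = 2_000
population_then = 3e8          # assumed average ancient-world population
life_years_at_stake = years_risked * population_then          # ~6e11 life-years

# Benefit: a 1-percentage-point higher chance of an "existential win",
# i.e. a long future containing vastly more sentient beings.
delta_p_win = 0.01
future_beings = 1e12           # assumed future population (deliberately modest)
future_years = 1e9             # assumed duration of that future
expected_life_years_gained = delta_p_win * future_beings * future_years  # ~1e19

print(f"life-years put at risk:      {life_years_at_stake:.1e}")
print(f"expected life-years gained:  {expected_life_years_gained:.1e}")
print(f"ratio (gain / risk):         {expected_life_years_gained / life_years_at_stake:.1e}")
```

Even with deliberately modest numbers for the future, the expected upside exceeds the life-years put at risk by many orders of magnitude, which is the intuition doing the work in the paragraph above.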
Fair enough, you’ve convinced me (moral uncertainty aside).
I was anchoring too much to minimizing downside risk.