Broadly, I think I’m fairly optimistic about “increasing the power, wisdom, and maybe morality of good actors, particularly during times pivotal to humanity’s history.”
After this first sentence, I expected that I would disagree with you along the lines of “you’re going to accelerate progress, and hasten the end”, but now I’m not so sure. It does seem like you’re putting more of an emphasis on wisdom than power, broadly taking the approach of handing out power to the wise in order to increase their influence.
But suppose you roll a 20 on your first idea and have a Critical Success establishing a worldwide Mohist empire.
Couldn’t that have the effect of dramatically accelerating human technological progress, without sufficiently increasing the quality of government or the state of AI safety?
You aren’t bringing democracy or other significantly improved governmental forms to the world. In the end it’s just another empire. It might last a few thousand years if you’re really lucky.
If we assume technological progress is about the same or only accelerated a little, then this means consequentialist ideals (Mohist thinking plus whatever you bring back) get instilled across the whole world, completely changing the face of human moral and religious development. This seems pretty good?
But part of how you’re creating a worldwide empire is by giving the Mohists a technological lead. I’m going to guess that you bring them up to the industrial revolution or so.
In that case I think what you’ve done is essentially put roughly 2,000 years of human life on Earth at risk, balancing this against the gamble that a Mohist empire offers a somewhat saner and more stable environment in which to navigate technological risks.
Couldn’t that have the effect of dramatically accelerating human technological progress, without sufficiently increasing the quality of government or the state of AI safety?
You aren’t bringing democracy or other significantly improved governmental forms to the world. In the end it’s just another empire. It might last a few thousand years if you’re really lucky.
Hmm, I don’t share this intuition. I think a possible crux is how we answer the following question:
Relative to possible historical trajectories, is our current trajectory unusually likely or unlikely to navigate existential risk well?
I claim that unless you have good outside-view or inside-view reasons to believe otherwise, you should basically assume our current trajectory is at roughly the 50th percentile of possible worlds. (One possible reason to think we’re better than average is anthropic survivorship bias, but I don’t find that plausible, since I’m not aware of any extinction-level near misses.)
With the 50th-percentile baseline in mind, I think that a culture that is:
- broadly consequentialist
- longtermist
- unified under a one-world government (so lower potential for race dynamics)
- permissive of privacy violations for the greater good
- prone to long reflection and careful tradeoffs
- steeped in ancient texts that explicitly warn of the dangers of apocalypse and instill a strong, ingrained belief that the end of the world is, in fact, bad
- warned (by those same texts) about specific anticipated anthropogenic risks (intelligent golems, widespread disease, etc.)
seems to just have a significantly better shot at avoiding accidental existential catastrophe than our current timeline. For example, you can imagine them spending percentage points of their economy on mitigating existential risks, the best scholars of their generation taking differential technological progress seriously, bureaucracies willing to delay dangerous technologies, etc.
Does this seem right to you? If not, at approximately what percentile would you place our current trajectory?
___
In that case I think what you’ve done is essentially put roughly 2,000 years of human life on Earth at risk, balancing this against the gamble that a Mohist empire offers a somewhat saner and more stable environment in which to navigate technological risks.
This seems like a bad bargain to me.
Moral uncertainty aside, sacrificing 2,000 years of near-subsistence-level existence for billions of humans seems like a fair price to pay for even a percentage point higher chance of achieving utopia for many orders of magnitude more sentient beings over billions of years (or of avoiding S-risks, etc.). And right now I think that (conditional on success large enough to change the technological curve) this plan will increase the odds of an existential win by multiple percentage points.
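To make the rough arithmetic behind that claim explicit, here is a back-of-envelope sketch. The specific figures (an average population of ~3×10^8, ~10^15 future sentient beings, a 10^9-year future, and a one-percentage-point shift in the odds) are purely illustrative placeholders, not numbers from this discussion:

$$
\underbrace{2{,}000 \text{ yr} \times 3\times10^{8} \text{ people}}_{\text{cost } \approx\, 6\times10^{11} \text{ life-years}}
\quad\ll\quad
\underbrace{0.01 \times 10^{15} \text{ beings} \times 10^{9} \text{ yr}}_{\text{expected gain } \approx\, 10^{22} \text{ life-years}}
$$

Even if these placeholders are off by several orders of magnitude, the expected gain still swamps the cost, which is what makes the trade look like a fair price.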
Moral uncertainty aside, sacrificing 2,000 years of near-subsistence-level existence for billions of humans seems like a fair price to pay for even a percentage point higher chance of achieving utopia for many orders of magnitude more sentient beings over billions of years (or of avoiding S-risks, etc.). And right now I think that (conditional on success large enough to change the technological curve) this plan will increase the odds of an existential win by multiple percentage points.
Fair enough, you’ve convinced me (moral uncertainty aside).
I was anchoring too much to minimizing downside risk.