Broadly, I think I’m fairly optimistic about “increasing the power, wisdom, and maybe morality of good actors, particularly during times pivotal to humanity’s history.”
(Baseline: I’m bringing myself. I’m also bringing 100-300 pages of the best philosophy available in the 21st century, focused on grounding people in the best cross-cultural arguments for values/paradigms/worldviews I consider the most important).
Scenario 0: Mohist revolution in China
When: Warring States Period (~400BC)
Who: The Mohists, an early school of proto-consequentialists in China, focused on engineering, logic, and large population sizes.
How to achieve power: Before traveling back in time, learn old Chinese languages and a lot of history and ancient Chinese philosophy. Bring with me technological designs from the future, particularly things expected to provide decisive strategic advantages to even small states (eg, gunpowder, Ming-era giant repeating crossbows, etc. Might need some organizational theory/logistics advances to help maintain the empire later, but possibly the Mohists are smart enough to figure this out on their own. Maybe some agricultural advances too). Find the local Mohists, teach them the relevant technologies and worldviews. Help them identify a state willing to listen to Mohists to prevent getting crushed, and slowly change the government from within while winning more and more wars.
Desired outcome: Broadly consequentialist one-world government, expanding outwards from Mohist China. Aware of all the classical arguments for utilitarianism, longtermism, existential risks, long reflection, etc.
Other possible pivotal points:
Give power to leaders of whichever world religion we think is most conducive to longterm prosperity (maybe Buddhism? High impartiality, scientific-ish, vegetarian, less of a caste system than close contender Hinduism)
Eg, a) give cool toys to Ashoka and b) convince Ashoka of the “right” flavors of Buddhism
Increase power to old-school English utilitarians.
One possible way to do this is by stopping the American revolution. If we believe Bentham and Gwern, the American revolution was a big mistake.
Talking to Ben Franklin and other reasonable people at the time might do this
Might be useful in general to talk to people like Bentham and other intellectual predecessors to make them seem even more farsighted than they actually were
It’s possible you can increase their power through useful empirical/engineering demonstrations that help people see them as knowledgeable.
Achieve personal power
Standard thing where you go back in time by <50 years and invest in early Microsoft, Google, Domino’s Pizza, Bitcoin, etc
Useful if we believe now is at or near the hinge of history
Increase power and wisdom to early transhumanists, etc.
“Hello SL4. My name is John Titor. I am from the future, and here’s what I know...”
Useful in most of the same worlds where #3 is useful.
Long-haul AI Safety research
Bring up current alignment/safety concerns to early pioneers like Turing, and make it clear you expect AGI to be a long time away (so AGI fears aren’t dismissed after the next AI winter).
May need to get some renown first by casually proving/stealing a few important theorems from the present.
In general I suspect I might not be creative enough. I wouldn’t be surprised if there are many other pivotal points around, eg, the birth of Communism, Christianity, the Scientific Revolution, etc.
Fuck yeah! I’d love to see a short story written with this premise. The Mohists sound really cool, and probably would have been receptive to EA ideas, and it’s awesome to imagine what the world would have been like if they had survived and thrived. Make sure you bring back lots of stories about why and how communism can go wrong, since one failure mode I anticipate for this plan is that the government becomes totalitarian, starts saying the ends justify the means, etc. Maybe bring an econ textbook.
I’d love to see this. I’ve considered doing it myself but decided that I’m not good enough of a fiction writer (yet).
I’m also generally excited about many different stories involving Mohism and alternative history. I’d also like to see somebody exploring the following premises (for different stories):
1) A young Mohist disciple thinks about things for a long time, discovers longtermism, and realizes (after some calculations with simplified assumptions) that the most important Mohist thing to do is to guarantee a good future hundreds or thousands of years from now. He slowly convinces the others. The Mohists try to execute on thousand-year plans (like Asimov’s Foundation, minus the availability of computers and advanced math).
2) An emperor converts to Mohism.
3) The Mohists go underground after the establishment of the Qin dynasty and its alleged extreme suppression of dissenting thought. They develop into a secret society (akin to the Freemasons) dedicated to safeguarding the longterm trajectory of the empire while secretly spreading consequentialist ideas.
4) Near-schism within now-Mohist China due to the introduction of a compelling religion. Dissent over whether to believe in the supernatural, the burden of proof, concerns with infinite ethics, etc.
Oh, wow, Mohists do sound really awesome. From Wikipedia:
The Mohists formed a highly structured political organization that tried to realize the ideas they preached, the writings of Mozi. Like Confucians, they hired out their services not only for gain, but also in order to realize their own ethical ideals. This political structure consisted of a network of local units in all the major kingdoms of China at the time, made up of elements from both the scholarly and working classes. Each unit was led by a juzi (literally, “chisel”—an image from craft making). Within the unit, a frugal and ascetic lifestyle was enforced. Each juzi would appoint his own successor. Mohists developed the sciences of fortification and statecraft, and wrote treatises on government, ranging in topic from efficient agricultural production to the laws of inheritance. They were often hired by the many warring kingdoms as advisers to the state. In this way, they were similar to the other wandering philosophers and knights-errant of the period.
Sure, it’s “similar to the other wandering philosophers and knights-errant of the period”, but it’s such a good position for a group of proto-rationalists to be in.
And their philosophy has so much in common with EA! They’re utilitarian consequentialists! (Not hedonists, but some kind of utilitarian.)
Personally, I feel a lot of spiritual kinship towards the Mohists (imo much cooler to my modern/Westernized tastes than the Legalists, Daoists, Confucians, and other philosophies popular at that time).
(the story below is somewhat stylized. Don’t take it too literally).
The Mohists’ main shtick is that they’d travel the land teaching their ways during the Warring States period, particularly to weaker states at risk of being crushed by larger/more powerful ones. Their reputation was great enough that kings would call off invasions based only on the knowledge that Mohist disciples were defending the targeted cities.
One (somewhat anachronistic) analogy I like is thinking of the Mohists as nerdy Jedi. They are organized in semi-monastic orders. They live ascetic lifestyles, denying themselves worldly pleasures for the greater good. They are exquisitely trained in the relevant crafts (diplomacy and lightsaber combat for the Jedi; logic, philosophy, and siege engineering for the Mohists).
Even their most critical flaws are similar to those of the Jedi. In particular, their rejection of partiality and emotion feels reminiscent of what led to the fall of the Jedi (though I have no direct evidence it was actually bad for Mohist goals). More critically, their short-term moral goals did not align with a long-term stable strategy. In hindsight, we know that preserving “balance” between the various kingdoms was not a stable strategy, since “empire” was an attractor state.
In the Mohists’ case, they fought on the side of the losing states. Unfortunately, one state eventually won, and the ruling empire was not a fan of philosophies that espoused defending the weak.
Talking to Ben Franklin and other reasonable people at the time might do this
FWIW, after reading his biography, I get the impression that Franklin was very much under pressure from Bostonians who were really mad at the British, and could not have been less pro-revolution without being hated and discredited. I think what you actually want is to somehow prevent the Boston ‘massacre’ or similar.
Darn. Hmm, I guess another possibility is to see if ~300 years of advances in propaganda and social technology would mean someone from our timeline is much more persuasive than 1700s people, and, after some pre-time-travel reading and marketing/rhetoric classes, try to write polemical newsletters directly (I’m unfortunately handicapped by being the wrong ethnicity, so I need someone else to be my mouthpiece if I do this).
Preventing specific pivotal moments (like assassinations or Boston ‘massacre’) seems to rely on a very narrow theory of change, though maybe it’s enough?
How to achieve power: Before traveling back in time, learn old Chinese and a lot of history and philosophy. Bring with me technological designs from the future, particularly things expected to provide decisive strategic advantages to even small states (eg, gunpowder, Ming-era giant repeating crossbows, etc. Maybe some agricultural advances too). Find the local Mohists, teach them the relevant technologies and worldviews. Help them identify a state willing to listen to Mohists to prevent getting crushed, and slowly change the government from within while winning more and more wars.
Responding only to whether this part would work:
I like the idea of bringing back an entire sequence of war tech, so that you can always keep your side ahead of the curve.
I’m no historian of war, so the following might not be good enough, but something like...
Bring back horseback fighting techniques, which won out over chariot-based cavalry at some point. Teach this to the Mohists.
When others imitate, start teaching improved blacksmithing to the Mohists, for better swords, arrows, and armor.
When others imitate, bring out the gunpowder.
Etc.
Of course, this kind of sequence might require multiple generations, since any one of these technologies has the potential to continue providing momentum for a long time. But it seems like it could be incredibly effective.
Perhaps, if the Mohists are sane enough, you could teach them everything at once, but with the plan to carefully stage the use of various technologies.
Broadly, I think I’m fairly optimistic about “increasing the power, wisdom, and maybe morality of good actors, particularly during times pivotal to humanity’s history.”
After this first sentence, I expected that I would disagree with you along the lines of “you’re going to accelerate progress, and hasten the end”, but now I’m not so sure. It does seem like you’re putting more of an emphasis on wisdom than power, broadly taking the approach of handing out power to the wise in order to increase their influence.
But suppose you roll a 20 on your first idea and have a Critical Success establishing a worldwide Mohist empire.
Couldn’t that have the effect of dramatically accelerating human technological progress, without sufficiently increasing the quality of government or the state of AI safety?
You aren’t bringing democracy or other significantly improved governmental forms to the world. In the end it’s just another empire. It might last a few thousand years if you’re really lucky.
If we assume technological progress is about the same or only accelerated a little, then this means consequentialist ideals (Mohist thinking plus whatever you bring back) get instilled across the whole world, completely changing the face of human moral and religious development. This seems pretty good?
But part of how you’re creating a worldwide empire is by giving the Mohists a technological lead. I’m going to guess that you bring them up to the industrial revolution or so.
In that case I think what you’ve done is essentially risk 2 thousand years of time for humans to live life on Earth, balancing this against the gamble that a Mohist empire offers a somewhat more sane and stable environment in which to navigate technological risks.
This seems like a bad bargain to me.
Couldn’t that have the effect of dramatically accelerating human technological progress, without sufficiently increasing the quality of government or the state of AI safety?
You aren’t bringing democracy or other significantly improved governmental forms to the world. In the end it’s just another empire. It might last a few thousand years if you’re really lucky.
Hmm I don’t share this intuition. I think a possible crux is answering the following question:
Relative to possible historical trajectories, is our current trajectory unusually likely or unlikely to navigate existential risk well?
I claim that unless you have good outside view or inside view reasons to believe otherwise, you should basically assume our current trajectory is ~50th percentile of possible worlds. (One possible reason to think we’re better than average is anthropic survivorship bias, but I don’t find it plausible since I’m not aware of any extinction-level near misses).
With the 50th percentile baseline in mind, I think that a culture that is broadly
consequentialist
longtermist
unified under a one-world government (so lower potential for race dynamics)
permissive of privacy violations for the greater good
prone to long reflection and careful tradeoffs
has ancient texts explicitly warning of the dangers of apocalypse, along with a strong ingrained belief that the end of the world is, in fact, bad
has specific scenarios (in those ancient texts) warning of anticipated anthropogenic risks (dangers of intelligent golems, widespread disease, etc)
seems to just have a significantly better shot at avoiding accidental existential catastrophe than our current timeline. For example, you can imagine them spending percentage points of their economy on mitigating existential risks, the best scholars of their generation taking differential technological progress seriously, bureaucracies willing to delay dangerous technologies, etc.
Does this seem right to you? If not, at approximately what percentile would you place our current trajectory?
___
In that case I think what you’ve done is essentially risk 2 thousand years of time for humans to live life on Earth, balancing this against the gamble that a Mohist empire offers a somewhat more sane and stable environment in which to navigate technological risks.
This seems like a bad bargain to me.
Moral uncertainty aside, sacrificing 2000 years of near subsistence-level existence for billions of humans seems like a fair price to pay for even a percentage point higher chance of achieving utopia for many orders of magnitude more sentient beings over billions of years (or avoiding S-risks, etc). And right now I think that (conditional upon success large enough to change the technological curve) this plan will increase the odds of an existential win by multiple percentage points.
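(For concreteness, here’s a minimal back-of-envelope sketch of that tradeoff. The population figures, future duration, and probability shift are all placeholder assumptions I’m making up for illustration, not claims:)

```python
# Fermi sketch of the tradeoff; every number is a placeholder assumption, not a claim.
delayed_years = 2_000                  # "2000 years" of roughly status-quo, subsistence-level life
population_during_delay = 3e8          # rough order-of-magnitude guess for world population over that era
cost_life_years = delayed_years * population_during_delay        # ~6e11 life-years affected

p_gain = 0.01                          # "even a percentage point higher chance"
future_years = 1e9                     # "billions of years"
future_population = 1e10               # conservatively Earth-scale; could be many orders of magnitude larger
benefit_life_years = p_gain * future_years * future_population   # ~1e17 expected life-years

print(benefit_life_years / cost_life_years)  # ~2e5: the expected upside dwarfs the cost under these assumptions
```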
Moral uncertainty aside, sacrificing 2000 years of near subsistence-level existence for billions of humans seems like a fair price to pay for even a percentage point higher chance of achieving utopia for many orders of magnitude more sentient beings over billions of years (or avoiding S-risks, etc). And right now I think that (conditional upon success large enough to change the technological curve) this plan will increase the odds of an existential win by multiple percentage points.
Fair enough, you’ve convinced me (moral uncertainty aside).
I was anchoring too much to minimizing downside risk.
If we believe Bentham and Gwern, the American revolution was a big mistake.
Could you turn that into a link? I was not previously familiar with this, and it was not immediately obvious which Gwern essay to look up.
Added some links! I love how Gwern has “American Revolution” under his “My Mistakes” list.