A Case for Taking Over the World—Or Not.
As my education progresses, I'm seeing more and more parallels, across accounts both fictional and (more often) nonfictional, suggesting that the world is broken in a way that makes suffering an emergent property, if not an intrinsic one. Not just HPMoR (and Significant Digits after it), but FDR's State of the Union speech, The Grapes of Wrath, and Jared Diamond's Guns, Germs, and Steel, among other works. People have been aware for most of human history that, under humanity's rules, people must suffer.
Knowing that this whole site is more or less dedicated to defeating unconscious ways of thinking, and holds the mental enlightenment of the human race paramount, I would like to pose this question:
What would we have to do to save the world?
Before breaking down this question and the intent behind it, I'd like to clarify some things:
I am solely concerned with the practicalities, not with what people would or should do. Anybody who's seen enough of the world and how it works has an idea of its immensity, but humans made the current state of affairs what it is today (barring undiscovered factors not covered by my priors), with the majority of them being largely redundant to the process in one way or another. People have demonstrated repeatedly that a group can have an impact disproportionate to its members' individual means.
What would have to occur, in the current political, economic, social, etc. climate, and onwards?
Would it have to be a conspiracy? Or something else?
You can ask any other bounding questions that you would like, such as "What is the minimum amount of manpower and resources required to accomplish X through the most expedient and readily available means?" At the end of the day (so to speak), we should be able to arrive shortly at some sort of operational plan. There's no sense in acquiring this knowledge and not using it to further our cause.
As a corollary to my other comment…
HPMOR is a work of fiction. Significant Digits is a work of fiction. The Grapes of Wrath is a work of fiction. The logical fallacy of generalization from fictional evidence has not gotten any less fallacious in the last ten years.
FDR’s State of the Union speech (I assume you’re referring to his 1941 SotU address, a.k.a. the famous “Four Freedoms” speech, though the point stands regardless) is a piece of political propaganda. That designation, and that fact, needn’t imply anything bad about the speech’s intent or its effect, but we should understand that such oratory isn’t optimized for delivering objective truth.
Jared Diamond’s book is the only work of actual non-fiction—indeed, of scholarship—on your list. Its thesis (in broad strokes and in details both) is also not exactly free from academic controversy. But that’s beside the point; one book of popular science, even if it’s a work of pure genius, does not suffice to constitute a coherent and complete picture of the world.
Be careful that you do not let narrative—either in the form of fiction or of propaganda—shape your map of the world. Reality is not a story. Stick to the facts.
P.S.: I said before that “precision is everything”—and it is somewhat ironic that “the world is broken” is not nearly precise enough an evaluation from which to start fixing a broken world.
I should probably explicate my arguments here.
By referencing fictional works, I am referencing schools of thought that illustrate the viewpoint I advocate. That the works became popular, and that they were written at all, is evidence favoring the idea that a significant group of people were affected by the literature.
Additionally, I did not intend this post to be an exhaustive proof of my thesis, only an introduction to it. Judging by the comments I'm receiving, though, I should probably come back when I have a version of this post that is.
Lastly, my imprecision was meant to solicit brainstorming on the explication itself, rather than to call attention to how vague it was.
TL;DR: Message received; I'll come back when my formulations are more rigorous.
Why do you think the world needs saving, and from what?
The current state of things, where people suffer when they don't have to, due to circumstances outside of their control. Just because the world is the product of seven billion (largely) uncoordinated people and untold dead doesn't mean we have the excuse that seven billion people (or probably fewer) can't fix "the way things are." While I concede that we aren't permanent fixtures on the planet, I am sufficiently disturbed by the idea of our version of humanity being one of many possible versions that destroys itself out of shortsightedness that I am willing to embark on any plan with a reasonable chance of working (and a suite of backup plans), using all of the resources that can be mustered by the means available to us.
Reducing suffering is a good goal, but what you’re talking about, in that case, is not saving the world, but improving it. It’s not just a matter of semantics; it’s a critically different perspective.
On the other hand, you also mention the possibility of humanity destroying ourselves. This is certainly something that we can rightly speak of “saving” the world from. But notice that this is a different concern than the “reducing suffering” one!
When you ask “What do we have to do to [accomplish goal X]?”, you have to be quite clear on what, precisely, goal X is.
The two goals that you mention can (and likely do!) have very different optimal approaches/strategies. It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another. If so, you may have to prioritize—at the very least.
“Save the world” sounds punchy, memorable, inspiring. But it’s not a great frame for thinking practically about the problem, which is quite difficult enough to demand the greatest rigor. With problems of this magnitude, errors compound and blossom into catastrophes. Precision is everything.
I probably should have made it clearer that I was inviting debate on the specific angle you just brought up. I was trying to limit my bias by not being the first person to answer my own question. You're right that the framing of the problem is problematic.
>It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another.
They’re almost certainly extremely at odds with each other. Saving humanity from destroying itself points in the other direction from reducing suffering, not by 180 degrees, but at a very sharp angle. This is not just because of resource constraints, but even more so because humanity is a species of torturers and it will try to spread life to places where it doesn’t naturally occur. And that life obviously will contain large amounts of suffering. People don’t like hearing that, especially in the x-risk reduction demographic, but it’s pretty clear the goals are at odds.
Since I’m a non-altruist, there’s not really any reason to care about most of that future suffering (assuming I’ll be dead by then), but there’s not really any reason to care about saving humanity from extinction, either.
There are some reasons why the angle is not a full 180 degrees: there might be aliens who would also cause suffering and with whom humanity might compete for resources; humanity might wipe itself out in ways that also cause suffering, such as AGI; or there might be practical correlations between political philosophies that cause both high suffering and a high probability of extinction, e.g. torturers are less likely to care about humanity's survival. But none of these make the goals point in the same direction.
Ah, I can very much relate to that sentiment! The Effective Altruism movement was spawned largely in response to concerns like that. Have you looked into their agenda, methods, and achievements?
Previous LW discussions on taking over the world (last updated in 2013).
Comments of mine on “utopian hope versus reality” (dating from 2012).
Since that era, a few things have happened.
First change: LW is not quite the point of focus that it was. There was a rationalist diaspora into social media, and “Slate Star Codex” (and its associated subreddits?) became a more prominent locus of rationalist discussion. The most important “LW-ish” forums that I know about now, might be those which focus on quasi-technical discussion of AI issues like “alignment”. I call them the most important because of...
Second change: The era of deep learning, and of commercialized AI in the guise of “machine learning”, arrived. The fact that these algorithms are not limited to the resources of a single computer, but can in principle tap the resources of an entire data center or even the entire cloud of a major tech corporation, means that we have also arrived at the final stage of the race towards superintelligence.
In the past, taking over the world meant building or taking over the strongest superpower. Now it simply means being the first to create strongly superhuman intelligence; and saving the world means identifying a value system that will make an autonomous AI "friendly", and working to ensure that the winner of the mind race is guided by friendly rather than unfriendly values. Every other concern is temporary, and any good work done towards other causes will potentially be undone by unfriendly AI, if unfriendly values win the AI race.
(I do not say with 100% certainty that this is the nature of the world, but this scenario has sufficient internal logic that, if it does not apply to reality, there must be some other factor which somehow overrides it.)
I would certainly appreciate a workable solution brought about by those means, but I would note that this approach is not necessarily the first or only one available. Ideally, I would like to assemble a balanced portfolio of conspiracies.
If I take a PIDOOMA number of 90% expectation of failure over any timescale for an individual scheme, and assume strict success or failure with schemes independent of one another, then seven schemes is the minimum required for a better-than-even chance that at least one succeeds (1 - 0.9^7 ≈ 0.52), fourteen for a 75% chance (1 - 0.9^14 ≈ 0.77), and twenty-nine to be rolling for ones, i.e. a 95% chance (1 - 0.9^29 ≈ 0.95). Even though this probably doesn't correspond to any real probabilities of success, 7-14-29 is the benchmark I'm going to use going in.
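For anyone who wants to check or vary those thresholds, here is a minimal Python sketch, assuming independent schemes that strictly succeed or fail, each with a fixed 0.9 failure probability (the helper name `min_schemes` is mine, purely for illustration):

```python
import math

def min_schemes(p_fail: float, target: float) -> int:
    """Smallest n such that the chance that at least one of n
    independent schemes succeeds, 1 - p_fail**n, reaches target."""
    return math.ceil(math.log(1 - target) / math.log(p_fail))

for target in (0.50, 0.75, 0.95):
    print(f"{target:.0%} -> {min_schemes(0.9, target)} schemes")

# Output:
# 50% -> 7 schemes
# 75% -> 14 schemes
# 95% -> 29 schemes
```

Swapping in a different per-scheme failure rate shows how sensitive the benchmark is to that PIDOOMA number: at 80% failure per scheme the same targets need only 4, 7, and 14 schemes.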
Those are important distinctions. Thanks for the clarification!