Also worth noting: if the onset of global catastrophes is random, then global catastrophes will tend to cluster together, so we might expect another global catastrophe before this one is over. (See the “clustering illusion.”)
philosophytorres
“A Harmful Idea”
Were the Great Tragedies of History “Mere Ripples”?
Part 2 can now be read here: https://www.lesswrong.com/posts/pbFGhMSWfccpW48wd/a-detailed-critique-of-one-section-of-steven-pinker-s
A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 2)
A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 1)
It’s amazing how many people on FB answered this question with “Annihilation, no question.” Really, I’m pretty shocked!
“The fact that there are more ‘real’ at any given time isn’t relevant to the fact of whether any of these mayfly sims are, themselves, real.” You’re right about this, because it’s a metaphysical issue. The question, though, is epistemological: what does one have reason to believe at any given moment? If you want to say that one should bet on being a sim, then you should also say that one is in room Y in Scenario 2, which seems implausible.
“Like, it seems perverse to make up an example where we turn on one sim at a time, a trillion trillion times in a row. … Who cares? No reason to think that’s our future.” The point is to imagine a possible future—and that’s all it needs to be—that instantiates none of the three disjuncts of the simulation argument. If one can show that, then the simulation argument is flawed. So far as I can tell, I’ve identified a possible future that is neither (i), (ii), nor (iii).
Is there a flaw in the simulation argument?
“My 5 dollars: maxipoc is mostly not about space colonisation, but prevention of total extinction.” But the goal of avoiding an x-catastrophe is to reach technological maturity, and reaching technological maturity would require space colonization (to satisfy the requirement that we have “total control” over nature). Right?
Yes, good points. As for “As result, we only move risks from one side equation to another, and even replace known risks with unknown risks,” another way to put the paper’s thesis is this: insofar as the threat of unilateralism becomes widespread, thus requiring a centralized surveillance apparatus, solving the control problem is that much more important! I.e., it’s an argument for why MIRI’s work matters.
Could the Maxipok rule have catastrophic consequences? (I argue yes.)
What do you mean? How is mitigating climate change related to blackmail?
I actually think most historical groups wanted to vanquish the enemy, but not destroy either themselves or the environment to the point at which it’s no longer livable. This is one of the interesting things that shifts to the foreground when thinking about agents in the context of existential risks. As for people fighting to the death, often this was done for the sake of group survival, where the group is the relevant unit here. (Thoughts?)
Totally agree that some x-risks are non-agential, such as (a) risks from nature, and (b) risks produced by coordination problems, resulting in e.g. climate change and biodiversity loss. As for superpowers, I would classify them as (7). Thoughts? Any further suggestions? :-)
(2) is quite different in that it isn’t motivated by supernatural eschatologies. Thus, the ideological and psychological profiles of ecoterrorists are quite different from those of apocalyptic terrorists, who are bound together by certain common worldview-related threads.
I think my language could have been more precise: it’s not merely genocidal but humanicidal or omnicidal violence that we’re talking about in the context of x-risks. Also, the Khmer Rouge weren’t suicidal, to my knowledge. Am I wrong about that?
As for your first comment, imagine that everyone “wakes up” in a room with only the information provided and no prior memories. After 5 minutes, they’re put back to sleep—but before this occurs, they’re asked which room they’re in. (Does that make sense?)
I guarantee that the religious ideologues who have so far downvoted this haven’t read it.