My hypothesis would be that this is due to these issues falling within the Correct Contrarian Cluster.
I don’t understand your comment. Do you mean that climate change and biodiversity are not discussed because everyone on LW thinks the same about them? Because there is nothing to say? Because there is nothing that can be done? Because it is settled science? Please explain how issues that fall within the Correct Contrarian Cluster end up not being discussed at all, and why you think these issues fall within that cluster.
Well, I was just speculating; I don’t actually have any idea what the LW community in general thinks of the issue. My speculation was that these topics aren’t discussed much because the contrarian/skeptical position on them is clustered with the set of contrarian positions commonly held by LWers, and that position is basically that they don’t deserve much attention, especially relative to the kinds of existential risks LW is concerned with.
I’m not sure how much more detail I can go into on my thinking without violating the “no current politics” rule.
Something I should have said in my previous reply: I agree with the “no current politics” rule. My problem is with deciding what counts as politics; to some people everything does, and to others almost nothing does. When a subject is a purely scientific one and the disagreement is about whether there is evidence and how to interpret it, then this is an area for rationality. We should be looking at the evidence and evaluating it. That does not involve what I would call politics.
When I first got here I thought “existential risk” referred to a generalization of the ideas related to catastrophic climate change. That is, if we should plan for the low-probability but deadly event that climate change will be very severe, then we should also plan for other low-probability (or far-future) catastrophes: asteroid impacts, biological and nuclear weapons, and unfriendly AI, among others. I was surprised that, of the existential risks discussed, catastrophic climate change never seems to come up at all.
It’s possible that this is an innocent result of specialization: people here spend most of their time thinking about AI, and not about other risks they aren’t trained to assess.
If there were an organization committed to clarifying how we think about planning for low-probability risks, that organization really ought to consider climate change among other risks. It would be an interesting thing to study: how far in the future is it reasonable for present-day institutions to plan? How can scientists with predictions of possible catastrophe effectively communicate to governments, businesses, etc. that they need to plan, without starting a panic? The art of planning for existential risks in general is something that could really benefit from more study.
And it ought to include well-studied and well-publicized risks (like climate change) in addition to less-studied and less-publicized risks (like risks from technology not yet developed). People have been planning for floods for a long time; surely people concerned about other risks can learn something from people who plan for the risk of floods.
But I don’t think SIAI or LessWrong is equipped for that mission.
I think you’re looking for the Future of Humanity Institute and their work on Global Catastrophic Risks.
It would be nice if people could use some rationality in deciding which ideas to be contrarian on. Maybe I live in an ivory tower, but I don’t see any connection between biological/environmental dangers and politics.