Edit: in response to thought-provoking commentary from hairyfigment, I updated the first set of human-made risks from marginal to conditional on no overshoot, and downgraded the risk of overshoot to likely. Thanks for your help.
Within the next 50 years...
grey goo: theoretically possible but unlikely
meteor impact: theoretically possible but unlikely
Yellowstone Caldera: theoretically possible but unlikely
gamma ray burster: theoretically possible but unlikely
solar flare: theoretically possible but unlikely
green goo | no overshoot: somewhat likely
global nuclear war | no overshoot: somewhat likely
global pandemic | no overshoot: somewhat likely
new dark ages | no overshoot: somewhat likely
near extinction due to climate change | no overshoot: theoretically possible but unlikely
widespread and severe suffering and death due to climate change | no overshoot: likely
overshoot: somewhat likely
green goo | overshoot: somewhat likely
global nuclear war | overshoot: likely
global pandemic | overshoot: likely
new dark ages | overshoot: very likely
near extinction due to climate change | overshoot: somewhat likely
widespread and severe suffering and death due to climate change | overshoot: inevitable
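For readers who want the bookkeeping behind estimates like these: if the verbal bins were replaced with numbers, the unconditional (marginal) risk of any disaster D would follow from the law of total probability. This is only the structure of the calculation, not an endorsement of any particular numbers:

$$P(D) = P(D \mid \text{overshoot})\,P(\text{overshoot}) + P(D \mid \text{no overshoot})\,\bigl(1 - P(\text{overshoot})\bigr)$$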
Could you use probability numbers instead of words like likely/unlikely/very likely? It’s difficult to know what you mean by them.
I notice you have the probability of various scenarios conditional on the overshoot but no probability for the overshoot itself.
It shouldn’t matter; I don’t assign high weight to amateur probabilities. I believe bokov’s argument is that this threat should be taken seriously purely on the grounds that we take far more theoretical dangers seriously. Do we only take the hypotheticals seriously? If so, this is a serious oversight.
That is precisely my main argument.
Hmm… I would have pegged your main argument as being more related to overpopulation than blind spots specifically. Although… I admit I skimmed a little. X_X
At least I managed to pick up that it was a critical part of the article!
Now that I think about it… I’m not actually worried about overpopulation/resource collapse, but I am worried about LessWrong being willfully ignorant without intending to be so. I guess I really dropped the ball here in terms of … Wait, something about the article made me skim and I didn’t catch it on the first pass. This is intriguing. It’s been a long time since I’ve had this many introspective realizations in one thought train. I have to wonder how many others skimmed as well, what our collective reason to do so was, and what is the best route to solve this problem.
...Or else you just misspoke and resource collapse is actually your main concern/argument.
But even in that case, I skimmed, and I can see skimming being a problem. Yay for orthogonal properties!
All this from the mere statement of accuracy. …Did trying to avoid inferential silence play any role in your making this comment?
I think the chance of a significant portion of LessWrong not having thought about the issue is low. Population growth is a well-understood issue compared to existential risks like grey goo.
bokov makes a series of arguments that most people have probably heard before and many consider to be refuted, and then suggests that because people don’t agree with him, they have a blind spot.
What makes you think most LessWrongers have thought about it to a degree to which the issue can be considered in the process of being solved? (For whatever needs to be done to “solve” it, whether that is “Do nothing different” or not.)
I haven’t used the word “solved” in the post you quote. That word misses the point. Nobody claims that the issue of climate change is solved.
The question is whether it’s useful to model issues like climate change in a way that centers on carrying capacity and ignores politics.
It looks like an “if you have a hammer, everything looks like a nail” issue. Yes, you can model the world’s problems that way, but that model isn’t very productive.
If you think about population levels, it makes sense to mentally separate different countries and continents.
Let’s say you start in the US. As an engineer you see a clear solution: we should increase the number of abortions that happen in the US to get the population near the carrying capacity. If you try to push that policy, you will see that you run into problems that are highly political.
The abortion debate at the moment is about the sacred value of life against the sacred value of women’s control over their own bodies. If you come into that debate and say that you want more abortions because it has utility in keeping the US population down, you are not helping.
You have to remember that the US is a country where a good portion of the population waits for the second coming of Christ and thinks that the Bible says they should procreate as much as possible.
Political issues like that make reducing population growth a very different issue from getting more telescopes to detect potentially dangerous asteroids or cooling down Yellowstone by building a giant lake on top of it.
It makes sense to use an engineering lens to talk about asteroids because there is no significant political group that considers watching asteroids with telescopes to be immoral. With Yellowstone you might get some people who think that you are harming endangered species that live in that area, but those are people with whom you can argue directly, and they aren’t as politically powerful as anti-abortion Christians.
Another way to approach population growth is to look at Africa. Deciding, as an American or a European, that there should be fewer Africans runs into issues of neocolonialism. That produces political problems.
It also turns out that increasing wealth seems to be a good way to reduce the number of children a woman has. That insight led Bill Gates to focus his philanthropic efforts in a way where he says things like:
The world today has 6.8 billion people. That’s headed up to about nine billion. Now, if we do a really great job on new vaccines, health care, reproductive health services, we could lower that by, perhaps, 10 or 15 percent.
You might find that GiveWell’s top-recommended charity is about malaria bed nets: health care for the third world. Again, that’s a point where we can make different arguments to encourage people to spend money on African bed nets. Saving a life for $2,000 seems to be a good argument to convince people.
GiveWell-style effective altruism is an alternative to approaching Africa with “What can we do to reduce the African population as effectively as possible?”
I think population is an area that is obvious enough that I would expect smart people on LessWrong and in the effective altruism community not to be ignorant about the topic.
If you want to get a good feel for the data about population growth, I would also recommend playing around a bit with Gapminder (press play to see how the children-per-woman ratio has changed over the last 60 years).
Why? It seems like your comment was intended for someone asking a different question than the one I’m asking. I’m not asking for population/resource-related arguments and reasoning you can come up with, but rather why you think a moderate portion of LessWrong and the effective altruism community have put sufficient thought into it that it no longer needs to be discussed in contexts like LessWrong. I had thought it was obvious that that was the point I was questioning, and so would be the focus of any question I asked in response to your response, but it seems it was not as obvious as I thought it was.
Basically: Why do you think population growth is an “obvious” issue?
Do we? How much in the way of resources is allocated to the risk of grey goo or, say, the Yellowstone supervolcano?
Talk is cheap.
Yes, it is, but talk and attention are the only resources LessWrong reliably provides at the moment.
Well then, bokov is talking about the overshoot so we’re good, right?
Depends on how motivated others will now be to bring up this issue.
But others are already allocating resources to the overshoot, in my opinion way more than it deserves.
In a useful way? Quite frankly, I don’t trust very many people at all to spend their resources in useful ways. And this includes people who frequent LessWrong.
I realized as I was writing this that the overshoot is kind of like AIDS or aging—it doesn’t kill you directly, just predisposes you toward things that will.
I’ll edit it so the union of the whole set of conditional-on-overshoot disasters plus “other” is the likelihood of overshoot itself.
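One formal way to read that constraint (this is an interpretation, not something stated explicitly in the thread): conditional on overshoot, at least one of the listed disasters or the catch-all “other” occurs, so

$$P\bigl(D_1 \cup \dots \cup D_n \cup \text{other} \mid \text{overshoot}\bigr) = 1, \qquad\text{equivalently}\qquad P\bigl((D_1 \cup \dots \cup D_n \cup \text{other}) \cap \text{overshoot}\bigr) = P(\text{overshoot}).$$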
OK then, you put forward an estimate: an overshoot is very likely. Now what makes you think so?
This looks incoherent. You call overshoot “very likely” and “near extinction due to climate change conditional on overshoot: somewhat likely”. Even if I interpret those as .7 and .2 respectively, we wind up with an unconditional probability of at least .14, which I hope is not what you mean by “theoretically possible but unlikely”. If that is what you meant then I do not understand how the world looks to you, or why you’re not spending this time fundraising for CSER / taking heroin.
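A worked version of that arithmetic, using the illustrative readings of the verbal bins supplied in the comment above (0.7 and 0.2 are hairyfigment’s interpretations, not probabilities stated by the author): since the event “near extinction and overshoot” is contained in the event “near extinction”,

$$P(\text{near extinction}) \ge P(\text{near extinction} \mid \text{overshoot}) \cdot P(\text{overshoot}) \approx 0.2 \times 0.7 = 0.14.$$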
Classy.
I have only 5 bins here with which to span everything in (0,1): theoretically possible but unlikely, somewhat likely, likely, very likely, and inevitable. The goal is a rough ranking; at this point, I don’t have enough information to meaningfully estimate actual probabilities. You have a good point, though: it would be more self-consistent to say conditional on no overshoot for the first set.
If flaming me is what it takes for you to think seriously about this, then maybe it’s worth it.
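A minimal sketch, in Python, of the kind of consistency check being argued about here. The numeric ranges assigned to the five verbal bins are purely illustrative assumptions made for the sake of the example, not values given anywhere in the thread:

```python
# Purely illustrative numeric ranges for the five verbal bins used in the post.
# These boundaries are assumptions for the sake of the sketch, not the author's.
BINS = {
    "theoretically possible but unlikely": (0.0, 0.1),
    "somewhat likely": (0.1, 0.4),
    "likely": (0.4, 0.7),
    "very likely": (0.7, 0.95),
    "inevitable": (0.95, 1.0),
}

def marginal_bounds(p_overshoot, bin_given_overshoot, bin_given_no_overshoot):
    """Bounds on the unconditional probability of a disaster, given a numeric
    probability of overshoot and verbal bins for the two conditional risks."""
    lo_o, hi_o = BINS[bin_given_overshoot]
    lo_n, hi_n = BINS[bin_given_no_overshoot]
    lower = p_overshoot * lo_o + (1 - p_overshoot) * lo_n
    upper = p_overshoot * hi_o + (1 - p_overshoot) * hi_n
    return lower, upper

# Example: near extinction due to climate change, taking overshoot itself to be
# "somewhat likely" and reading that as p = 0.25 (again, an illustrative assumption).
print(marginal_bounds(0.25, "somewhat likely",
                      "theoretically possible but unlikely"))
# -> roughly (0.025, 0.175): the marginal risk can land outside the lowest bin.
```

With any such mapping, the objection above can be checked mechanically: a risk placed in the lowest bin conditional on no overshoot can still have a marginal probability that exceeds that bin once a non-trivial chance of overshoot is factored in.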