My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example, rather than to compare it with efforts to improve the far future of humanity.
I think that working in global health in a reflective and goal directed way is probably better for improving global health than “earning to give” to AMF. Similarly, I think that working directly on things that bear on the long term future of humanity is probably a better way of improving the far future of humanity than “earning to give” to efforts along these lines.
I’ll discuss particular opportunities to impact the far future of humanity later on.
My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example
That depends on what you want to know, doesn’t it? As far as I know the impact of AMF on x-risk, astronomical waste, and total utilons integrated over the future of the galaxies, is very poorly researched and not at all concrete. Perhaps some other fact about AMF is concrete and robustly researched, but is it the fact I need for my decision-making?
(Yes, let’s talk about this later on. I’m sorry to be bothersome but talking about AMF in the same breath as x-risk just seems really odd. The key issues are going to be very different when you’re trying to do something so near-term, established, without scary ambiguity, etc. as AMF.)
I’m somewhat confused by the direction that this discussion has taken. I might be missing something, but I believe that the points related to AMF that I’ve made are:
GiveWell’s explicit cost-effectiveness estimate for AMF is much higher than the cost per DALY saved implied by the figure that MacAskill cited.
GiveWell’s explicit estimates for the cost-effectiveness of the best giving opportunities in the field of direct global health interventions have steadily gotten lower, and by conservation of expected evidence, one can expect this trend to continue.
The degree of regression to the mean observed in practice suggests that there’s less variance amongst the cost-effectiveness of giving opportunities than may initially appear to be the case (the toy simulation after this list illustrates the mechanism).
By choosing an altruistic career path, one can cut down on the number of small-probability failure modes associated with one’s work.
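To make the regression-to-the-mean point concrete, here is a minimal toy simulation; all numbers are invented for illustration and have nothing to do with GiveWell’s actual models. It also shows why conservation of expected evidence bites: if you can already predict that closer scrutiny will revise the winner’s estimate downward, you should revise it now.

```python
import random

# Toy model: each charity has a true cost-effectiveness, but evaluators
# only observe a noisy estimate of it. (Illustrative numbers only.)
random.seed(0)
n_charities = 1000
true_values = [random.gauss(1.0, 0.5) for _ in range(n_charities)]
estimates = [t + random.gauss(0.0, 1.0) for t in true_values]

# Pick the charity that *looks* best according to the noisy estimates.
best = max(range(n_charities), key=lambda i: estimates[i])

print(f"estimate for the apparent best charity:  {estimates[best]:.2f}")
print(f"true value of the apparent best charity: {true_values[best]:.2f}")
# Selecting on noisy estimates guarantees that, on average, the chosen
# charity's estimate exceeds its true value, so later revisions tend down.
```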
I don’t remember mentioning AMF and x-risk reduction together at all. I recognize that it’s in principle possible that the “earning to give” route is better for x-risk reduction than it is for improving global health, but I believe the analogy between the two domains is sufficiently strong that my remarks on AMF have relevance (on a meta-level, not on an object level).
Yeah, I also have the feeling that I’m questioning you improperly in some fashion. I’m mostly driven by a sense that AMF is very disanalogous to the choices that face somebody trying to optimize x-risk charity (or rather total utilons over all future time, but x-risk seems to be the word we use for that nowadays). It seems though that we’re trying to have a discussion in an ad-hoc fashion that should be tabled and delayed for explicit discussion in a future post, as you say.
If I may list some differences I perceive between AMF and MIRI:
AMF’s impact is quite certain. MIRI’s impact feels more like a long shot, or even a pipe dream.
AMF’s impact is sizeable. MIRI’s potential impact is astronomic.
AMF’s impact is immediate. MIRI’s impact is long term only.
AMF has photos of children. MIRI has science fiction.
In mainstream circles, donating to AMF gets you pats on the back, while donating to MIRI gets you funny looks.
Near mode thinking will most likely direct one to AMF. MIRI probably requires one to shut up and multiply. Which is probably why I’m currently giving a little money to Greenpeace, despite being increasingly certain that it’s far, far from the best choice.
One more difference:
AMF’s impact is very likely to be net positive for the world under all reasonable hypotheses.
MIRI appears to me to have a chance to be massively net negative for humanity. I.e. if AI of the level they predict is actually possible, MIRI might end up creating or assisting in the creation of UFAI that would not otherwise be created, or perhaps not created as soon.
But what if AMF saves a child who grows up to be a biotechnologist and goes on to weaponize malaria and spread it to millions?
If you try hard enough, you can tell a story where any effort to accomplish X somehow turns out to accomplish ~X, but one must distinguish possibility from the balance of probability.
Yes, and the story where the child grows up to be a biotechnologist and goes on to weaponize malaria and spread it to millions doesn’t pass the balance of probability test. The story that MIRI creates a dangerous AI fails to pass the balance of probability test only to the extent that one believes it is improbable that anyone can create such an AI. I do indeed consider it far more likely than not that there will never be the all-powerful AI you fear. And by that standard donations to MIRI are simply ineffective compared to donations to AMF.
However if I’m wrong about that and powerful FOOMing UFAIs are in fact possible, then I need to consider whether MIRI’s work is wise. If AIs do FOOM, there seems to me to be a very real possibility that MIRI’s work will either create a UFAI while trying to create a FAI, or alternatively enable others to do so. I’m not sure that’s more likely than that MIRI will one day create a FAI, but you can’t just multiply by the value of a very positive and very speculative outcome without including the possibility of a very negative and very speculative outcome.
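To spell that last sentence out in symbols, here is a purely illustrative expected-value decomposition; none of these probabilities or utilities are anyone’s actual estimates:

$$\mathbb{E}[U] = p_{+}U_{+} + p_{-}U_{-} + (1 - p_{+} - p_{-})\cdot 0,$$

where $p_{+}$ is the probability of the very good speculative outcome and $p_{-}$ that of the very bad one. If the speculative loss $U_{-}$ has the same astronomical magnitude as the speculative gain $U_{+}$, then quoting $p_{+}U_{+}$ alone overstates the case, and the sign of $\mathbb{E}[U]$ turns on the poorly known ratio $p_{-}/p_{+}$.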
The story that MIRI creates a dangerous AI fails to pass the balance of probability test only to the extent that one believes it is improbable that anyone can create such an AI.
[...]
However if I’m wrong about that and powerful FOOMing UFAIs are in fact possible, then I need to consider whether MIRI’s work is wise. If AIs do FOOM, there seems to me to be a very real possibility that MIRI’s work will either create a UFAI while trying to create a FAI, or alternatively enable others to do so.
If you raise the probability of uFAI enough for MIRI to kill everyone, the probability of someone else doing it goes up even more.
Maybe. I’m not sure about that though. MIRI is the only person or organization I’m aware of that seems to want to create a world-controlling AI; and it’s the world-controlling part that I find especially dangerous. That could send MIRI’s AI in directions others won’t go. Are there other organizations attempting to develop AIs to control the world? Is anyone else trying to build a benevolent dictator?
MIRI’s stated goal is more meta:
The Machine Intelligence Research Institute exists to ensure that the creation of smarter-than-human intelligence benefits society.
They are well aware of the dangers of creating a uFAI, and you can be certain they will be really careful before they push a button that has the slightest chance of launching the ultimate ending (good or bad). Even then, they may very well decide that “being really careful” is not enough.
Are there other organizations attempting to develop AIs to control the world?
It probably doesn’t matter, as any uFAI is likely to emerge by mistake:
Anthropomorphic ideas of a “robot rebellion,” in which AIs spontaneously develop primate-like resentments of low tribal status, are the stuff of science fiction. The more plausible danger stems not from malice, but from the fact that human survival requires scarce resources: resources for which AIs may have other uses.
Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal. For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans.
Are there other organizations attempting to develop AIs to control the world? Is anyone else trying to build a benevolent dictator?
Is MIRI attempting to develop any sort of AI? I understood the current focus of its research to be the logic of Friendly AGI, i.e. given the ability to create a superintelligent entity, how do you build one that we would like to have created? This need not involve working on developing one.
AMF’s impact is very likely to be net positive for the world under all reasonable hypotheses.
That seems like a bizarre belief to hold. Or perhaps just overwhelmingly shortsighted. There are certainly reasonable hypotheses in which more people alive right now result in worse outcomes a single generation down the line, without even considering extinction level threats and opportunities. The world isn’t nearly easy enough to model and optimize for us to be that certain a disruptive influence on that scale will be a net positive under all reasonable hypotheses.
Would you care to cite any such reasonable hypotheses? I.e. under what assumptions do you think that saving a random poor person’s life is likely to be a net negative? Sum over the number of lives saved and even if one person grows up to be a serial killer, the total is still way positive. Can you really defend a situation in which it is preferable to have living people today die from malaria?
The problem with MIRI-hypothesized AI (beyond its implausibility) is that we don’t get to sum over all possible results. We get one result. Even if the chance of a good result is 80%, the chance of a disastrous result is still way too high for comfort.
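As a minimal sketch of the structural difference being claimed here (all numbers deliberately made up): a sum of many roughly independent outcomes concentrates tightly around its positive expectation, while a single correlated gamble never averages out, however favorable its expected value.

```python
import random

random.seed(1)

# Many roughly independent outcomes (the sum-over-lives argument): each
# saved life counts +1; very rarely an individual turns out disastrous.
def many_small(n=10_000, p_bad=1e-4):
    return sum(-100 if random.random() < p_bad else 1 for _ in range(n))

# One correlated outcome (the one-shot AI argument): 80% chance of a very
# good result, 20% chance of a symmetric disaster. Nothing averages out.
def one_shot(p_good=0.8, stake=1_000_000):
    return stake if random.random() < p_good else -stake

sums = [many_small() for _ in range(100)]
print(min(sums), max(sums))                   # tightly clustered, all positive
print(sorted(one_shot() for _ in range(10)))  # all-or-nothing swings of a million
```

The first bet gets law-of-large-numbers protection; the second is a single draw, so its variance never goes away.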
Would you care to cite any such reasonable hypotheses? I.e. under what assumptions do you think that saving a random poor person’s life is likely to be a net negative? Sum over the number of lives saved and even if one person grows up to be a serial killer, the total is still way positive.
Most obviously it could cause an increase in world GDP without a commensurate acceleration in various risk prevention mechanisms. Species can evolve themselves to extinction and in a similar way humans could easily develop themselves to extinction if they are not careful or lucky. Messing around with various aspects of the human population would influence this… in one direction or another. It’s damn hard to predict.
Having a heuristic “short term lives saved == good” is useful. It massively simplifies calculations, and if you have no information either way about side effects of the influence then it works well enough. But it would be a significant epistemic error to mistake that heuristic, a tool for operating under uncertainty, for genuine confidence about the unpredictable (or difficult to predict) system in which you are operating.
Can you really defend a situation in which it is preferable to have living people today die from malaria?
What is socially defensible is not the same thing as what is accurate. But that isn’t the point here. All else being equal I would prefer AMF to have an extra million dollars to spend than to not have that extra million dollars. The expected value is positive. What I criticise is “very likely under all reasonable hypotheses” which is just way off. I do not have the epistemic resources to arrive at that confidence and I believe that you are arriving at that conclusion in error, not because of additional knowledge or probabilistic computational resources.
In fact, I’d expect AMF to have a net-negative impact (and a large one at that) a few decades down the line, unless there are unrealistic, unprecedented, imperialistic-in-scope, gigantic efforts to educate and provide for the dozen then-adult children (and their dozen children) a saved-from-malaria child can typically have.
Here’s Tom Friedman in his recent “Tell Me How This Ends” column:
I’ve been traveling to Yemen, Syria and Turkey to film a documentary on how environmental stresses contributed to the Arab awakening. As I looked back on the trip, it occurred to me that three of our main characters — the leaders of the two Yemeni [different countries, same dynamic] villages that have been fighting over a single water well and the leader of the Free Syrian Army in Raqqa Province, whose cotton farm was wiped out by drought — have 36 children among them: 10, 10 and 16.
It is why you can’t come away from a journey like this without wondering not just who will rule in these countries but how will anyone rule in these countries?
Do you really want to propose that it is better to let children in poor countries die of disease now than to save them, because they might have more children later? My prior on this is that you’re trolling, but if you really believe that and are willing to state it that baldly, then it might be worth having a serious conversation about population.
I’m not trolling. It’s a very touchy subject for sure. I would certainly highly prefer a world in which AMF succeeds if it is coupled with the necessary, massive changes to deal with the consequences of AMF succeeding.
A world in which just AMF succeeds, but in which the changes to deal with the 5 or 6 additional persons for every child surviving malaria do not happen, is heading towards even greater disaster. The birth rate is not a “might have more children”; it’s a probabilistic certainty, without the aforementioned new pseudo-imperialism.
However, nation-building and uplifting civil-war-ravaged tribal societies is a task that dwarfs AMF (plenty of recent examples), or even the worldwide charity budget. Yet without it, what’s gonna happen, other than mass famines and other catastrophes?
I’m not talking about general Malthusian dynamics, but about countries whose population far exceeds the natural resources to support it, and which often do not offer the political environment, the infrastructure or the skills to exploit and develop what resources they have, other than trade them to the Chinese to prop up the ruling classes.
I’d expect a world in which AMF succeeds, leading to predictable tragedies on a more massive scale down the line, to be worse off than a world without AMF, with tragedies on a smaller scale. (To reiterate: a world with AMF succeeding and a long-term perspective for the survivors would be much better still.)
I’d rather contribute to charities which do not promise short-term benefits with probable long-term calamities: e.g. education projects and the development of stable civil institutions in such countries. (The picture gets fuzzier because eliminating certain disruptive diseases also has such positive externalities, but to a smaller degree.)
This ignores the social-scientific consensus that reducing infant mortality leads to reductions in family sizes. The moral dilemma you’re worried about doesn’t exist.
Citations needed. The relevant time horizons here are only 2-3 generations; do you suggest that societal norms will adapt faster than that (Edit: without accompanying larger efforts to build civil institutions)? The population explosion in, say, Bangladesh (1951: 42 million, 2011: 142 million) seems to suggest otherwise.
The phenomenon HaydnB refers to is the demographic transition, the theory of which is perhaps the best-established theory in the field of demography. Here are two highly-cited reviews of the topic.
The relevant time horizons here are only 2-3 generations; do you suggest that societal norms will adapt faster than that? The population explosion in, say, Bangladesh (1951: 42 million, 2011: 142 million) seems to suggest otherwise.
HaydnB’s referring to family size, you’re referring to population, and it’s quite possible for the second to increase even as the first drops. This appears to be what happened in Bangladesh. I have not found any data stretching back to 1951 for completed family size in Bangladesh, but here is a paper that plots the total fertility rate from 1963 to 1996: it dropped from just under 8 to about 3½. I did find family size data going back to 1951 for neighbouring India: it fell from 6.0 in 1951 to 3.3 in 1997, with a concurrent decrease in infant mortality.
So I’m not HaydnB, but I have to answer your question with a “yes”: fertility norms can change, and have changed, greatly in the course of 2-3 generations. Bangladesh’s population, incidentally, is due to top out in about 40 years at ~200 million, only 40% higher than its current population.
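For what it’s worth, simple compound-growth arithmetic on the figures quoted in this exchange (taking them at face value) already shows the deceleration:

$$\left(\tfrac{142}{42}\right)^{1/60} - 1 \approx 2.1\% \text{ per year (1951-2011)}, \qquad \left(\tfrac{200}{142}\right)^{1/40} - 1 \approx 0.9\% \text{ per year (to the projected peak)}.$$

The projected peak embeds an average growth rate less than half the historical one, consistent with the falling fertility figures cited.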
From the first review:
During the transition, first mortality and then fertility declined, causing population growth rates first to accelerate and then to slow again, moving toward low fertility, long life and an old population.
From the second review:
It is true, however, that mortality reductions in poor countries and the consequent rapid growth of population may impede capital formation and other aspects of development. (Goes on to call the consequences mostly positive.)
Like Democratic Peace Theory, the demographic transition has historically been modeled after the now-developed countries. At least that is where we get the latter “stages” from. Countries in which the reduction in mortality was achieved from within the country, a token of the relative strength of some aspects of its civil society. Not countries in which mortality reduction would be a solely external influence, transplanted from a more developed society into a tribal society.
but I have to answer your question with a “yes”: fertility norms can change, and have changed
Note that the question was whether societal norms will adapt faster than that, not whether they can and have in e.g. European countries. Especially if—and that’s the whole point of the dilemma—there are stark interventions (AMF) only in infant and disease mortality, without the much more difficult and costly interventions in nation building.
Will reducing infant / disease mortality alone thrust a country into a more developed status? Rather the contrary, since even the sources agree that the immediate effect would be even more of the already catastrophic population growth. Once you’re over the brink, a silver lining on the horizon isn’t as relevant.
As with the Bangladesh example’s “only 40% higher than its current population” (and Bangladesh is comparatively developed anyway): if that figure translated (which it doesn’t) to Sub-Saharan populations, that would already be a catastrophe right there.
The question is, without nation building, would such countries be equipped to deal with just a 40% population rise over 40 years, let alone the one that’s actually prognosticated?
HaydnB doesn’t see the dilemma, since he seems to say that taking a tribal society, then externally implementing mortality reductions without accompanying large scale nation building will still reduce family sizes drastically, to the point that there are no larger scale catastrophes, even without other measures.
[quotations from reviews about population growth, emphasizing rapid/accelerating population growth]
These are consistent with what I wrote. Moreover, the world has already passed through the phase of accelerating population growth. The world’s population was increasing most rapidly 20-50 years ago (the exact period depends on whether one considers relative or absolute growth rates).
Like Democratic Peace Theory, the demographic transition has historically been modeled after the now-developed countries. [...] Not countries in which mortality reduction would be a solely external influence, transplanted from a more developed society into a tribal society.
True enough, but mostly a moot point nowadays, because we’re no longer just predicting a fertility decline based on history; we’re watching it happen before our eyes. The global total fertility rate (not just mortality) has been in freefall for 50 years and even sub-Saharan Africa has had a steadily falling TFR since 1980.
Note that the question was whether societal norms will adapt faster than that, not whether they can and have in e.g. European countries.
Right, but the fact that they can change, have changed, and continue to change (in two large, poor, and very much non-European countries) is good evidence they’ll carry on changing. If medical interventions and other forms of non-institutional aid haven’t arrested the TFR decline so far, why would they arrest it in future?
Will reducing infant / disease mortality alone thrust a country into a more developed status? Rather the contrary, since even the sources agree that the immediate effect would be even more of the already catastrophic population growth.
The long-run effect matters more than the immediate effect (which ended decades ago).
The question is, without nation building, would such countries be equipped to deal with just a 40% population rise over 40 years, let alone the one that’s actually prognosticated?
The question I was addressing was the narrower one of whether reducing infant mortality reduces family sizes. Correlational evidence suggests (though does not prove) it does, maybe with a lag of a few years. I know of no empirical evidence that reductions in infant mortality increase family size in the long run, although they might in the short run.
Still, I might as well comment quickly on the broader question. As far as I know, the First World already focuses on stark interventions (like mass vaccination) more than nation building, and has done since decolonization. This has been accompanied by large declines in infant mortality, TFRs & family sizes, alongside massive population growth. It’s unclear to me why carrying on along this course will unleash disaster, not least because the societies you’re talking about are surely less “tribal” now than they were 10 or 20 or 50 years ago.
I don’t want to come off as Dr. Pangloss here. It’s quite possible global disaster awaits. But if it does happen, I’d be very surprised if it were because of the mechanism you’re proposing.
If development of newer institutions is what you are interested in, you can choose to contribute to charter cities or seasteading. That would be an intermediate risk-reward option between a low-risk option like AMF and a high-risk, high-reward one like MIRI/FHI.
I’ll grant that MIRI could accelerate the creation of AGI, if their efforts to educate people about UFAI risks are particularly ineffective. But as far as UFAI creation at all is concerned, there are any number of very smart idiots in the world who would love to be on the news as “the first person to program an artificial general intelligence”. Or to be the first person to use a general AI to beat the stock market, as soon as enough parts of the puzzle have been worked out to make one by pasting together published math results. (Maybe a slightly more self-aware variation of AIXI-mc would do the trick.)
In my view, AGI is more or less inevitable, and MIRI is seemingly the only group publicly interested in making it safe.
by conservation of expected evidence, one can expect this trend to continue
Not really related to the current discussion, but I want to make sure I understand the above statement. Is this assuming that the trend has not already been taken into account in forming the estimates?
Yes — the cost-effectiveness estimate has been adjusted every time a new issue has arisen, but on a case by case basis, without an attempt to extrapolate based on the historical trend.