Blind Spot: Malthusian Crunch
In an unrelated thread, one thing led to another and we got onto the subject of overpopulation and carrying capacity. I think this topic needs a post of its own.
TLDR mathy version:
let f(m,t) be the population that can be supported using the fraction of Earth’s theoretical resource limit m we can exploit at technology level t
let t = k(x) be the technology level at year x
let p(x) be population at year x
What conditions must the constant m and the functions f(m,k(x)), k(x), and p(x) satisfy in order to ensure that f(m,k(x)) - p(x) > 0 for all x > today()? What empirical data are relevant to estimating the probability that these conditions are all satisfied?
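A minimal numeric sketch makes the safety condition concrete: the Crunch is avoided as long as the supportable population f(m, k(x)) stays above the actual population p(x). The functional forms and growth rates below are purely illustrative assumptions, not claims from this post.

```python
# Illustrative sketch only: assumed functional forms, not claims from the post.
M = 1.0  # fraction of Earth's theoretical resource limit we can exploit

def k(x):
    """Technology level at year x (assumed linear growth)."""
    return 1.0 + 0.01 * x

def f(m, t):
    """Population supportable at capacity fraction m, tech level t (assumed)."""
    return m * 1e10 * 1.02 ** t

def p(x):
    """Population at year x (assumed 1.1% annual growth from 7 billion)."""
    return 7e9 * 1.011 ** x

def safe_through(years):
    """First year x where the margin f(M, k(x)) - p(x) goes non-positive,
    or None if it stays positive over the whole horizon."""
    for x in range(years + 1):
        if f(M, k(x)) - p(x) <= 0:
            return x
    return None

print(safe_through(200))  # -> 36 under these made-up parameters
```

Under these made-up rates the margin goes negative within a few decades; the whole point of the question is which empirically grounded rates we should actually plug in.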
Long version:
Here I would like to explore the evidence for and against the possibility that the following assertions are true:
Without human intervention, the carrying capacity of our environment (broadly defined1) is finite while there are no *intrinsic* limits on population growth.
Therefore, if the carrying capacity of our environment is not extended at a sufficient rate to outpace population growth and/or population growth does not slow to a sufficient level that carrying capacity can keep up, carrying capacity will eventually become the limit on population growth.
Abundant data from zoology show that the mechanisms by which carrying capacity limits population growth include starvation, epidemics, and violent competition for resources. If the momentum of population growth carries it past the carrying capacity, an overshoot occurs, meaning that the population doesn’t just fall back to a sustainable level but rather plummets drastically, sometimes to the point of extinction.
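The overshoot mechanism described above can be sketched with a delayed-logistic toy model: when density dependence acts with a lag (resources consumed now only hurt the population later), the population sails past carrying capacity K and then crashes below it. All parameters are illustrative, not fitted to any real species.

```python
# Delayed-logistic toy model of overshoot: growth this step depends on how
# crowded things were `lag` steps ago, so the population can blow past K.
def delayed_logistic(n0=100.0, K=1000.0, r=0.3, lag=3, steps=60):
    n = [n0] * (lag + 1)  # seed the lagged history
    for t in range(lag, lag + steps):
        n.append(max(n[t] + r * n[t] * (1 - n[t - lag] / K), 0.0))
    return n

traj = delayed_logistic()
peak = traj.index(max(traj))
print(max(traj) > 1000.0)         # overshoot: population exceeds K
print(min(traj[peak:]) < 1000.0)  # ...followed by a decline below K
```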
The above three assertions imply that human intervention (expanding the carrying capacity of our environment in various ways and limiting our birth-rates in various ways) is what we have to rely on to prevent the above scenario; let’s call it the Malthusian Crunch.
Just as the Nazis have discredited eugenics, mainstream environmentalists have discredited (at least among rationalists) the concept of finite carrying capacity by giving it a cultish stigma. Moreover, solutions that rely on sweeping, heavy-handed regulation have received so much attention (perhaps because the chain of causality is easier to understand) that to many people they seem like the *only* solutions. Finding these solutions unpalatable, they instead reject the problem itself. And by they, I mean us.
The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement to increase the “safety zone” between expansion of carrying capacity and population growth. Moreover, we are close to a level of technology that would allow us to start colonizing the rest of the solar system. Obviously any given niche within the solar system will have its own finite carrying capacity, but it will be many orders of magnitude higher than that of Earth alone. Expanding into those niches won’t prevent die-offs on Earth, but will at least be a partial hedge against total extinction and a necessary step toward eventual expansion to other star systems.
Please note: I’m not proposing that the above assertions must be true, only that they have a high enough probability of being correct that they should be taken as seriously as, for example, grey goo:
Predictions about the dangers of nanotech made in the 1980′s have shown no signs of coming true. Yet, there is no known logical or physical reason why they can’t come true, so we don’t ignore it. We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies from an engineering mindset rather than a political one.
Shouldn’t we hold ourselves to the same standard when discussing population growth and overshoot? Substitute in some other existential risks you take seriously. Which of them have an expectation2 of occurring before a Malthusian Crunch? Which of them have an expectation of occurring after?
Footnotes:
1: By carrying capacity, I mean finite resources such as easily extractable ores, water, air, EM spectrum, and land area. Certain very slowly replenishing resources such as fossil fuels and biodiversity also behave like finite resources on a human timescale. I also include non-finite resources that expand or replenish at a finite rate such as useful plants and animals, potable water, arable land, and breathable air. Technology expands carrying capacity by allowing us to exploit all resources more efficiently (paperless offices, telecommuting, fuel efficiency), open up reserves that were previously not economically feasible to exploit (shale oil, methane clathrates, high-rise buildings, seasteading), and accelerate the renewal of non-finite resources (agriculture, land reclamation projects, toxic waste remediation, desalinization plants).
2: This is a hard question. I’m not asking which catastrophe is the most likely to happen ever while holding everything else constant (the possible ones will be tied for 1 and the impossible ones will be tied for 0). I’m asking you to mentally (or physically) draw a set of survival curves, one for each catastrophe, with the x-axis representing time and the y-axis representing the fraction of Everett branches where that catastrophe has not yet occurred. Now, which curves are the upper bound on the curve representing the Malthusian Crunch, and which curves are the lower bound? This is how, in my opinion (as an aging researcher and biostatistician, for whatever that’s worth) you think about hazard functions, including those for existential hazards. Keep in mind that some hazard functions change over time because they are conditioned on other events or because they are cyclic in nature. This means that the thing most likely to wipe us out in the next 50 years is not necessarily the same as the thing most likely to wipe us out in the 50 years after that. I don’t have a formal answer for how to transform that into optimal allocation of resources between mitigation efforts but that would be the next step.
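A toy version of this survival-curve exercise: two hypothetical catastrophes, one with a constant hazard and one with a hazard that grows over time (say, conditioned on accumulating resource depletion). The hazard numbers are invented purely to show how the ranking of risks can flip between one 50-year window and the next.

```python
import math

def survival(hazard, years):
    """S(t) = exp(-cumulative hazard): fraction of Everett branches where the
    catastrophe has not yet occurred by year t."""
    cum, out = 0.0, []
    for t in range(years):
        cum += hazard(t)
        out.append(math.exp(-cum))
    return out

# Invented hazards for two hypothetical catastrophes:
constant_risk = survival(lambda t: 0.004, 100)      # flat 0.4% per year
growing_risk = survival(lambda t: 0.0001 * t, 100)  # hazard rising over time

# Which risk dominates depends on the time horizon:
print(constant_risk[49] < growing_risk[49])  # True: constant risk bites first
print(constant_risk[99] > growing_risk[99])  # True: growing risk overtakes by year 100
```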
Advocating population control as the most important priority there is damages efforts at vaccination.
If it’s plausible that your morals are okay with giving vaccinations in a way that damages human reproductive capacity, your efforts to vaccinate people against important diseases run into trouble.
There are enough conspiracy theorists out there claiming that the UN cares about population control enough to vaccinate in a way that reduces reproductive capacity that this is a real issue. It’s valuable to signal that you care more about saving lives than about population control when you want an African nation to welcome your help in vaccinating its population to get rid of nasty diseases.
The politics of going to an African nation and saying “We come with an engineering solution to reduce your population growth” are just terrible.
An African community is less likely to take your condoms when they think that you want to reduce their population growth than when they think you care about protecting them from AIDS.
Politics matter. Trying to tackle the issue of population growth while ignoring politics carries the danger that you make a lot of political mistakes that hurt your cause.
Yes, it’s a socially tough question. It might be so tough that the bulk of mitigation efforts might have to be put into the technological advancement side of the equation, and that seems to be what’s happening, though it’s unclear how deliberate this is.
But just because publicly acknowledging the nature of a problem will make one unpopular doesn’t mean that one should privately start to deny it. On the contrary, one should correct for the Koolaid by privately reminding oneself what the real problem is, and that a socially acceptable framing of the problem has to be part of any solution that one expects to work.
Resources inside a light cone go according to T cubed, population growth is exponential: thus we see resource limitation ubiquitously: Malthus was (essentially) correct.
Maybe “T cubed” will turn out to be completely wrong, and there will be some way of getting hold of exponential resources—but few will be holding their breath for news of this.
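The "T cubed" point is easy to check numerically: however large the constant in front of the cubic, an exponential eventually overtakes it. A quick sketch, with an arbitrary resource scale and an illustrative 1% growth rate:

```python
# However large the head start, exponential growth overtakes any polynomial.
def first_year_exponential_wins(resource_scale=1e12, growth=1.01):
    """First t at which compounding growth exceeds the cubic resource bound."""
    pop, t = 1.0, 1
    while pop <= resource_scale * t ** 3:
        pop *= growth
        t += 1
    return t

print(first_year_exponential_wins())  # a few thousand "years", but it happens
```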
Stein’s Law: If something cannot go on forever, it will stop.
On a more basic note, population growth is exponential only under certain conditions which tend to be rare and do not persist.
It’s not just population growth. It’s resource growth. Our entire modern system of economy is based on the idea of exponential growth. This system must collapse eventually, it’s only a question of how many planets we consume before that happens.
I’ve heard this phrase before. I see no reason to believe it’s true. Japan has been at basically zero growth for more than 20 years by now and seems to be doing fine. Sure, it could be doing better but it’s not like its system of economy collapsed.
Now, government social programs tend to be based on the hope of exponential growth, but that’s a different problem altogether.
Politicians and other decision makers base their economic decisions on the assumption of growth. You are correct that continued exponential growth is not necessary for a healthy society. In fact we must eventually learn to live with zero or near-zero growth, so we had best start doing it now and adjusting our policies accordingly.
Hopefully more than one. There are a lot of underutilized planets out there, even within our own solar system.
Upvote.
This just illustrates the craziness. You present a fact of basic algebra in the abstract, and nobody has a problem with it, even though it’s a more dire prediction, because it is fully general in its relevance.
I say the same thing but on a local scale, and get a very vigorous reaction.
The Jevons paradox: technological improvements make each unit of natural resources more useful, increasing the rate at which they are used up. (Though I’m not convinced that most environmentalists actually are opposed to all relevant technological improvements. I’ve definitely never heard any complain about solar energy research, for example.) Additionally, a safety margin maintained through an ever-increasing rate of technological advancement is brittle and seems like it should increase catastrophic risk. An analogy: “let’s not build below sea level” is more robust than “intricate dyke system vulnerable to catastrophic failure.”
I like the idea of space colonization, but it’s not clear that it’s a practical, let alone robust, way to get our eggs into more baskets.
On existential risk overall, my reading on AI has been pushing me towards the point of view that actually global warming → civilizational collapse may be our best hope for the future, if it can only happen fast enough to prevent the development of a superintelligence.
I read somewhere that to calibrate the logistics of getting everyone off Earth, you should consider how much it would cost and how long it would take to load every human onto a passenger jet and fly them all to the same continent. I wish I could find that essay. Long story short, it would take a loooot of resources. So, it probably won’t be our eggs in particular getting into more baskets, but at least the eggs of some fellow humans.
I see two outcomes: either there are enough exploitable resources left to rebuild a technological civilization, in which case someone will get back to pursuing superintelligence, or there are not enough exploitable resources left to rebuild a technological civilization, in which case we piss away our last days throwing spears and dying of dysentery. Or maybe we evolve into non-tool-using creatures like in Galapagos. In any case, the left-hand side of the Drake equation remains at zero. Breaking out of the overshoot/collapse cycle means the risk of going out with a bang, but the alternative is the certainty of going out with a whimper.
As far as x-risk is concerned, we all have the same eggs.
I am not sure that’s true. Consider a similar analogy: “let’s not develop agriculture” is more robust than “dependence on fickle weather or intricate irrigation system”. Is that so? Not likely—you just get hit by a different set of risks. One day a lot of pale people with thundersticks appear, they kill your men and herd women and children into reservations to die.
Given the fate of the societies which did not climb the technological tree sufficiently fast, I’d say throttling down progress sure doesn’t look like a wise choice.
I completely agree, that’s a great point! The sixth one, to be exact.
Being willing to take on greater existential risks will definitely help in short- or medium-term competition, as your example shows (greater risk of famine in exchange for ability to conquer other societies). So no, I don’t think we can necessarily coordinate to avoid a “brittle” situation in which we are vulnerable to catastrophic failure. That doesn’t mean it’s not desirable.
Individual technological advances can increase the efficiency of resource utilization, but presently and historically higher levels of technological development are correlated with higher per capita resource consumption.
Anyway, even if future technologies could lower per capita resource consumption, how do you accelerate the rate of technological advancement?
That’s a pretty strong claim. How do you support it?
That should be a new discussion.
The fact that all serious criticisms of Mars 1 have to do with whether or not they’ll raise enough money to send a private mission to Mars in 2023, rather than any question of technological feasibility.
By colonizing, I don’t mean Dyson Cloud within our lifetimes, obviously. Just a permanent foothold outside Earth from which to start.
You claimed that people ignore or outright oppose trying to accelerate the rate of technological advancement. Could it be instead that nobody has any idea how to do it?
I’m under the impression that Mars 1 is a hoax, most likely intended to be the premise of a survivor-like “reality” tv show about the selection of the prospective “colonists”.
If I understand correctly, even a manned flyby mission to Mars is considered technologically difficult, mainly due to ionizing radiation concerns.
Setting up a settlement that constantly depended on Earth for supplies (a Martian version of the ISS, essentially) might be technologically possible, but only at enormous cost, many orders of magnitude more than what is claimed by Mars 1.
An independent settlement seems quite beyond the possibilities of present and foreseeable technology.
And global GDP is about four orders of magnitude greater than NASA’s budget. What requirements do you see as being difficult for an independent settlement? I find both a solar array capable of delivering several terawatts, and a system that, given enough energy, can recycle all the air, food, and water used by a colony, to be well within the “foreseeable technology” category, especially if we were to start pouring in several billion dollars a year in research.
An independent settlement has to locally manufacture all its food, consumable supplies and broken equipment. Since it can’t realistically trade anything with Earth, it must have a self-sustaining closed economy.
Most of the stuff we consume in our everyday lives, even food, is the product of complex industrial processes, involving large factories that use lots of energy, many different kinds of resources that come from every corner of the world, and lots of labour, exploiting economies of scale.
The type of stuff that would be needed in a Martian settlement would be even more hi-tech. There is no practical way to do all this hi-tech manufacturing on a small scale in a hostile, resource-starved environment with current technology.
Keep in mind that even most of the Earth’s surface is uninhabited. There are no permanent settlements in the middle of the Sahara desert, or at the South Pole, or in the oceans. Anything like that would be way more technologically feasible than a space settlement, and it wouldn’t even need to be fully independent, yet we don’t settle there.
EDIT:
For reference, the ISS already costs several billion dollars a year, and it’s far from independent. NASA estimates that a manned mission to Mars would cost about 100 billion dollars.
Very, very possible.
I’m not saying it’s easy. I guess I calibrate my concept of foreseeable technology as: sleeker, faster mobile devices being trivially predictable, fusion as possible, and general-purpose nanofactories as speculative.
On that scale, I would place permanent off-world settlements as closer than nanofactories, around the same proximity as fusion. Closer, since no new discoveries are required, only an enormous outpouring of resources into existing technologies.
If the permanent Martian settlements are to do their own manufacturing, it seems that they would need both fusion power and nanofactories, or something equivalent. The type of energy sources and resource ores we use on Earth for manufacturing would probably not be available in any sufficient amount.
You might be right. I hope not, though, because that means it will take even longer to escape from the planetary cycle of overshoot and collapse.
Then again, it’s good to be ready for the worst and be pleasantly surprised if things turn out better than expected.
I’d be suspicious of that ‘many’ unless you plan on moving lots of asteroids in-system. Earth is some prime real estate for humans.
I’m envisioning a slowly growing Dyson Cloud, limited by the total output of the sun, availability of atoms in the solar system, and the five or so billion years until the sun burns out.
So, if not “many” orders of magnitude then would perhaps “several” be appropriate?
That’s not a ‘niche’, that’s completely rearranging the place.
To me this looks like a very familiar mulberry bush around which plenty of people have been going since the early 1970s.
Are you claiming something different from the classic population-bomb limits-to-growth arguments? Because if you do not, there seems little reason to revisit this well-trampled territory.
Let me just step back and ask you what your goal is. Is it...
Convincing me to stop discussing existential risks?
Convincing me to stop discussing some class of existential risks that includes the Malthusian Crunch?
Convincing me to stop discussing the Malthusian Crunch specifically?
How do you hope to benefit from discouraging the discussion of this topic or topics?
Were you all over Robin Hanson for his Malthusian scenario as well?
The Malthusian Crunch is not an existential risk. It leads to a smaller and poorer humanity, but not to the absence of humanity.
My goal is to lead you to light and wisdom, of course :-P
Other than that I’m just expressing my views and pointing out holes in your constructions.
Fair enough.
But each time we have to rebuild from a collapse, we have a degraded carrying capacity and fewer easily exploitable resources with which to rebuild. This becomes an existential risk if the cycle repeats so many times that we can no longer rebuild. Or if it keeps us stuck on one planet long enough for one of the other existential risks to get us.
Like I said somewhere else, overshoot is like AIDS—it doesn’t kill you, it just predisposes you to other problems that do.
At any rate, there’s always the selfish motive. I don’t want to have to waste time rebuilding even after one collapse because I might not live to see it to completion, and what I want even less is to already be cryosuspended when the next one happens because I won’t be revived and won’t be around to do anything about it.
I think that anything that risks a collapse of civilization is an existential risk.
When the Roman Empire fell, people in Europe were able to fall back on iron-age technologies to survive; there was a mass die-off, but humanity in Europe was able to survive and eventually recover. In most of the first world, that wouldn’t really be an option today.
If our civilization were to collapse now, the human race would find itself in a dramatically resource-exhausted world, in the middle of a mass-extinction event, without our technology to help us; it’s possible the human race might not survive. And even if it does, I’m not sure it’s guaranteed that we would come back up to a technological civilization.
I think such an approach dilutes the useful concept of “existential risk” into uselessness.
I would agree that the collapse of the Western civilization would be unpleasant for everyone involved. But that’s a bit different thing.
Let me be a little more clear. My rough estimate would be that a complete collapse of modern civilization in the next 50 years would have in the neighborhood of a 25% chance of resulting in complete human extinction, from a combination of natural factors, resource depletion, environmental depletion, and the inevitable wars that would accompany the collapse.
I think that that kind of scenario is far more likely in the near future than many other existential risks people worry about.
First, I think we’re using the word “civilization” in different senses. You’re talking about the single global human civilization, where civilization means having running water and taking tea in the afternoon. I’m talking about multiple civilizations, which are, basically, long-lived cultural agglomerations (e.g. there is a Western civilization but China isn’t part of it).
That will probably depend on exactly how the modern civilization collapses. An all-out nuclear exchange will have different consequences than a snowballing freeze-up of the financial payments system.
In any case, I find complete human extinction as the result of the civilization collapse to be highly unlikely. There are peoples who haven’t changed much for thousands of years—would they even notice? And absent things like nuclear winter, why would they die out?
Moreover, let’s even say 99% of the North American population will die. OK. But what would kill the remaining 1%? Sure, technology will revert to a much more primitive form, but then humans have already been there, they survived quite nicely.
I would say that the whole global system is so intermingled and global right now that a complete collapse of civilization of the type I am talking about would likely have to include the entire world if it happened at all. 1500 years ago Roman civilization could fall without badly hurting Chinese civilization, but I don’t think that’s true anymore.
In the kind of global demographic overexertion and resource exhaustion leading to a total collapse that we’re talking about, a lot of traditional food sources would be exhausted before the collapse. In the face of impending global starvation, I would expect every major fishery in the world to be rapidly wiped out, I would expect the rainforests to be burned for more farmland, I would expect decent soil and easily available water to be completely exhausted, etc. I would expect that process would take away most of the resources that people need to survive, and that people living a traditional hunter-gatherer existence or a traditional subsistence-farming existence would probably have had their land and resources taken from them before the end. If we’re talking about billions of people facing potential starvation, I suspect that all thought of environmental preservation or sustainability would go right out the window, as well as concern for the well-being of aboriginal people.
There might be some pockets of people left living traditional lifestyles somewhere (that’s actually what I was thinking about when I put the extinction possibility at 25%, instead of higher), but even they would also be affected by global environmental destruction. (And, of course, small pockets of humans surviving on their own can have issues from lack of genetic diversity and such.)
What would they live on?
When the Roman Empire collapsed, the population of Europe dropped dramatically, perhaps by half according to some estimates, but people still remembered how to farm using old iron-age technology, and people still had the knowledge of how to build houses out of wood and straw when better building materials stopped coming from distant parts of the Empire, etc. It was a catastrophe, but people still had enough knowledge of how to survive without the civilization to hang on.
How many people in North America today do you think have the knowledge of how to farm without any technology at all? How many have the knowledge to forge their own farming tools? A few do; but places known to have organic farms or traditional farming knowledge (the Amish, for example) would likely be swamped by millions of starving refugees. And besides that, once a stretch of land has been farmed using industrial farming techniques for several decades, it is very hard to change it back into something that can be farmed with old-fashioned techniques; the soil is basically completely exhausted of its natural nutrients by that point, and can only be farmed with advanced techniques.
Total human extinction might not be the result, but I wouldn’t rule it out as a significant possibility.
And even if we didn’t end up with total extinction, remember that an existential risk is anything that prevents mankind from achieving its potential; you have to not just consider the risk of extinction, but then try to estimate the chances of us re-developing advanced technology after a collapse. That’s harder to estimate, but I don’t think it’s 100%.
Yes, and I suspect collapsing due to overpopulation is a much smaller risk than collapsing due to bad policy decisions made by people who overestimated overpopulation risks.
What kind of policy decisions are we talking about? As I posted elsewhere in this thread, I think the best way to control population is education, access to birth control, economic development in the third world, and women’s rights; that has worked better than anything else that has been tried. (Bizarrely I was downvoted for that; are people somehow opposed to education and aid for the third world? I don’t really understand.)
Seriously, you think policy decisions based on an overestimated overpopulation risk is an existential threat?
Or is it just fun to turn arguments around and say stuff like that? My Bayesian a posteriori are screaming that THIS is what you are doing here, to me.
For better or worse, there are people making policy decisions and I know of no reason why that would change on the time scales we’re working with.
At the moment, these decision makers are acting as though they believe:
Overpopulation is not related to environmental degradation, violent conflict, and resource depletion.
Technological progress is not the main risk mitigator against overpopulation and its various consequences.
Suppose the above conventional wisdom is incorrect. Either way, policy makers will make policy, and even if that is inherently a bad thing (a strong assumption), isn’t it better to limit the damage by having them work from a better approximation of reality?
So, if you agree with the conventional view, you have nothing to worry about (but I have yet to see here convincing arguments why I should agree with this view if I don’t already). If you disagree with the conventional view, that has implications at the very least for allowing its public apologists to stand un-debated and for which public policies and charitable activities you endorse. If you are undecided, then perhaps you’re curious to develop better estimates, because they may have bearing on your survival and prosperity. I know I am.
It could lead to absence of human civilization as we know it. A lot of things that we take for granted—like technological development—are actually quite fragile and depend on a lot of conditions being ‘just right’. Throughout history there are many examples of great civilizations reduced to poverty and stagnation.
I think I can pretty safely guarantee that in a thousand years there will be no human civilization as we know it. There will be something different, I have no idea what.
And there is a huge difference between “reduced to poverty and stagnation” and “there are no humans any more”.
Agreed; the question is: will it be better or worse. I can’t imagine a future where resources are scarce as being anything but bad.
This is not well-trampled territory, just willfully ignored territory, unless someone has something like a refutation of any of the sequence of assertions I make above, rather than the observation that it just hasn’t happened yet (in recent times).
Because here are some other predictions that haven’t happened yet: grey goo, asteroid impact (in recent times), nuclear war, global pandemic (in recent times), and unfriendly AI. All that any of these scenarios have going for them is that there are a priori reasons why they are possible. If the Club of Rome underestimated the impact of technological growth and so predicted disaster several decades early, that no more invalidates the underlying threat than it would if Eliezer made a falsifiable prediction (which he hasn’t as far as I know) about when self-augmenting AI will arrive but overestimated Moore’s Law and so predicted disaster several decades early.
But in response to your question, assertions 4-6 are a different perspective from mainstream environmentalism. Your reply sounds like you didn’t even read that far. Did you?
If so, which specific points do you disagree with? Forget what you think has been refuted by others long ago. What, if anything, do you see that’s implausible or contradictory?
Does the 1918 flu count?
It’s the reason I put (in recent times).
1918 isn’t recent in this context?
How does whether or not I call it recent alter the point I’m trying to make?
That point being: if we understand why something can happen, but we don’t understand why it hasn’t happened yet, we need to understand that before we decide that it’s safe to ignore.
Maybe it would help if you assigned one of the following labels: { theoretically possible but unlikely | likely | very likely | inevitable } to the future scenarios you’re concerned about.
I’m perfectly fine with allocating the same amount of money/attention/energy to the population overshoot problem as we are allocating now to the grey goo problem. As far as I know that’s indistinguishable from zero in any kind of a large-picture view.
Why don’t you think normal market mechanisms (the more scarce resource X is, the higher its price, the larger the incentives to use less, the more intense search for its replacement) will handle the problem?
It is not clear that markets can see that far in to the future. Part of the value of a high-ish discount rate is that it discounts the more unreliable far future predictions against the closer in ones. To someone with better future vision, the optimum discount rate would be lower.
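A rough illustration of the discount-rate point, with all numbers hypothetical: at a typical market discount rate, even an enormous loss a century out is worth only a tiny fraction of its face value today, while the lower rate appropriate for a better far-seer keeps far more of it in view.

```python
# Present value of a hypothetical future loss under different discount rates.
def present_value(future_loss, rate, years):
    return future_loss / (1 + rate) ** years

catastrophic_loss = 1e15  # hypothetical $1 quadrillion loss, 100 years out
print(present_value(catastrophic_loss, 0.07, 100))  # ~$1.15e12: shrunk ~870-fold
print(present_value(catastrophic_loss, 0.02, 100))  # ~$1.38e14: shrunk only ~7-fold
```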
Yes, of course. On the other hand what are your alternatives for better far-seers?
Someone who actually has better future vision can become very rich via the financial markets.
Edit: in response to thought provoking commentary from hairyfigment I updated the first set of human-made risks from marginal to conditional on no overshoot, and downgraded the risk of overshoot to likely. Thanks for your help.
Within the next 50 years...
grey goo : theoretically possible but unlikely
meteor impact : theoretically possible but unlikely
Yellowstone Caldera: theoretically possible but unlikely
gamma ray burster: theoretically possible but unlikely
solar flare: theoretically possible but unlikely
green goo | no overshoot: somewhat likely
global nuclear war | no overshoot: somewhat likely
global pandemic | no overshoot: somewhat likely
new dark ages | no overshoot: somewhat likely
near extinction due to climate change | no overshoot: theoretically possible but unlikely
widespread and severe suffering and death due to climate change | no overshoot: likely
overshoot: somewhat likely
green goo | overshoot: somewhat likely
global nuclear war | overshoot: likely
global pandemic | overshoot: likely
new dark ages | overshoot: very likely
near extinction due to climate change | overshoot: somewhat likely
widespread and severe suffering and death due to climate change | overshoot: inevitable
Could you use probability numbers instead of words like likely/unlikely/very likely? It’s difficult to know what you mean by them.
I notice you have the probability of various scenarios conditional on the overshoot but no probability for the overshoot itself.
Shouldn’t matter, I don’t assign high weight to amateur probabilities. I believe bokov’s argument is that this threat should be taken seriously purely on the grounds that we take far more theoretical dangers seriously. Do we only take the hypotheticals seriously? If so, this is a serious oversight.
That is precisely my main argument.
Hmm… I would have pegged your main argument as being more related to overpopulation than blind spots specifically. Although.… I admit I skimmed a little. X_X
At least I managed to pick up that it was a critical part of the article!
Now that I think about it… I’m not actually worried about overpopulation/resource collapse, but I am worried about LessWrong being willfully ignorant without intending to be so. I guess I really dropped the ball here in terms of … Wait, something about the article made me skim and I didn’t catch it on the first pass. This is intriguing. It’s been a long time since I’ve had this many introspective realizations in one thought train. I have to wonder how many others skimmed as well, what our collective reason to do so was, and what is the best route to solve this problem.
...Or else you just misspoke and resource collapse is actually your main concern/argument.
But even in that case, I skimmed, and I can see skimming being a problem. Yay for orthogonal properties!
All this from the mere statement of accuracy. …Did trying to avoid inferential silence play any role in your making this comment?
I think the chances of a significant portion of LessWrong not having thought about the issue are low. Population growth is a well-understood issue compared to existential risks like grey goo.
bokov makes a series of arguments that most people have probably heard before and many consider to be refuted, and then suggests that because people don’t agree with him, they have a blind spot.
What makes you think most LessWrongers have thought about it to a degree to which the issue can be considered in the process of being solved? (For whatever needs to be done to “solve” it, whether that is “Do nothing different” or not.)
I haven’t used the word solved in the post you quote. That word misses the point. Nobody claims that the issue of climate change is solved.
The question is whether it’s useful to model issues like climate change in a way that centers around carrying capacity and ignores politics.
It looks like an “if you have a hammer, everything looks like a nail” issue. Yes, you can model the world’s problems that way, but that model isn’t very productive.
If you think about population amounts it makes sense to mentally separate different countries and continents.
Let’s say you start in the US. As an engineer you see a clear solution. We should increase the number of abortions that happen in the US to get near to the carrying capacity. If you try to push that policy you will see that you run into problems that are highly political.
The abortion debate is at the moment about the sacred value of life against the sacred value of women’s control over their own bodies. If you come into that debate and say that you want more abortions because it has utility to keep the US population down, you are not helping.
You have to remember that the US is a country where a good portion of the population waits for the second coming of Christ and thinks that the Bible says they should procreate as much as possible.
Political issues like that make reducing population growth a very different issue than getting more telescopes to detect potentially dangerous asteroids or cooling down yellowstone by building a giant lake on top of it.
It makes sense to use an engineering lens to talk about asteroids because there is no significant political group that considers watching asteroids with telescopes to be immoral. With Yellowstone you might get some people who think that you are harming endangered species that live in that area, but those are people with whom you can argue directly; they aren’t as politically powerful as anti-abortion Christians.
Another way to approach population growth is to approach Africa. Deciding as an American or European that there should be fewer Africans has issues with neocolonialism. That produces political problems.
It also turns out that increasing wealth seems to be a good way to reduce the number of children that a woman has. That insight caused Bill Gates to focus his philanthropic efforts in a way where he says things like:
You might find that GiveWell’s highest recommended charity is about malaria bed nets. Health care for the third world. Again that’s a point where we can make different arguments to encourage people to spend money on African bed nets. Saving a life for 2000$ seems to be a good argument to convince people.
GiveWell-style effective altruism is an alternative to approaching Africa with “What can we do to reduce African population as effectively as possible?”
I think that population is an area that is obvious enough that I would expect smart people on LessWrong and in the effective altruism community to not be ignorant about the topic.
If you want to get a good feel for the data about population growth I would also recommend playing around a bit with Gapminder (press play to see how the child-per-woman ratio changed over the last 60 years).
Why? It seems like your comment was intended for someone asking a different question than the one I’m asking. I’m not asking for population- or resource-usage-related arguments and reasoning you can come up with, but rather why you think a moderate portion of LessWrong and the effective altruism community have put sufficient thought into it that it no longer needs to be discussed in contexts like LessWrong. I had thought it was obvious that this was the point I was questioning, and so would be the focus of any question I asked in response to your response, but it seems it was not as obvious as I thought.
Basically: Why do you think population growth is an “obvious” issue?
Do we? How many resources are allocated to the risk of gray goo or, say, the Yellowstone supervolcano?
Talk is cheap.
Yes, it is, but talk and attention are the only resources LessWrong reliably provides at the moment.
Well then, bokov is talking about the overshoot so we’re good, right?
Depends on how motivated others will now be to bring up this issue.
But others are already allocating resources to the overshoot, in my opinion way more than it deserves.
In a useful way? Quite frankly, I don’t trust very many people at all to spend their resources in useful ways. And this includes people who frequent LessWrong.
I realized as I was writing this that the overshoot is kind of like AIDS or aging—it doesn’t kill you directly, just predisposes you toward things that will.
I’ll edit it so the union of the whole set of conditional-on-overshoot disasters plus “other” is the likelihood of overshoot itself.
OK then, you put forward an estimate: an overshoot is very likely. Now what makes you think so?
This looks incoherent. You call overshoot “very likely” and “near extinction due to climate change conditional on overshoot: somewhat likely”. Even if I interpret those as .7 and .2 respectively, we wind up with an unconditional probability of at least .14, which I hope is not what you mean by “theoretically possible but unlikely”. If that is what you meant then I do not understand how the world looks to you, or why you’re not spending this time fundraising for CSER / taking heroin.
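The arithmetic in the comment above is just the product rule for probabilities, with the verbal labels mapped to the commenter’s own illustrative numbers (.7 and .2 are the interpretations proposed in the comment, not measured values):

```python
# Lower bound on the unconditional probability via the product rule:
#   P(extinction) >= P(extinction | overshoot) * P(overshoot)
p_overshoot = 0.7          # "very likely", commenter's example figure
p_extinction_given = 0.2   # "somewhat likely", commenter's example figure

p_extinction_lower_bound = p_extinction_given * p_overshoot
print(round(p_extinction_lower_bound, 2))  # 0.14
```

It is only a lower bound because extinction could also occur through the no-overshoot branch, which would add a further non-negative term.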
Classy.
I have only 5 bins here with which to span everything in (0,1): theoretically possible but unlikely, somewhat likely, likely, very likely, and inevitable. The goal is a rough ranking; at this point, I don’t have enough information to meaningfully estimate actual probabilities. You have a good point, though: it would be more self-consistent to say conditional on no overshoot for the first set.
If flaming me is what it takes for you to think seriously about this, then maybe it’s worth it.
Because ‘Seek and ye shall find’ is not always true. If a resource becomes so scarce as to cripple the functioning of society, it may be already too late to start searching for an alternative.
It is best to have foresight and start investigating solutions to problems before they become, you know, actual problems. It’s irresponsible to do otherwise. As mwengler pointed out, markets can’t see that far into the future.
It is, of course, best. The problem is that we do not have it. At least for the last few centuries humans have shown remarkable lack of ability to predict where the twists and turns of technology will lead.
To use the classic example,
In 50 years, every street in London will be buried under nine feet of manure. -- The London Times, 1894
This isn’t about technology though. It’s about resources. The London Times correctly identified that if the proliferation of horses were to continue, the streets would be covered by manure. The solution to this problem was to stop using horses, which is exactly what happened, although of course not everyone was able to see it at that time.
And that’s precisely the point. People investigated alternative means of transportation while they still had the luxury of doing so. Not because of fear of manure, of course, but because they realized that horses were non-ideal in many other ways. Imagine what would have happened if the streets had been covered by nine feet of manure, and only then people started thinking about ways of getting out of that mess (literally).
Normal market mechanisms: Imagine a world nearing material limits, and a population where each individual owns the same fraction of those materials. If half the population has one child per couple and the other half has four children per couple, the supply/demand changes drive the price of human capital in terms of materials way down, the first half of the population sees their already-larger inheritances grow further in value, and the latter half of the population finds themselves unable to afford a second generation. Near material limits there is an incentive to limit your own reproduction.
Normal political mechanisms: When half of our population has one child per couple and the other half has four children per couple, the first half of the population is outvoted four-to-one, and the inheritances and their gains get redistributed to support the poor majority, until material to redistribute runs out anyway. Although the society as a whole has an incentive to limit its reproduction, this incentive takes the form of a billion-way Prisoner’s Dilemma, and each member of the society has a strong incentive to defect.
(There are strong problems with the above reasoning. For instance the discussion of “incentive” is from the perspective of genes rationally trying to maximize their population, which is a dubious description of human behavior at best. But most “market incentives will fix it” arguments assume rational reactions to incentives to begin with, so if those aren’t present anyway then Q.E.D.)
The technical term for that is “starving to death”, so let’s call it what it is. Don’t worry, I won’t judge you—I’m a pragmatist deeply skeptical of prescriptive morality. I respect someone who frankly doesn’t give a damn about the less fortunate more than I respect someone who pretends to give a damn. From a pragmatic point of view, though, that starving half will resort to the other standard response to scarcity: violence.
Which brings us to...
Do we live in a world where normal political mechanisms operate? Uh-oh, we do. Does the impact of political mechanisms outweigh the opposite impact of market mechanisms to limit family size? Uh-oh, we don’t know.
So, for you to continue believing that the markets will prevent overpopulation without any specific person thinking or doing anything about it, it’s your turn to come up with estimates of the net effect government has on technological progress and population growth, and the net effect a pure free market would have on technological progress and population growth.
Can’t we imagine something either more useful or more fun? This doesn’t resemble reality at all so I don’t see the point.
I’ll freely concede that you can imagine a world where the markets won’t work at all—so what?
No they don’t. These arguments point to the empirics of human history. Humans are not rational and yet markets work (again, empirically) remarkably well.
I was intending to offer a deliberately oversimplified example to illustrate a much more general point.
If your physics professor talks about conic section orbits, it doesn’t mean that he’s an idiot who thinks there are only two astronomical bodies in the universe, that astronomical bodies are point masses, that general relativity doesn’t exist, that quantum mechanics doesn’t exist, etc.
(I am not suggesting that you are currently enrolled in a physics class, but am again using a simple example to illustrate a more general point.)
In both cases, do you see the point I was intending? This isn’t a rhetorical question: I would be happy to try to “imagine something more useful” if it’s actually necessary to communicate, but I’m afraid I get the impression that the failure to communicate here is that you’re not trying to meet me halfway.
Your specific point before was not “empirically, markets work, therefore by induction they will always continue to work”, it was “prices create incentives to use less of scarce things”. I’m pointing out that when things necessary to survive and reproduce become really scarce, we empirically stop using markets and start using politically-directed rationing. Even attempts to trade temporarily scarce necessities at non-shortage-inducing prices are vilified as “profiteering”. Do you disagree?
No, I don’t. The point that human societies can and sometimes do override or simply just ban the markets is rather obvious and I fail to see the relevance to the topic under discussion.
Not only. Notably prices create incentives to use substitutes, as well as invent and produce new and better substitutes.
Sometimes we do and sometimes we don’t. All politically-directed rationing is invariably accompanied by a black market anyway. And I still don’t understand your point.
Well, “I don’t understand your point” is a big improvement over “you’re just talking about something imaginary”, so let’s start from here.
Let’s see what the remaining inferential distance might be composed of:
Is there even such a thing as “overpopulation”? I.e. is it even possible for humans to reproduce faster than they can increase their effective resources to support the increased population? I’d say “yes”, but it’s starting to sound like your answer would be “no”.
If we were in an “overpopulated” world, what would the market solution be?
What would actually happen in that world when we tried to implement the market solution?
Possible, yes. But there are two further questions: is that likely? and would resource constraints cause a “soft landing” for the global population or will there be a massive crash to numbers far below what the resources can sustain?
Make it more expensive and less valuable to have children.
No idea, depends on the particulars. Not to mention that the “market solution” generally doesn’t need to be implemented—all it needs is for the government not to interfere.
It looks like we’re closer than I feared—I’d agree with your first two answers, and “no idea” is hard to disagree with on the third. I’d have to also answer “no idea” to “is that likely?”, I’m afraid. If really pressed for an answer I’d say it’s probably not likely to happen (sub 50%), but it’s likely enough (greater than 5%?) to be worth worrying about, considering the magnitude of the consequences.
Answering your second question then only depends on a couple issues:
First: is it possible to “save” and “spend” wealth? I.e., can we turn long-term capital into short-term consumables and vice-versa? I’d say the answer is “yes”, there are lots of ways we can divert resources between luxury/maintenance/upkeep and immediate survival. This is usually a good thing, since it means that we can accumulate savings against disaster in a way that isn’t just pushing accountants’ numbers around or shifting wealth between demographics… but it also opens up the possibility of a massive crash, in which it’s possible to “eat our seed corn” and continue to grow and survive in an unsustainable way which can have sudden discontinuities when the savings start to run out.
Second: what would actually happen when we allowed the market solution to occur? (is that better language? you’re right that “tried to implement” had some dubious connotations)
“No idea” is a good honest start, but it’s not hard to make a few educated guesses. If poor kids are too numerous for their parents and voluntary charity to pay for, but there are still wealthy people around too, what happens? We might ask for “the government not to interfere”, but even if you can make a case for that being the correct default normative expectation, is that truly your positive expectation? Is this a world where governments typically don’t interfere with markets, and they won’t let some hungry kids stop them from sticking to those non-interference principles?
I am considerably more doubtful about that. A resource shortage is about lack of particular molecules or atoms (or, maybe, cheap enough energy). Long-term capital mostly exists as financial instruments, land, buildings, and such. As the old saying goes, you can’t eat money.
The usual. That’s the normal state of being for most of humanity’s history. It’s happening right now—look at Africa. All historical lessons (about the comparative utility of markets vs direct government intervention) are fully applicable.
Does “let the market handle it” apply to every risk equally?
If not, what distinguishes risks to which it applies less? What do we do about those risks?
If it applies equally to all risks, then either it’s pointless to talk about risks because the market will handle them all the way we would like them to be handled, or it’s pointless to say that the market will handle them because that’s already implied and the fact that we still consider them risks means we’re not completely confident that the market will handle them the way we would like.
Of course not. Don’t erect silly strawmen.
Don’t erect silly strawmen. The market provides no guarantees. There will be winners and losers. On occasions the market will be spectacularly wrong. So? If you have a provably-better alternative let’s use that. Do you happen to have one?
No, I wish.
This is the stage where I’m hoping to collaboratively identify what the relevant unknowns are and what bounds we can assign to them. The next stage is to brainstorm what solutions might work and see if they cluster into any particular regions of the solution space. Also, to break them up by scale—individual, local, national, global. Come up with recommendations for actions one can take immediately to implement the first two (i.e. how to make the place where you live more likely to be a beacon of civilization). If we have some really smart/entrepreneurial LessWronger get interested, possibly come up with individual/local actions that scale to have national/global impact if they catch on with enough people.
This is a big problem and I’m not under the illusion that it’s going to be solved by this post. But we have to start somewhere. And if the best problem-solvers are ignoring this problem because the moral scolds and luddites have pissed all over it, maybe changing that state of affairs is a good place to start.
“The market will handle it” is a curiosity killer rather than an explanation, no different from “God will provide”. How will the market handle it? Why hasn’t it done so already? How long will it take? Is it possible for the market to fail? How do we estimate the chances of failure?
If a bias or blind spot is widespread enough, the market will not be immune to it either.
The market is just another optimization process. A useful and successful one most of the time, but not to be blindly trusted any more than any other optimization process (especially an optimization process that is not understood).
Markets are prone to optimizing over a short time window. The human race won’t just spontaneously opt to take the equivalent of a 10-year pay-cut to avoid dying horribly on year 11.
Markets are not invincible. They can fail to keep up with events. They may systematically under- or over-value certain items. There might simply not be an adequate solution in the part of the solution-space that is accessible by a market.
Hiding in the phrase “more intense search for its replacement” is an unknown unknown. If estimating how and whether the market will handle the problem depends on estimating the outcomes and timescales of ongoing research, that doesn’t inspire much optimism.
Am I saying that the UN or some government should step in with resource quotas and compulsory sterilization? No, because centralized bureaucracies have an even worse track record. I’m just saying that magical thinking compromises our problem solving abilities, and “we are in trouble if one of us doesn’t soon come up with a better plan to accelerate technology and/or limit population” is a more productive state of mind than a comforting black box like “the markets will handle it”.
How about AI? Do you think normal market mechanisms (the more people want to not be turned into paperclips the larger the incentive to make a friendly AI) can be trusted to handle the friendly AI problem?
Nope, it’s neither (unless you think of the market as magical, a surprisingly popular attitude).
In this context it’s a forecast, a prediction of what will happen when some resource X becomes scarce. The market is a particular mechanism in a human society and “the market will handle it” is an assertion about allocation of resources under specific conditions.
No one is saying that the markets are “invincible” or any other nonsense like that. However if you look at empirical evidence aka history, the markets helped human societies adapt and flourish in a wide variety of conditions, most of which were characterized by scarcity of some resources.
Given this, I am happy to have “the markets will handle this” as my prior.
If you think that in this particular case there will be a market failure, please provide arguments and evidence. If you think that there is a better alternative, please name it and again, provide arguments and evidence why it’s better.
Otherwise you’re just engaging in a nirvana fallacy.
So? The two broad defaults for responding to scarcity are trade and violence, history has plenty of examples of both, and polities that were successful at either will be more thoroughly documented in history due to survivor bias.
Nevertheless, every complex civilization previous to ours eventually failed and collapsed. If markets explained their success, do market failures explain their demise? What reason do you have to be so confident that ours has nothing to worry about?
Well, the economy of the Roman Empire collapsed when Diocletian undermined the markets by imposing price controls.
What do you mean by civilisation in that sentence? Are you referring to Fermi or are you talking about human civilisations?
In this context, individual human civilizations.
Humanity is still here and looks pretty complex to me :-) Individual civilizations come and go, sure, but that’s not the question we’re discussing. If e.g. the Western civilization collapses, there will be others ready and willing to take its place.
Actually, for the first time in history, we might have achieved a global civilization, as well as a global single point of failure.
How well did the market handle real estate in the mid-2000s? How well did it handle tech stocks in 1999? Tulip bulbs back in the day?
Who thinks it is magic?
Sumner likes to point out that in many countries which were claimed to be ‘bubbles’, the bubble never popped. Also true of many regions in the USA—how’s that SF bubble going?
How high are the stock prices of Amazon, Google, and Apple now? Oh look, Bitcoin is at $160, how did that happen when everyone knew it was a bubble which popped?
Everything you know about Tulipomania is false or incomplete. I suggest reading Famous First Bubbles.
Bit of a glib response. (One could ask, equally rhetorically, “How high are the stock prices of Tiscali, lastminute.com, and InfoSpace/Blucora now?”) But since you elaborated below with actual arguments I won’t press this point.
Does the book go beyond Garber’s papers on tulipmania? My reading of Garber’s argument in those papers is:
Most people get their ideas about the tulip market from Charles Mackay, but he plagiarized his account, and it ultimately comes from “three anonymously written pamphlets”.
Mackay exaggerated, among other things, the amount of national-level economic distress resulting from the tulip mania.
“Mackay did not report transaction prices for the rare bulbs immediately after the collapse”, which are the prices one would need to establish the popping of a bubble. Instead he quoted high prices from before the bubble popped, and prices “from 60 or 200 years after the collapse”. But what he found could be consistent with the prices accurately reflecting changes in fundamentals. Why? Because a new & attractive variety of flower might gradually come into fashion (raising its price) and then suffer a glut over time as more bulbs become available (lowering its price).
One can confirm that’s how things normally worked by looking at changes over time in prices long after the bubble. Even in non-bubble times, bulb prices would consistently start high and then fall steadily.
I don’t disagree with those claims, as far as they go, but highlighting a lack of conclusive evidence for a bubble doesn’t mean there wasn’t a bubble. Even Garber’s seemingly damning review of the price data doesn’t mean much, because Garber (like Mackay) fails to quote prices from immediately after the collapse.
What Garber actually does is calculate that tulip bulb prices depreciated by 24%-76% per year over the 5-6 years after the peak. He compares that to the 2%-40% annual depreciation of bulb prices in the next century, says the earlier rates are only modestly higher than the later rates, and so there wasn’t a bubble-indicating deviation from normal depreciation.
But Garber would likely have seen the same thing even if there had been an abrupt bubble pop. Suppose a tulip bulb’s price peaked at 1000 guilders, crashed to 200 guilders within a week, then sank gradually to 100 guilders over the next five years. An economist who, knowing only the start & end points, interpolated to estimate the annual depreciation would (if I’ve done the sums right) get a 37% rate, which gives no sign of the initial crash. Observing a normal depreciation rate isn’t good evidence against a bubble; one has to know prices closer to the event.
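The interpolation effect described above is easy to check numerically. This sketch uses the hypothetical guilder prices from the comment itself (peak 1000, abrupt crash to 200, gradual slide to 100):

```python
# Hypothetical prices from the comment: a bulb peaks at 1000 guilders,
# crashes to 200 within a week, then sinks gradually to 100 over 5 years.
peak_price, end_price, years = 1000.0, 100.0, 5

# An economist who only knows the two endpoints infers a smooth annual
# depreciation rate r from:  end = peak * (1 - r)**years
r = 1 - (end_price / peak_price) ** (1 / years)
print(round(r * 100))  # 37 (% per year) - the abrupt crash leaves no trace
```

The inferred ~37% annual rate sits comfortably inside the “normal” depreciation ranges Garber quotes, even though by construction 80% of the loss happened in a single week, which is the comment’s point: endpoint interpolation cannot distinguish a bubble pop from steady decline.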
Does Garber’s book have those data, or at least a novel argument missing from his papers?
Yes, but it hopefully wakes up people who glibly point at one stock or one price change as proof positive of bubbles: the claim for bubbles is a long-term statistical claim, and cannot be supported by simply going “Tulips!”
I don’t know. Not really interested in taking the time to compare them in detail. Presumably the book form includes much more detail than space-restricted papers.
Given how many people cite Tulipomania as an irrefutable smackdown in these sorts of discussions (‘Bitcoins are worthless—at least you could plant tulips!’), learning that there is minimal evidence for what is popularly considered to be a large, irrefutable, historically established, unquestionable bubble should badly damage one’s confidence in other claims relating to bubbles, since it tells one a lot about what passes for evidence in those discussions.
It’s been a while since I read the book, but doesn’t he do exactly that and does compare depreciation from peak prices in places? For example, on pg64 of my copy:
I’d hope so, although I can imagine an academic padding things out with irrelevant side detail or other yakkety-yak-yak. In those cases one may as well stick with the papers.
Not based on that quote. That’s the same reasoning he uses in his papers. (Your quoted bit appears, almost word-for-word, on pages 550 & 553 of the “Tulipmania” paper.) The flaw is the same; estimating a depreciation rate based on data points 5-6 years apart won’t tell you whether there was an abrupt dip that took only a few days or weeks.
It is a good example of why one shouldn’t take people’s claims that something’s a bubble at face value. Although I don’t think the magnitude of the tulipmania has much bearing on whether tech stocks, Bitcoins, or real estate are/were bubbling; for those last three things, there are time series data that’re a lot more relevant than what happened to Dutch tulip bulbs 376 years ago.
(I also wonder whether I updated too much on the basis of one economist’s contrarianism. Really, I went too far in my last comment by referring to “a lack of conclusive evidence for a bubble” — it’s not as if I’ve looked for that evidence. I’ve just taken Garber’s word for it.)
But comparing peak prices to prices years later does tell you that any ‘abrupt dip’ must have been compensated for by other price increases or maintenance of prices. If prices, from the peak, abruptly go down and then abruptly go up, and then follow their usual depreciation curve, that’s not a very bubbly story.
Sure. It drives me nuts how people constantly bring up Tulipomania. Whether or not one agrees with Garber’s findings, it should still be obvious to them that arguing about modern finance based on Tulipomania is like trying to criticize American government based on ancient Greek politics—the sources are bad and don’t answer the questions we want to know, and even if we did have perfect knowledge of what happened so long ago, the circumstances were so different and the world was so different that it can tell us very little about vaguely similar modern situations.
Maybe! I wonder that sometimes myself. But honestly, Tulipomania has the feel of one of those parables which are too good to be true, so I don’t expect a later economist to come along and say ‘everything you thought you knew from Garber is false! yes, the stuff about tulip-breaking virus is false! and tulip bulbs don’t depreciate extremely fast! the futures contracts weren’t canceled! there were no extenuating circumstances like plague!’ etc
I don’t follow. Garber’s data are consistent with the scenario I sketched in the penultimate paragraph of this comment, where I assume away any compensation for the initial dip.
Yeah, Garber’s data are also consistent with an initial rebound.
Fair enough.
Plague? Now that’s something I don’t think he mentions in the papers. (Must...resist...urge to borrow...yet another...book.)
No, you don’t. You bury it in the ‘sank gradually’ part:
You can get an abrupt pop inside a normal-looking beginning/end comparison if something compensates for the pop, like another rise (unlikely) or prices then falling slower than they normally would (‘gradually’). The ground lost in the pop is then made up later.
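This is easy to see with a toy numeric sketch (the numbers below are invented for illustration, not Garber's data): two price paths can share the same peak and the same endpoint years later, while one declines steadily and the other crashes abruptly and then drifts down more slowly.

```python
# Path A: "normal" depreciation, ~10% per year for 10 years.
smooth = [100 * 0.90**t for t in range(11)]

# Path B: abrupt 60% crash in year one, then only ~1.5%/yr decline,
# chosen so the two paths end at nearly the same price.
pop = [100.0] + [40 * 0.985**t for t in range(10)]

# Endpoints are nearly equal, so a peak-vs-years-later comparison
# cannot tell the two stories apart...
print(round(smooth[-1], 1), round(pop[-1], 1))

# ...even though year-one behavior is drastically different.
print(smooth[1] / smooth[0], pop[1] / pop[0])
```

The slow-decay rate in path B is tuned by hand here; the point is only that an abrupt pop plus a slower-than-usual decline afterwards is invisible to an endpoint comparison.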
His book’s capsule summary of that bit goes
It’s the topic of chapter 5, “The Bubonic Plague”.
(It’s on Libgen, and isn’t a very long book.)
Ohh, I see what you’re getting at. I’d interpreted “compensation” more narrowly as something halting or outright reversing the fall in prices, not merely decelerating it.
Yeah, my scenario implies an unusually slow price drop after the initial speedy crash. That wouldn’t surprise me in the wake of the unravelling of a self-fulfilling speculative mania.
Good to know, thanks. Added that to my mental things-to-look-at-on-a-rainy-day list.
gwern, I find your position against bubbles to be incredibly unlikely, and that is post my studying economics and finance informally for the last 3 decades. But you are gwern, whom my post (as opposed to my prior) warns me against dismissing.
If you can suggest any reading that you found particularly compelling against the usual interpretation of market manias, I’d love to take a look. I will google Famous First Bubbles, haven’t done that yet.
As far as the real estate bubble goes, first I would point at Mortgage Backed Securities (MBS) rather than the direct real estate market. These were rated AAA, insured for less than a penny on the dollar, and purchased by ancient and venerable banks and others. And then in 2007/2008 they almost uniformly, as a class, blew up. Returned pennies on the dollar. Caused multiple firms and banks around the world to go bankrupt. Resulted in governments around the world pumping trillions of dollars of liquidity into the system, in a process analogous to foaming the runway when a plane crashes. And the essence of it was predicted publicly by many of the smartest minds in finance and investing. I am thinking of Buffett and Munger referring to MBS derivatives as Weapons of Financial Mass Destruction BEFORE the blowup, and I have in print, in a book published before the destruction, a speech Munger gave in I think 2002 about how there was going to be a tremendously horrible event because of derivatives “in the next 5 to 10 years”. While MBS were hot, they were so in demand that brokers such as Salomon would create “synthetic” MBS, which were essentially just well-documented bets that would pay off exactly as an MBS would over its life, but were made up because there was still demand for MBS even after the last homeless person with a pulse in the US had been given a 100% no-doc mortgage to buy a house which would not be sellable for even 80% of what was financed two years later.
Is even this not a bubble? Not the market chasing a dream instead of a business proposition and trying to fly up to heaven with the dream and failing?
The NASDAQ composite peaked in early 2000 at over 5000. More than 13 years later it is STILL not back up to that level. Perhaps at least some of the investors in AMZN and AAPL in 1999 were not caught in a bubble, but what about the bulk of the money, about 70% of whose value evaporated in less than 3 years, and which on the whole has not crept back to even yet? And the NASDAQ composite is not the only place to find this result: CSCO, INTC, and QCOM were all bid up much higher in 2000 than they are selling for even now. Proof that they were overvalued in 2000, no? By a factor of a few? I’d like to know the error I make when I think of this as a bubble, as momentum overshooting value and rationality by a factor of a few.
No, the AAA rated MBS did very well; 90% suffered no losses. It was the ABS CDOs (Asset Backed Security Collateralised Debt Obligations) that did badly.
source: Why did so many people make ex post bad decisions
Indeed, the data you cite show that it was Aaa rated CDOs that had default rates of about 90%. CDOs were backed by mortgages as well.
Extending what you say about MBS to some more accurate statements: the AAA rated MBS had about a 9% or 10% default rate out to 4 years. There are 26 more years of life in those mortgages in which they can still default, so the cumulative default rate can potentially go even higher.
Characterizing a 9% default rate on triple-A securities as “did very well” is quite wrong. Historically, triple-A corporate bonds default at a rate of 0.6% or less, and triple-A municipals default at a 0.00% rate. A 9% default rate is 15 times higher than the ratings were intended to suggest. And the Baa MBS defaulted at over an 80% rate, more than 15 times the ~5% rate on Baa corporate bonds prior to 2007.
The ratings were CRAP, suggesting a default rate which should have been 1 to 2 orders of magnitude lower.
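A quick sanity check of the ratios above, taking the cited default rates at face value (I have not verified the underlying numbers):

```python
# Cumulative default rates as cited in the comment above.
aaa_mbs, aaa_corp = 0.09, 0.006   # ~9% AAA MBS vs <=0.6% AAA corporate
baa_mbs, baa_corp = 0.80, 0.05    # >80% Baa MBS vs ~5% Baa corporate

print(aaa_mbs / aaa_corp)  # roughly 15x worse than the rating implied
print(baa_mbs / baa_corp)  # roughly 16x worse
```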
The ratings aren’t intra-class independent. Which is perfectly normal; junk corporate failures are correlated too.
(Forgive me when I read this mentally as “And that is post my being a random Internet pundit for decades”.)
I don’t think it’s very useful to define a ‘bubble’ as “any large price increase followed by a price decrease”.
I’d rather use a more powerful EMH-focused definition: a bubble is large price increase which represents an inefficiency in the market which is predictable in advance (not in hindsight), exploitable, and worth exploiting. Merely pointing out some disaster, or some large price decrease, does not demonstrate the existence of bubbles, because that observation could result from unavoidable or unobjectionable causes like the inherent consequences of risk-taking, mistaken analyses, perverse incentives, etc.
People make mistakes; disasters happen. If they never happened, and AAA never went bust, couldn’t one make a lot of money by exploiting that inefficiency in the market and picking up pennies in front of the non-existent steamroller?
How much money did Munger & Buffett make off their shorts of housing, exactly? How much has Paulson made post-housing? (Does making billions off housing, and then losing billions on gold & China, look more like skill & inefficient markets, or luck & selection effects?) How many economists did one hear of post-2008 who suddenly turned out to be Cassandras? You can go onto Bitcoin forums and tech websites right now and watch people predict 20 out of the last 3 Bitcoin ‘bubbles’. Finance is just the same. Post hoc selection of people warning of something vaguely similar (derivatives? that’s a rather roundabout way of predicting a housing bubble, which could have been powered by all sorts of financial instruments, not just derivatives) is worthless.
Housing prices in SF, Australia, London, Canada, Manhattan, China are holding steady at bubblelicious prices or trying to fly up to heaven. (Again, I borrow this point from Sumner.) Perhaps they are using technology from the Apollo program.
Why is this not just mistaken beliefs about the value of those loser companies and about high-tech business models? (Notice how the big IPOs lately all have pretty clear revenue streams from advertising.) How could one know in advance that Pets.com would not be Amazon.com, or vice-versa? How does a VC know which of his investments will go bankrupt and which will own an industry? Tell me: if tomorrow a break is discovered in the core Bitcoin protocol/cryptography and the price goes to $0.00, was Bitcoin a bubble or a mistake?
To summarize: I think you are grasping at surface features, not thinking about the anti-bubble arguments (or are simply unfamiliar with them), and are engaged in post hoc analysis, where you select out of the buzzing hive of argument and disagreement a few strands which seem right to you with the benefit of many years of data.
OK, you like EMH so much that you think 9 students from one professor all outperforming for decades is cherry picking and data mining. I think it is finding a small group of people who claim to be learning from someone with empirically verified methods, and who, when they apply those methods, get the predicted results consistently for decades. I think characterizing this as cherry picking and data mining is more likely to be a bad explanation for what is being seen than mine is, which is that they are doing what they say they are doing, and it is working.
Even a broad index fund is “managed.” The conditions for being listed are quite stringent and involve “survival bias” filters: if stocks fall below a certain value they are delisted. I actually don’t think the difficulty of beating the S&P 500 is so much a proof of EMH as a proof that very straightforward standards, applied on a slow timescale, capture almost all of the value available from managing a portfolio. I think people investing more broadly than the S&P 500, people investing with people who come into their living rooms seeking “angel” investors, do a lot worse. If the market were efficient in principle, then one wouldn’t need the S&P 500 or even the NASDAQ seal of approval to wind up with results at the market mean. If using your brain is required to pick the S&P 500 over the living-room pitch man, then in principle, using your brain is required to get reasonable results.
I think if a proposition of efficiency is to be proved true, it is not by looking at the average performance of every Tom, Dick, and Harry and noticing that, with mathematical necessity, they tend to have the same mean as the market which they of course comprise. I think a proper proof of efficiency requires showing in detail that there are no consistent outliers of high performance: that funds with decades-long records of outperformance occur at the proper rate to be consistent with pure luck. Indeed, to show that while it appears that some people predictably outperform, for all these actors past performance is no predictor of future performance, and that the hangers-on who joined Buffett in the 60s or 70s or 80s or 90s after seeing his record THOUGHT their outperformance was due to their identifying a winner, but that it was consistent with just pure dumb continuous luck.
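The luck-vs-skill question can at least be framed quantitatively. A minimal sketch, with made-up numbers for the fund count and horizon: under a pure-luck null where each fund independently beats the market in any given year with probability 1/2, the expected number of perfect multi-year streaks falls off exponentially.

```python
def expected_lucky_streaks(n_funds, years, p=0.5):
    """Expected number of funds that beat the market every single year
    purely by chance, under an independent coin-flip null hypothesis."""
    return n_funds * p ** years

# With 10,000 hypothetical funds, roughly 10 perfect 10-year records
# should appear from luck alone; a perfect 30-year record should not.
print(expected_lucky_streaks(10_000, 10))
print(expected_lucky_streaks(10_000, 30))
```

So a handful of 10-year streaks proves little on its own, whereas several multi-decade records coming out of one small, pre-identified group is exactly the kind of observation that has to be tested against this null rather than waved away.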
I think there is a gigantic difference between “we cannot prove that there is alpha” and “the most likely explanation of what we see is that there is no alpha.”
As to identifying bubbles that were not bubbles, the only bubbles I have identified are tech and real estate. I identified a “bubble” in a small-company stock (Conductus), where a company with no real products generated excitement by talking about how they were getting into the cellular industry, driving their stock price from 3 to about 80 before crashing back down to 3. I shorted them at about 70 and took my returns a few weeks later at 60 or so; they proceeded to rise to 80 and then, within a year, drop back to 2.5. I identified another mispricing in NHCS, where numerically they were spinning out a company which was being completely undervalued in their current stock price. I asked others “can this really be true?”; they said only that, in general, yeah, stuff like that happens. I bought a few thousand dollars’ worth and, a few months later, made the 20% or so return that seemed to be lying on the table.
The main sense in which the market seems efficient is that prices are predominantly set using sensible analyses, presumably because those who do not follow a proven technique for picking sensible prices do not survive; the main component of market efficiency is that the processes for beating the market are broadly exercised and dominate the market. So it is hard to do better than free-riding on that. But does it turn out that some people do better at that process than others? I think the best explanation for what we see is that yes, some do, and that their being a smallish minority is not because they are just the tail of a random distribution, but because, of mathematical necessity, beating the average significantly can only be done by a minority.
Anyway, thanks for sticking with it and explaining your position to me.
To expand even further on my critique: you are placing a huge amount of weight on 9 students, of unknown veracity, out of an unknown number of students (itself out of an unknown number of millions of people who have tried to beat the market over the past century), who have not released audited records, much less ones comparing them to indexing, who started half a century ago (which is the investing dark ages compared to what goes on now, in 2013), and at least one of whose successes seems to be partially explained by non-efficiency-related factors?
This is roughly as convincing as Acts of the Apostles documenting the 12 apostles’ successes in beating the (religious) market and earning converts.
Those angel investors are forfeiting diversification and so can easily earn below-average returns. EMH doesn’t mean that you cannot deliberately contrive to lose money.
I think in an adversarial environment where everyone claims to be able to beat the market and says you should give them your money, and where there are compelling theoretical reasons that any beating of the market would wipe out whatever advantage was possessed, there is not such a gigantic difference.
Congratulations on your day-trading success. You know what happens to most of them, right?
Under EMH it is pretty hard to deliberately and consistently lose money. It’s very easy to get additional risk (e.g. by not diversifying), but I don’t think EMH envisions assets with negative expected return.
Mm, the way I remembered it was that by not diversifying, you were taking on additional uncompensated risk; not diversifying wasn’t completely neutral, expected-value-wise. (Also, there are obvious ways to guarantee losing money: trade a lot. The fees will kill you.)
Yep, that’s what I said—that you can easily get additional risk by not diversifying.
And the trading fees are outside of EMH—there are certainly plenty of ways to reliably lose money in the real world, but not in the EMH world.
I said ‘uncompensated’ risk.
EMH doesn’t say anything about uncompensated risks.
To get to risk premium you need something like CAPM or APT which are a different kettle of fish.
Actually the records ARE audited, they ARE compared to indexing, and those records and comparisons are reported by the original article I mentioned, which I finally link to here.
If a professor’s students dominate some part of engineering or biology or chemistry, it is generally taken as evidence that the professor was teaching something real. I suppose if we had an Efficient Knowledge Theory we would understand that going to Caltech or MIT was as wasteful as picking up $20 bills on the sidewalk (which don’t exist in a classic EMH joke).
Should we be questioning whether a good education in philosophy or math or physics or engineering or biology or… is just a mismatch between the power of random chance and the human bias towards seeing patterns? Or is there something special about learning how to value companies that puts it in a category of analysis that is different from all other observations of the effects of knowledge?
In any case, the article linked discusses the randomness hypothesis extensively pointing out among other things that the various investors reported upon had exceedingly small amounts of overlap in what they actually invested in.
Gwern, these comments are not so much aimed at you, you have obviously been down these roads and decided which way you would turn. They are aimed at anybody reading this who is still not sure about EMH. The article linked is excellent and written by a guy who walks the walk better than anybody else in human history (so far).
I don’t see any mention of how they were audited (Buffett merely says that they ‘were audited’, with no mention of by whom, when, what the audits said, or whether he saw the results, and offers as reassurance that checks were paid for the appropriate amounts, which is not my problem here). And if you really want to nitpick: Buffett does not talk about ‘9’ students. He actually talks about 4 people who worked for Graham, tells us that ‘it’s possible to trace the record of three’ (well, there’s some selection bias right there...), and does not explain how the 3 partners did (more selection bias). Some of his other examples are questionable at best, including his very good friend Munger, and including two funds he ‘influenced’ (while disclaiming that he might have influenced any other funds and that this isn’t cherrypicking, which I don’t understand how he can honestly claim, since he cannot know for sure he has not similarly influenced any others). He reports different metrics for different examples (why is Munger compared against the Dow while others are compared against the S&P?), does not compare against an index at all in one case (Table 8), and some examples do not beat the comparison index at all (Table 9: Becker underperforms the S&P by 3%).
Buffett doesn’t dominate the markets, and the proper comparison is to ideas, not students: if a single professor’s students dominated, I’d be more inclined to suspect corruption or logrolling, or the professor being a genius at academic infighting and bureaucracy...
Markets are very different from electronic circuits or particle physics or philosophy or engineering. Circuits don’t care if you found a more efficient way to design them. The properties of steel will not change when you discover it lets you build profitable bridges.
Er, yes, there is. That’s kind of the point of the efficient markets concept! Markets are unusual and special in that the attempt to find predictable regularities leads to the exploitation of the regularities and their disappearance. (Eliezer describes this as “markets are anti-inductive”, which is not wrong, but I’m convinced there must be some more intuitively understandable phrase than that.)
Is that one article really the best, solidest, most convincing criticism of EMH you can come up with, which you think will persuade people reading this conversation that EMH is to a meaningful degree false and markets are often beatable—some cherrypicked questionable examples from the dawn of time?
In its way, yes, it is. You get a guy who has impeccable credentials and a massive public record, who thinks he has been investing intelligently for decades, and who, if he IS performing randomly, is a few sigma out on the positive side of the random distribution. You get to see what he has to say about what he thought he was doing, how it fits with what a whole bunch of other people were doing, a cogent description of why it might work, and a bunch of numbers about how it does indeed seem to work. Buffett understands the idea that he could just be lucky, and he addresses it.
If you think the best explanation of Buffett’s life and results are that he has been fooled by randomness, then you are a very different judge of character and information than me or millions of others like me.
If the EMH was “the markets are really really efficient, it is hard to produce alpha (outperformance), hard to know when you have alpha, and easy to fool yourself because of human biases” then who would argue with that? Not me. But that step from “really hard” to “impossible” is unreasonable. It is not impossible to be a great baseball player. It is not impossible to consistently beat other players at poker, even though everybody playing has the same information, on average across all the hands. It is not impossible to understand 10 languages, even though to most of us most of them sound like noise.
If EMH was right, wouldn’t the smartest, most quantitative participants in the market have figured that out? Wouldn’t Renaissance Technologies have 1) failed, and 2) figured out that their failure was consistent with randomness where they thought there was order?
EMH is the hypothesis that because bunches of smart people all work to figure out what the best investment is, there can be no excess returns available to the smart people who all work hard to figure out what the best investment is. Well if there are not excess returns available to them, why do they do it?
Isn’t EMH the hypothesis that, for EVERYBODY in the market, it would be more efficient to free ride and use your intelligence on something where you can actually produce a return?
Isn’t EMH ultimately a big floppy tent held up by a tent pole which the EMH’ers deny exists?
First, let me point out that I put a fair amount of work into pointing out all those flaws and holes in your last best citation, and I’m a little annoyed that you completely ignored all of them in favor of saying “but Buffett is so high-status and I like him so much”. Yes, and George W. Bush famously said of Vladimir Putin, “I looked the man in the eye. I found him to be very straight forward and trustworthy and we had a very good dialogue. I was able to get a sense of his soul. He’s a man deeply committed to his country and the best interests of his country and I appreciate very much the frank dialogue and that’s the beginning of a very constructive relationship.” We all know how that turned out.
What you think about Buffett’s “character” is irrelevant to me, and for me, further emphasizes your extremely poor reasoning in this area—that when pushed back, you resort to one man and your beliefs about his “character”.
I don’t know why RenTech performs as well as they seem to. Presumably it’s not the same reason that Madoff was able to beat the market for so many years in contravention of EMH. Perhaps it was the same reason SAC did well (insider trading) and they simply haven’t been caught yet. Or maybe there were some inefficiencies back when they started which they erased and have since been coasting on their reputation. Given that it’s a very private hedge fund, we’ll probably never know.
Because there is demand for investment services, considerable cognitive biases at play along with wishful thinking (‘I will be the next Buffett!’), and normal profits available. After all, if no one was there taking even the normal profits, there would immediately be excess returns attracting people to the enterprise...
No.
No.
So your hypothesis is that some process ensures that all the people who must provide skull sweat to earn normal returns (and thereby create a market for everybody else that is efficient) perform equally well? It sure doesn’t work that way in any other human enterprise I can think of. Intel and AMD produce different-quality chips for laptops. At the other end of the scale, Intel and Qualcomm produce very different quality of chips for mobile. The physics department at Caltech produces a very different product, research-wise and teaching-wise, than the physics department at USC. Writer Stephen King produces a very different quality of novel than do a thousand or more other authors populating the increasingly virtual shelves of bookstores. Even here on lesswrong, some of us write wonderful stuff which is read by many and admired, while others of us struggle to get our karma up to 1000 and then hang on by our fingernails, holding back what we really want to say in order to keep it there.
So why on FSM’s tomato-colored earth would you expect these financial creators of efficiency to all get the same results from their efforts?
And when shown the spread in effectiveness in results, to deny the evidence of your own eyes and declare it all to be the distant tail of millions of coin flippers?
It doesn’t seem like a stretch to you? It doesn’t seem that the evidence is strong that the market is VERY efficient, but that the evidence is not there that it is COMPLETELY efficient?
If you want I’ll go through them point by point.
Presumably you can see the difference between your stating that these are NOT audited and then, when it is pointed out that they are, backing off to this.
The results of the audit are the results in this article. That is, these are results reported which survived the audits.
In many of the cases, the audits are “typical” of the investment advisory business, but I do not know what that means exactly. But it is a level playing field against all other investment advisers.
Also, for a few (not all) of the investors cited here, they ran public investment businesses for decades. Isn’t the preponderance of your Bayesian posterior that, if at least these members of this widely read, cited, and discussed “superinvestors” article were just wrong, this would have led to traceable reports of the discrepancies on the internet, findable with a Google search?
To the extent your objections amount to “Buffett could be an idiot and a fraud, either not knowing or not caring what it means to make these claims,” my answer is that we have 5 decades of impeccable record. If you think Buffett is that unreliable, then generally there is no arguing with you, as you will question anybody who says something with which you disagree as an idiot or a fraud. And if you cannot tell that Buffett is not an idiot or a fraud, or have not followed him well enough to be sure one way or the other, then I would suggest you have no business weighing in on the subtle question of whether the market is so efficient that the best investors in the world are just coin-flippers.
I suggest relying upon Buffett because you and everyone else out there who can read have infinitely more reason to rely upon Buffett than to rely upon me. And further, what is needed in the discussion of EMH vs non-EMH is not some brilliant new insight that I can provide that you haven’t seen somewhere else already. EMH vs non-EMH is a subtle question: is the market so efficient that Buffett can’t consistently beat it without committing a crime, either insider trading or some other information-twisting fraud, or is it just a little less efficient than that? The “insight” I have is that what pushes it towards efficiency is competing analyses on opposite sides of each trade. The “insight” I have is that every bit of evidence suggests that in business some people have superior skill or algorithms or SOMETHING and are more successful than others. And they can do it serially, command high prices in very competitive markets, blah blah blah, and show EVERY BIT as much evidence of being “real” as do great pitchers or tennis players or tenors or talk show hosts or porn stars. And your case is that no, with investing it is different: the people who do the work are so smart that they get it right in an unbeatable way, but so stupid that they don’t even realize they would be better off free-riding.
What is needed is not any great insight from one or the other of us, I don’t think, but evidence that is hard to deny that yes, the market can be beaten. I think that evidence would consist of market beaters coming from a narrowly defined group of people who set out to beat the market by studying it and letting evidence drive their future hypotheses and efforts. And what do we find in the market? Exactly that: market beaters are smart and talk in terms of causality, of what makes a business great, of where the momentum traders and the chartists missed the boat.
But my causal chain of how the market could be merely VERY efficient has been, I hope, presented by now. Let me know if it hasn’t.
As much as you might hypothesize that we will not see securities markets make the same mistakes they have made in the past, does the evidence support that? And in any case, the idea that markets do learn or have learned SOMETHING supports only the VEMH, the very efficient market hypothesis, which is not controversial. By this I mean the hypothesis that it is hard to beat the market, because all the easy stuff has been figured out and is properly accounted for by the bulk of the traded money in the market.
I tracked Chipotle stock on and off from around 2000 forward. There were two classes of shares, A and B, with the B’s trading at a very consistent 10% discount to the A’s. I would check once or twice a year to see if this difference persisted, and it did. The thing that was surprising was that the documentation of the company explained that these shares had equal value, represented identical fractions of the total company. Why they traded at a 10% difference I never saw an explanation for, and I always questioned whether there was some detail I was missing. Here, in late 2007, is documentary evidence that the difference persisted. Here, two years later, is Chipotle’s report that they were eliminating the two classes in favor of one class, and that the exchange rate would be 1:1, just as I had always believed.
In my case, I am an electrical engineer/physicist, trying to concentrate on building new cell phone algorithms for at least a few hours a day. Instead of organizing the financing to exploit this weird inefficiency at low cost, I just checked in on it every year or two. Wanting to see if I was right. Had I been a professional trader, I would have looked more at creating an arbitrage on the A and B shares and capturing the collapse of the arbitrary pricing difference. As an amateur I didn’t know if it would ever collapse, and the brokers are neither smart enough nor dumb enough to let me buy the As and short the Bs without a lot of capital in my account to anchor what they see as two uncorrelated risky bets.
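The trade described above can be sketched in a few lines (the prices are hypothetical and `pairs_trade_pnl` is my own illustrative helper): go long the discounted B shares, short an equal number of A shares, and the position pays the spread when the two classes converge, regardless of where the overall stock goes.

```python
def pairs_trade_pnl(a_entry, b_entry, a_exit, b_exit, shares=100):
    """P&L of a long-B / short-A pairs trade, `shares` of each leg."""
    long_b = (b_exit - b_entry) * shares   # gain on the B shares held
    short_a = (a_entry - a_exit) * shares  # gain on the A shares shorted
    return long_b + short_a

# B at a 10% discount; the classes later merge 1:1 at the A price.
print(pairs_trade_pnl(a_entry=100, b_entry=90, a_exit=100, b_exit=100))  # 1000
# Same trade, but the whole stock falls 20% while converging: same payoff.
print(pairs_trade_pnl(a_entry=100, b_entry=90, a_exit=80, b_exit=80))    # 1000
```

The second call is the point of pairing the legs: the position is market-neutral, so it captures only the collapse of the A/B spread, not the stock's direction. (Fees, borrow costs, and the margin the broker demands, as noted above, are what make it less free in practice.)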
My point here is this is just ONE of MANY possible stories of moderate sized inefficiencies I have seen with my own eyes. Others I have traded. Yes, every one of them is an anecdote. The plural of anecdote is not data. But a bunch of anecdotes like that creates, it would seem, market beating performance for many traders trading different stocks.
Maybe markets COULD be different from circuits and so on, and maybe as computers and AI take over more and more, they will get more and more efficient. But even then, the most powerful AIs will be beating the market, even as they essentially set the prices at levels that make it incredibly hard for anybody else to beat the market. The thing that drives market makers is not their stupidity, but their intelligence and rationality. Seems to me.
THIS is a hypothesis. And the only word in that hypothesis I will argue with is the last one: disappearance. The predictable regularities don’t disappear from the time-stream of prices; if there is a mispricing at 2:31 PM on Thursday, it is frozen there in the permanent record. What changes is how long such gaps persist before being traded away. Maybe before computers a broad class of inefficient prices were never traded away. Maybe in the 1980s a broad class of inefficiencies was capitalized upon by people with computers over the course of a two-week period. Maybe by the 2000s those same inefficiencies were traded away within hours or minutes.
But my points are: 1) we are not arguing efficiency vs inefficiency, we are arguing too efficient to beat vs nearly too efficient to beat and 2) without the inefficiencies, no one would be there to pay the actors making the market more efficient by trading the inefficiencies, and that no, it is not their stupidity that keeps them working for free.
I hope this is what you wanted when you suggested I was ignoring your point and merely arguing pro hominem, citing people who I thought should be much more believable than I am. If I missed anything that still seems critical, flag it to me and I’ll answer it.
I’m happy with that definition. EMH (Efficient Market Hypothesis) for those of you following along at home.
In my case I had amassed a small fortune by October of 1999 by simply holding the stock options I had been granted on taking the job 4 years earlier. They were up more than 10X at that point. Actionable? My very intelligent college roommate owned his own financial-advising firm. He spent two weeks on the phone with me convincing me that it would be gigantically more sensible to cash out these options and give them to him to invest “in case, in the future, people get up in the morning, put their clothes on, and go outside instead of sitting in front of their PCs all day ordering stuff off the internet.” He sent me books to read, including this one, first published in 1841, which describes witch hunts as well as the South Sea, Tulip, and other financial bubbles. Jim, my roommate, had been referring to tech as a bubble for a year or two before I talked to him in October of 1999. The action he was taking with his other clients was simply not to get into tech. This was a horribly unsatisfying strategy until about the middle of 2000, when tech was well into its slide from the top.
By the time I cashed out and handed him the money in about December 1999, the stock had more than doubled again. The human in me wanted to hold on to it because, obviously, this was a stock which kept on doubling. He explained to the rationalist in me that whatever the case for investing that money in something else was at half the price, the case was TWICE as good at twice the price, unless we had learned something quite important and positive about the business in the last two months. Which we hadn’t, of course. What we had learned is that there was no shortage of “greater fools” willing to buy in AFTER all that price appreciation had already happened on old information that was not changing nearly as fast as the price.
Over the next three years the stock I had sold in December 1999 gave back about 75% of its price gains. Meanwhile, my friend invested my money in REITs, Berkshire Hathaway, banks, and a bunch of other asset classes not even dreamed about by most of my fellow techies. The money I had given him grew by 40% more or less, I don’t remember exactly, while the nearly half of my original stock grant I had kept in my employer’s stock contracted to 20% of its peak value.
So yes, to me the internet bubble appears to have been actionable before it burst. The “investors” who stayed with the bubble rode it down, myself included: what started out as nearly half of my fortune ended as about a tenth of it. The shift of 60% of my money out of the bubble preserved my wealth at a level that may well have been unique among my peers at this company.
I realize you can’t get a drug approved with this kind of evidence. But you realize that most of what we “know” is the best model we can come up with in the absence of double blind studies. I’ve detailed the one best example in my life. I agree it is HARD to act on bubbles, shorting them is scary and fraught with risk, you are betting you can stay solvent longer than the market can stay stupid, which is quite a bet indeed. So bubbles, so spectacularly obvious in retrospect, may be no more reliably useful for making money than is any mispricing, even smaller more temporary ones.
Out of curiosity, are you enough of an EMH’er that you don’t believe in mispricings? Or at least not in publicly traded financial securities markets? Do you think it is just a roll of the dice that 9 students of Ben Graham all ran funds which had long-term returns above market averages? I think a bubble is just a particular kind of mispricing, a particular kind of inefficiency. It may be no easier to exploit than the other kinds of mispricings, but it is probably not harder to exploit. And shorting is not the only way to exploit bubbles or mispricings; just sticking with a discipline which on average avoids them appears to work for a broad range of investors, including such low-entropy categories of investors as former students of one professor who espoused value investing.
This actionable advice is also 100% justifiable without recourse to claims of superior perception simply by the high value of diversification. Keeping a large sum of money in a single stock’s options is really risky, even if you think it’s +EV, and even if you think some EMH conditions don’t apply (you had insider knowledge the market didn’t, the market was not deep or liquid, you had special circumstances, etc). Same reason I keep telling kiba to cash out some of his bitcoins and diversify—I am bullish on Bitcoin, but he should not keep so much of his net worth in a single volatile & risky asset.
MacKay is not the most reliable authority on these matters, you know. The book I mention punctures a few of the myths MacKay peddles.
An anecdote, as you well realize. You recall the hits and forget the misses. How many other bubbles did Jim call over the years? Did his clients on net outperform indices?
And would have grown by how much if they had been in REITs in 2008?
It’s not just that you’re betting that you can stay solvent longer, you’re betting that you have correctly spotted a bubble. There was a guy on the Bitcoin forums who entered into a short contract targeting Bitcoin at $30. Last I heard, he was upside-down by $100k and it was assumed he would not be paying out.
As a matter of fact, someone a while ago emailed me that to try to argue that EMH was false. This is what I said to them:
Speaking of Buffett’s magical returns, I found http://www.prospectmagazine.co.uk/economics/secrets-of-warren-buffett/ interesting although I’m not competent to evaluate the research claims.
Pretty much. I believe in inefficiencies in small or niche markets like Bitcoin or prediction markets, but in big bonds or stocks? No way.
I have watched countless people, from Paulson to Spitznagel to Dr Doom to Thiel, lose billions or sell their companies or get out of finance due to failed bets they made on ‘obvious’ predictions like hyperinflation and ‘bubbles’ in US Treasuries since that housing bubble which they supposedly called based on their superior rationality & investing skills. It certainly seems like it’s harder to exploit. As I said, when you look at complete track records and not isolated examples—do they look like luck & selection effects, or skill & sustained inefficiencies?
I heartily endorse this analysis. I would recommend actually the original paper rather than the review of that paper cited by gwern.
At no point that I could find in this paper did they find that they needed to appeal to luck or random outlier quality to explain Buffett’s performance. Indeed, except that it is decades after the fact, it seemed fairly simple for them to explain Buffett’s performance quantitatively: picking stocks that the authors say systematically outperform the market, sticking with his method of picking stocks in good and bad times for his portfolio or the market as a whole, and using a moderate amount of leverage, which they estimate at about 1.6x.
Not rocket science, not snake oil, and not a long sequence of lucky coin-flips.
Markets are a mechanism of resource allocation. They can be quite efficient sometimes, and fail spectacularly some other times, but in any case they don’t create new resources out of thin air.
Even in our present era of relative abundance, many people die of starvation, epidemic disease, and violent conflict. In a future with scarcer resources and a higher population, how are the markets going to handle a problem that they are unable to handle even now?
That’s a cute switcheroo.
The original issue was scarcity of resources leading to an “overshoot”—population spiking past resource constraints and then crashing back hard. I said that the markets allocate resources pretty well and there doesn’t seem to be an obvious reason why they would fail in this particular case.
No one claimed that the markets will magically solve “starvation, epidemic diseases and violent conflict”. It’s rather obvious that they don’t—but that’s an entirely separate discussion.
So, just to be clear, your position is that markets will prevent population growth from stopping in the foreseeable future, or that population will gracefully settle at the capacity level without violent oscillations?
My position is that there is no alternative that you can credibly show to be better than the markets in dealing with the issue of resource scarcity.
Markets do not “prevent population growth from stopping”. As to the gracefulness of landing, it’s for the gymnastics judges to estimate. By the way, I do not expect the population to reach the “capacity level” in the foreseeable future.
Here’s what I expect someone who seriously believed that markets will handle it would sound like:
“Wow, overpopulation is a threat? Clearly there are inefficiencies the rest of the market is too stupid to exploit. Let’s see if I can get rich by figuring out where these inefficiencies are and how to exploit them.”
Whereas “the markets will handle it, period, full stop” is not a belief, it’s an excuse.
Here is what I sound like:
“Wow, overpopulation is a threat? I don’t believe it. Show me.”
I don’t think anybody (but the most extreme leftists) is proposing a Soviet-like centrally planned economy, but without regulation, market mechanisms alone don’t necessarily deal well with resource scarcity. This has been both observed empirically and understood theoretically (tragedy of the commons, negative externalities, etc.).
Why not? Growth rate is already in decline, and AFAIK, most models of world population growth predict a peak in this century.
Well, markets don’t exist in a vacuum, of course, they need a reasonable framework of law and order. Just to start with you need property rights and the ability to enforce contracts.
That’s a different thing that doesn’t have much to do with the markets ability to deal with resource scarcity.
You keep pointing out that markets are not Jesus and they don’t automagically solve all humanity’s problems. Yes, yes, of course, but no one is arguing that. We’re talking about a fairly specific problem—dealing with resource scarcity—and you keep on bringing up how markets don’t solve violence and pollution...
I expect the population to reach a plateau and stabilize at some point. I do not expect that plateau to be the capacity level of the planet.
Well, they should provide a constructive alternative to the former, and the latter is isomorphic to a scarcity of non-polluted air/water/land.
If I understand correctly, after forty years, the main predictions stated in The Limits to Growth are still substantially consistent with observed data.
Can you throw some links in my direction?
http://en.wikipedia.org/wiki/Limits_to_growth#Reception
Thank you.
I’ve looked at the paper “A Comparison of The Limits to Growth with Thirty Years of Reality”; I wasn’t particularly impressed with how the predictions are faring. Sorry for the offhand dismissal, I don’t have much interest in fisking that report... From the abstract: “The analysis shows that 30 years of historical data compares favorably with key features of a business-as-usual scenario called the “standard run” scenario, which results in collapse of the global system midway through the 21st Century.”
I haven’t read the abstract, I have skimmed through the paper itself. As I said, I wasn’t particularly impressed.
Did you mean to ask “What conditions must functions f(m,t), k(x), and p(x) satisfy in order to insure that p(x) - f(m,k(x)) > 0 for all x > today()?”
If so, that still leaves m as a free variable.
Fixed, thanks.
There are quite a few who argue that we are already overshooting the carrying capacity. One way to measure it is the global hectare: http://en.wikipedia.org/wiki/Global_hectare
And according to the footprintnetwork we are already using up 150% of Earth’s carrying capacity: http://www.footprintnetwork.org/en/index.php/GFN/page/basics_introduction/ thus using up the available resources faster than they are (re)generated.
I think it’s fair to say that the danger of grey goo is greater now than it was in the 1980s. How well does the engineering mindset work for that problem?
On the other hand, when it comes to overpopulation, political solutions such as the Chinese one make massive amounts of progress.
Given that some people use no EM spectrum at all, it can be confusing to speak of something like it as “carrying capacity”. We tend to use a lot of resources because we can, not because we have to.
Advocating for population control the way the Chinese do is politically unpopular, and population control isn’t easy. Cutting resource consumption the way environmentalists propose seems to be easier.
Bill Gates’s journey is also interesting for this question. He started his philanthropic efforts by focusing on reducing population growth. Given that empowered working women don’t tend to have more than two children, he switched the focus of his efforts.
That’s why Bill Gates fights malaria. It’s also the cultural background in which GiveWell recommends funding more bed nets and improving local economies through direct money transfers. It’s no accident that the effective altruist crowd doesn’t focus on reducing population. It’s certainly not because they are not smart enough to think of ways to do so that don’t involve Chinese-style policy tools.
Oh, by the way, I thought of a few practical benefits I can hope to achieve with this discussion:
Next time someone who has read enough of this post wanders into a debate about global warming or deforestation or whatever, they will be armed with a constructive alternative to the standard green vs blue talking points.
Conversely, you can find here arguments for full-steam-ahead technological progress that luddites won’t be expecting because it follows directly from some of their favorite “we’re all doomed” arguments. I even suspect the reason I’m getting such a drubbing here is because I’m being mistaken for a Greenpeace-er.
If I’m right that most environmental and a fair number of political/economic/social problems are sequelae of overpopulation, that would be very useful to know, because it would focus efforts on the root cause instead of mistaking the overwhelming array of symptoms for independent problems. A unified theory of doom and gloom, if you will.
...edit: one more
The greater the probability I assign to the shit hitting the fan before singularity/space/nano-Clause happens, the more of my resources it means I should divert from my research to measures that will increase the chances of me and my immediate monkey-sphere surviving and preserving the information needed to rebuild.
What reason do you have for that suspicion? You make it clear that you think environmentalists are wrong.
The closest I’ve come to GUAT is general incompetence as the root cause. Tracing the cause of incompetence brings up… Incompetence. I figure if it’s recursive, it’s probably something we definitely need to focus on. If there’s a more severely recursive cause, I’ve yet to discover it.
What is GUAT?
Grand Unified Armageddon Theory. :p “What will be the root cause of the end of the world?”
We’re all incompetent compared to the theoretical limits of competence permitted by our current brain architecture. But with a smaller population, the stakes are lower for whatever blunders we do commit (up to a point, of course—there is obviously such a thing as a dangerously low population, but not even Wrongians would claim that we’re close to that boundary).
So, what is the first tier of secondary causes after the root cause of incompetence?
It varies widely, but that is the nature of opinion.
Here’s a talk on population growth by the head of ‘Population Matters’ at the 2011 annual British Humanist Association conference.
Thanks. I’ll watch it as soon as I’m someplace where I won’t be waking people up by doing so (or find my headphones).
Have you watched this video, and does it change any of your views? Hans Rosling makes the claim that world population will top out at around 10 billion, by simply continuing to do what we do now: educate people and let them have access to birth control.
Yes, I have. In my opinion ten billion is too close to overshoot and even 7 billion is too close. Especially if it is accompanied by increased per-capita demand for resources, which it has been so far. If we’re going to rely mainly on the population term of the equation, I think we need to shrink down to about 4 billion before we’re back in the safe zone.
Why? We could just halve our resource consumption.
There’s a good reason why magic numbers aren’t popular among rationalists. Reducing complex systems with multiple tunable variables to single numbers doesn’t help you understand them.
If you believe we can freely choose to do so on a global basis as a preventative measure, you are far more of an optimist than I am.
If you believe that things will get bad enough that we will be forced to do so, you might be more of a pessimist than I am.
Yes, if you’re tempted to use magic numbers you should just use unknowns with clearly stated support ranges and get a general result. I would rather have this discussion at the level of “let f(m,t) be the fraction of earth’s maximum capacity ‘m’ we can exploit at technology level ‘t’, let k(x) be the technology level at year ‘x’, and let p(x) be population at year ‘x’. What properties must f(m,t), k(x), and p(x) have to insure that p(x) - f(m,k(x)) > 0 for all x > today()?”
I’m plugging in magic numbers because otherwise I’ll be misunderstood even worse. Maybe I’m wrong about that.
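For what it’s worth, the condition can be sketched numerically. Everything below—the functional forms, the base capacity, the growth rates, the exploitable fraction—is a hypothetical illustration of the shape of the argument, not an empirical claim:

```python
# Minimal numerical sketch of the overshoot condition p(x) - f(m, k(x)) > 0.
# All functional forms and parameters are HYPOTHETICAL illustrations.

def k(x):
    """Assumed technology level at year x: slow exponential improvement."""
    return 1.005 ** (x - 2013)

def f(m, t):
    """Assumed supportable population, given exploitable fraction m of the
    theoretical resource limit and technology level t."""
    base_capacity = 10e9  # assumed theoretical limit, in people
    return base_capacity * m * t

def p(x):
    """Assumed population at year x: ~1.1%/yr exponential growth."""
    return 7e9 * 1.011 ** (x - 2013)

m = 0.8  # assumed exploitable fraction

# First year (if any) in the next two centuries where population exceeds
# what can be supported, i.e. p(x) - f(m, k(x)) > 0.
overshoot_year = next((x for x in range(2013, 2213)
                       if p(x) - f(m, k(x)) > 0), None)
print(overshoot_year)
```

With these assumed parameters the inequality first holds around 2036; nudge any parameter and the year moves, which is exactly why stating the conditions matters more than any particular magic number.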
Compared to freely choosing to cut population numbers in half or even further, I think the problem of resource usage seems easier. It’s still a hard problem.
But maybe we don’t even have to cut energy consumption that much. Solar energy seems to get cheaper by 50% every 7 years. Batteries also seem to improve well.
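Taking the halving-every-7-years figure at face value (the figure is from the comment above; the compounding arithmetic is just an illustration):

```python
# Relative cost after `years` years if cost halves every `halving_period` years.
def relative_cost(years, halving_period=7):
    return 0.5 ** (years / halving_period)

print(relative_cost(21))  # three halvings: 1/8 of today's cost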
The problem with the magic numbers in that case is that the resulting theory doesn’t tell us very much about the utility of reducing the human population by 5%. I wrote more about other issues in other posts.
For the most part, my emphasis is not on limiting population directly. I do believe that charitable efforts have the responsibility to mitigate the risk of a demographic trap in the areas they serve. But I think getting anybody who matters to listen is a lost cause.
My emphasis is on being conscious of the fact that the reason we’re still alive and prospering is that we are continuously buying ourselves more time with technology and use this insight to motivate greater investment in research and development. This seems like an easier sell.
The Bill and Melinda Gates foundation accounts for a good share of charity spending. They started by being very focused on the issue of reducing population.
They spent millions on the issue and have seen the empirical effects of their projects. To the extent that they no longer listen to the kind of arguments you are making, it is because they updated in the face of empirical evidence.
So, curious: am I getting downvoted here because I triggered your “ugh” field? Brought up something you don’t like to think about?
Because in some of my posts I’ve been kind of snippy, but I can’t find a single way in which I’m violating the rules of rational constructive discourse in the above post.
This is not because I care about my score. It’s because usually I understand what I did to earn an up-vote or down-vote. Here I’m genuinely curious what specific behavior you could possibly be trying to discourage? I mean, it couldn’t possibly be simple disagreement with you, because this is LessWrong. So enlighten me—maybe it’s a behavior I’ll want to minimize too once I’m aware of it.
You’re asserting a highly nonobvious result (seven billion looks fine from here) as though it were obvious fact.
Thanks, fixed.
You make big claims with no backup.
Not sure if you were addressing me particularly but in case you did, I didn’t downvote you. I actually found your claim, that 4 billion is back in the safe zone, to be thought provoking because that idea is novel to me personally, so thanks for that, but I don’t have an opinion on it yet.
Notice that, after doing my homework and seeing that the range of estimates of carrying capacity were in the range of 4-16 billion with a median of 10 billion, I revised my own estimate upward from 2 billion. Although, being at carrying capacity doesn’t sound particularly safe either, just safer.
he can’t really notice it since you edited it away
Treating numbers that people with obvious biases pulled out of their ass as credible. Seriously, look at the history of carrying capacity estimates, they’re always just above (or just below) whatever the current population happens to be.
Right. What’s disturbing is that people who don’t share these biases don’t respond with estimates of their own. They respond with “too negligible to matter”.
So, what would be a rational way to update based on both the detailed numbers provided by sources biased toward believing that overpopulation is a threat and on vague numbers provided by sources biased against believing that overpopulation is a threat?
What do you think the nature of each of these biases might be? Perhaps that might shed some light on how to correct for them.
By the way, how is this any different from half a century of predictions that AI is just around the corner?
It’s not automatically given that the zero-to-negative population growth among post-industrial societies will be sufficient to mitigate the fact that the amount of resources they use per capita can be an order of magnitude higher than in pre-service-economy societies.
Although I do agree that it does seem like getting everyone wealthy enough to snap out of high-birth-rate-mode as fast as possible is probably the best non-coercive solution for minimizing this sort of risk. (Which is actually part of the reason I speculate that effective altruism might be better placed in global intervention than in science research, though I remain uncertain)
Malthus will be counting the machines too.
Human numbers may decline during a memetic takeover, but machine numbers probably won’t.
The 10 billion top-out is a very near-term result. Based on the population distribution right now, a top at 10 billion followed by at least a little decline is baked in.
However, this says NOTHING about the population and its growth or shrinkage rate 100 years from now. The population distribution used to predict that population hasn’t been born yet.
If eugenics = Nazi, it’s time to re-evaluate all this talk of FAI and transhumanism.
Eugenics can be negative (breed out) or positive (breed in / maintain), and it can be state run or individually run. The line between birth control / family planning and eugenics is like the line between erotica and porn; the good things are good because they are good, not because of any quantifiable property of the thing itself.
Your assumptions and questions point to a desire for future generations to be as healthy and happy as we are today, or more so, and there is a name for that. A name that is out of fashion, but what the name describes is older than 1930s-40s Germany and is more practiced and discussed than ever. The power may be in the state (China’s child limits) or individuals (India’s use of abortion and birth control to have more boy children), but it’s here.
I favor access to birth control by individuals and I am against state decisions on family planning and health.
What’s the connection to re-evaluating FAI and transhumanism?
I didn’t say I think eugenics = Nazi. I just said Nazis advocated a particularly murderous and arbitrary form of eugenics, so now that’s all that comes to mind for most people today when they think about eugenics, if they do at all.
With a lot of work, though, we may eventually make that issue moot through in-vivo gene therapy.
I have encountered a severely limited ability in others to accurately understand that, when speaking on behalf of others, you are not speaking your own opinion. I recommend trying to be as explicit as possible in explaining public perception.
So do I. But, I bet I can come up with a demographic trend or two that would make the above position a difficult one to defend.
The word “eugenics” has generally been used for involuntary attempts to “eliminate undesirable traits”, either by state-run top-down efforts, or in some cases by pressure from the medical community, in order to make general long-term changes in the human gene pool as a whole.
It really has nothing to do with individuals making decisions that have an effect on the genetic health of their children (for example, women choosing sperm donors with college degrees in the hopes of having smarter children, people using pre-implantation genetic selection in IVF, etc.). Positive long-term effects on the human genome in general may be positive side effects of that, but they are not the main goal.
In any case, I think that eugenics (trying to make long-term changes in the human genotype through selective breeding or forced sterilization, etc.) is a foolish idea at this point. Even if you had some kind of species-wide eugenics program, it would take many, many generations for it to have any real effect, and long before then we should be selecting our genes directly (even without any kind of singularity or GAI, genetic science alone should do that quite soon).
People who are in favor of transhumanism shouldn’t talk about it in terms of eugenics. Any eugenics effects (positive or negative) are unlikely to be significant in either the short run or the long run, and eugenics has a well-deserved reputation for totalitarianism, abuse, and taking away people’s fundamental freedoms.
So… a majority or at least a vocal plurality of us believe that technology is not necessary for preventing population from overshooting the planet’s carrying capacity?
Or, are you so vehemently opposed to the very concept of limiting conditions that it discredits any argument it is part of, regardless of the rest of the argument?
Malthusian Crunch: Not Adjacent to This Complete Breakfast.
It seems like these discussions, even when they use biological terminology like “carrying capacity”, never seem to take biology into account as anything but a static force.
Malthus assumed that agriculture only increased production arithmetically, something that the Green Revolution disproves as it continues to increase crop yields and the percentage of arable land worldwide much faster than our population has grown. And it’s not exactly like we were in danger of hitting our upper limits before; even in the US you can see overgrown fallow fields with just a short drive out from a city (our tax dollars at work, courtesy of generous farm subsidies meant to keep food expensive), while most of the world’s farmers have been so thoroughly out-competed by food aid that they cannot afford the technology to use their fields efficiently. Even the fresh water crisis is only a temporary problem; we are even now developing plants and irrigation methods which can use salty and even contaminated water as effectively as old freshwater irrigation ever did.
Agriculture provides food raw materials (even plastics) and energy, with modern technology it is almost completely renewable, and our agricultural capability is expanding far faster than our population is. Obviously there are hard limits on how many people the earth can support, but it is a theoretical discussion on the level of how long we have until the sun collapses or the heat death of the universe occurs. The politics of population reduction are not, and never have been, about resource preservation.
“the Green Revolution disproves”
“the technology to use their fields efficiently”
“developing plants and irrigation methods”
“with modern technology it is almost completely renewable”
This illustrates precisely what I’m trying to say. The reason we haven’t experienced a Malthusian Crunch is not that the concept itself is impossible or absurd, but because we develop new technologies fast enough to continually postpone it.
This has some implications:
If technological development is derailed by cultural backlash, prolonged recession, or political lunacy, we may find ourselves having to cope with population overshoot on top of whatever the original problem was.
Responsible global citizens need to defend and promote technological progress with every bit of the same zeal they currently have for the natural environment.
Extrapolations of continued technological progress based on past performance are inherently unreliable. So if our extrapolations of not having to worry about overshoot are in effect extrapolations of extrapolations about technological progress, then those extrapolations are themselves not reliable and we cannot afford complacency.
I don’t really think this is true. Exponential growth can put up some very high numbers very fast. At 2010 growth rates, humanity should be in the quadrillions within mere centuries. In contrast, sun changes should not make earth uninhabitable for millions of years at least.
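A back-of-the-envelope check on the quadrillions claim (my arithmetic, taking ~1.1%/yr as an assumed 2010-era growth rate):

```python
import math

# How long until 7 billion people become a quadrillion (1e15)
# at a constant ~1.1%/yr growth rate?
growth = 0.011
years = math.log(1e15 / 7e9) / math.log(1 + growth)
print(round(years))
```

This comes out to roughly 1,085 years—“mere centuries” in the loose sense of about eleven of them, and still vastly sooner than any solar timescale.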
Yes, exactly.
Maybe the question to population deniers should be framed as:
What upper and lower bounds do you place on the hard limits of how many humans the planet can support indefinitely?
What upper and lower bounds do you place on the rate at which technological progress pushes the practically achievable limits toward the hard limits above?
What upper and lower bounds do you place on future world population levels, given that the current number is 7 billion?
From this we can then derive at least a self-consistent probability that overpopulation deniers should assign to Malthusian Crunch.
I think that probably the most effective means of population control, historically speaking, have been (in no particular order):
-Increased education (especially of females)
-Improved access to birth control
-Feminism, increased women’s rights
-Creating a society where women are allowed and encouraged to work outside the home
-Improved economics; getting out of a third-world economic state is vital
-Lowered childhood mortality rates
-Longer life-spans in general
Top-down population controls (like China’s) have much more severe side effects, and are probably less effective in the long run.
You probably need to look back more than the last 50 years to get any kind of insight into the things that will affect human population over the next few hundred or thousand years.
I’m talking about things we can do right now to deal with the potential of population growth.
Obviously if we cure old age, or start uploading ourselves to computers, or genetically engineer ourselves into something different, or fundamentally change human reproduction technologically, we would be in a completely different situation and would need to come up with new solutions. But I’m not sure we can really plan for that until we actually see how it would unfold; and, in any case, in any of those scenarios, we would be better able to deal with the consequences with a world population of 9 billion than with a world population of 12 billion.