Cascades, Cycles, Insight...
Followup to: Surprised by Brains
Five sources of discontinuity: 1, 2, and 3...
Cascades are when one thing leads to another. Human brains are effectively discontinuous with chimpanzee brains due to a whole bag of design improvements, even though they and we share roughly 95% of our genetic material and only a few million years have elapsed since the branch. Why this whole series of improvements in us, relative to chimpanzees? Why haven’t some of the same improvements occurred in other primates?
Well, this is not a question on which one may speak with authority (so far as I know). But I would venture an unoriginal guess that, in the hominid line, one thing led to another.
The chimp-level task of modeling others, in the hominid line, led to improved self-modeling which supported recursion which enabled language which birthed politics that increased the selection pressure for outwitting which led to sexual selection on wittiness...
...or something. It’s hard to tell by looking at the fossil record what happened in what order and why. The point being that it wasn’t one optimization that pushed humans ahead of chimps, but rather a cascade of optimizations that, in Pan, never got started.
We fell up the stairs, you might say. It’s not that the first stair ends the world, but if you fall up one stair, you’re more likely to fall up the second, the third, the fourth...
I will concede that farming was a watershed invention in the history of the human species, though it intrigues me for a different reason than it does Robin. Robin, presumably, is interested because the economy grew by two orders of magnitude, or something like that. But did having a hundred times as many humans lead to a hundred times as much thought-optimization accumulating per unit time? It doesn’t seem likely, especially in the age before writing and telephones. But farming, because of its sedentary and repeatable nature, led to repeatable trade, which led to debt records. Aha! Now we have writing. There’s a significant invention, from the perspective of cumulative optimization by brains. Farming isn’t writing, but it cascaded to writing.
Farming also cascaded (by way of surpluses and cities) to support professional specialization. I suspect that having one person spend their whole life thinking about topic X, instead of a hundred farmers occasionally pondering it, is a more significant jump in cumulative optimization than the gap between a hundred farmers and one hunter-gatherer pondering something.
Farming is not the same trick as professional specialization or writing, but it cascaded to professional specialization and writing, and so the pace of human history picked up enormously after agriculture. Thus I would interpret the story.
From a zoomed-out perspective, cascades can lead to what look like discontinuities in the historical record, even given a steady optimization pressure in the background. It’s not that natural selection sped up during hominid evolution. But the search neighborhood contained a low-hanging fruit of high slope… that led to another fruit… which led to another fruit… and so, walking at a constant rate, we fell up the stairs. If you see what I’m saying.
Predicting which sorts of things are likely to cascade seems like a very difficult problem.
But I will venture the observation that—with a sample size of one, and an optimization process very different from human thought—there was a cascade in the region of the transition from primate to human intelligence.
Cycles happen when you connect the output pipe to the input pipe in a repeatable transformation. You might think of them as a special case of cascades with very high regularity. (From which you’ll note that in the cases above, I talked about cascades through differing events: farming → writing.)
The notion of cycles as a source of discontinuity might seem counterintuitive, since it’s so regular. But consider this important lesson of history:
Once upon a time, in a squash court beneath Stagg Field at the University of Chicago, physicists were building a shape like a giant doorknob out of alternating layers of graphite and uranium...
The key number for the “pile” is the effective neutron multiplication factor. When a uranium atom splits, it releases neutrons—some right away, some after a delay while fission byproducts decay further. Some neutrons escape the pile; others strike another uranium atom and cause an additional fission. The effective neutron multiplication factor, denoted k, is the average number of neutrons from a single fissioning uranium atom that go on to cause another fission. At k < 1, the pile is “subcritical”. At k ≥ 1, the pile is “critical”. Fermi calculates that the pile will reach k = 1 between layers 56 and 57.
On December 2nd, 1942, with layer 57 completed, Fermi orders the final experiment to begin. All but one of the control rods (strips of wood covered with neutron-absorbing cadmium foil) are withdrawn. At 10:37am, Fermi orders the final control rod withdrawn about halfway out. The Geiger counters click faster, and a graph pen moves upward. “This is not it,” says Fermi, “the trace will go to this point and level off,” indicating a spot on the graph. In a few minutes the graph pen comes to the indicated point, and does not go above it. Seven minutes later, Fermi orders the rod pulled out another foot. Again the radiation rises, then levels off. The rod is pulled out another six inches, then another, then another.
At 11:30, the slow rise of the graph pen is punctuated by an enormous CRASH—an emergency control rod, triggered by an ionization chamber, activates and shuts down the pile, which is still short of criticality.
Fermi orders the team to break for lunch.
At 2pm the team reconvenes, withdraws and locks the emergency control rod, and moves the control rod to its last setting. Fermi makes some measurements and calculations, then again begins the process of withdrawing the rod in slow increments. At 3:25pm, Fermi orders the rod withdrawn another twelve inches. “This is going to do it,” Fermi says. “Now it will become self-sustaining. The trace will climb and continue to climb. It will not level off.”
Herbert Anderson recounted (as told in Rhodes’s The Making of the Atomic Bomb):
“At first you could hear the sound of the neutron counter, clickety-clack, clickety-clack. Then the clicks came more and more rapidly, and after a while they began to merge into a roar; the counter couldn’t follow anymore. That was the moment to switch to the chart recorder. But when the switch was made, everyone watched in the sudden silence the mounting deflection of the recorder’s pen. It was an awesome silence. Everyone realized the significance of that switch; we were in the high intensity regime and the counters were unable to cope with the situation anymore. Again and again, the scale of the recorder had to be changed to accommodate the neutron intensity which was increasing more and more rapidly. Suddenly Fermi raised his hand. ‘The pile has gone critical,’ he announced. No one present had any doubt about it.”
Fermi kept the pile running for twenty-eight minutes, with the neutron intensity doubling every two minutes.
That first critical reaction had k of 1.0006.
It might seem that a cycle, with the same thing happening over and over again, ought to exhibit continuous behavior. In one sense it does. But if you pile on one more uranium brick, or pull out the control rod another twelve inches, there’s one hell of a big difference between k of 0.9994 and k of 1.0006.
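To make the knife-edge concrete, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not a model of the actual pile): the neutron population simply scales as k raised to the number of fission generations, so values of k just below and just above 1 diverge enormously once the generations pile up. The generation counts are illustrative only; the real pile’s observed two-minute doubling time was governed largely by delayed neutrons.

```python
import math

def population_after(k, generations, start=1.0):
    """Neutron population after `generations` fission generations,
    multiplying by k each generation."""
    return start * k ** generations

for k in (0.9994, 1.0006):
    print(f"k = {k}:")
    for n in (1_000, 10_000, 100_000):
        print(f"  after {n:>7} generations: {population_after(k, n):.3e} x initial")
    # Generations needed for the population to double (k > 1) or halve (k < 1).
    print(f"  doubling/halving: ~{math.log(2) / abs(math.log(k)):.0f} generations\n")
```

With k = 0.9994 the chain reaction dies away geometrically; with k = 1.0006 it doubles roughly every 1,155 generations and keeps right on doubling until something intervenes.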
If, rather than being able to calculate, rather than foreseeing and taking cautions, Fermi had just reasoned that 57 layers ought not to behave all that differently from 56 layers—well, it wouldn’t have been a good year to be a student at the University of Chicago.
The inexact analogy to the domain of self-improving AI is left as an exercise for the reader, at least for now.
Economists like to measure cycles because they happen repeatedly. You take a potato and an hour of labor and make a potato clock which you sell for two potatoes; and you do this over and over and over again, so an economist can come by and watch how you do it.
As I noted here at some length, economists are much less likely to go around measuring how many scientific discoveries it takes to produce a new scientific discovery. All the discoveries are individually dissimilar and it’s hard to come up with a common currency for them. The analogous problem will prevent a self-improving AI from being directly analogous to a uranium heap, with almost perfectly smooth exponential increase at a calculable rate. You can’t apply the same software improvement to the same line of code over and over again; you’ve got to invent a new improvement each time. But if self-improvements are triggering more self-improvements with great regularity, you might stand a long way back from the AI, blur your eyes a bit, and ask: What is the AI’s average neutron multiplication factor?
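As a purely illustrative toy (my own sketch of that blurred-eyes question, with a made-up multiplication factor, not anything computed in the post): suppose each self-improvement triggers, on average, k further self-improvements. Even though no two improvements are alike, the cumulative count behaves very differently on either side of k = 1.

```python
def total_improvements(k, seed=1.0, rounds=50):
    """Expected cumulative improvements after `rounds` rounds, starting
    from `seed` improvements, each triggering k more on average."""
    current, total = seed, seed
    for _ in range(rounds):
        current *= k      # this round's improvements spawn k times as many
        total += current
    return total

for k in (0.8, 0.95, 1.05, 1.2):
    print(f"k = {k:4}: ~{total_improvements(k):10.1f} improvements after 50 rounds")
```

For k < 1 the total converges (toward roughly seed / (1 - k)); for k > 1 it grows without bound. Whether any such average is stable enough to be meaningful is, of course, the hard part.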
Economics seems to me to be largely the study of production cycles—highly regular repeatable value-adding actions. This doesn’t seem to me like a very deep abstraction so far as the study of optimization goes, because it leaves out the creation of novel knowledge and novel designs—further informational optimizations. Or rather, treats productivity improvements as a mostly exogenous factor produced by black-box engineers and scientists. (If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things.) (Answered: This literature goes by the name “endogenous growth”. See comments starting here.) So far as I can tell, economists do not venture into asking where discoveries come from, leaving the mysteries of the brain to cognitive scientists.
(Nor do I object to this division of labor—it just means that you may have to drag in some extra concepts from outside economics if you want an account of self-improving Artificial Intelligence. Would most economists even object to that statement? But if you think you can do the whole analysis using standard econ concepts, then I’m willing to see it...)
Insight is that mysterious thing humans do by grokking the search space, wherein one piece of highly abstract knowledge (e.g. Newton’s calculus) provides the master key to a huge set of problems. Since humans deal in the compressibility of compressible search spaces (at least the part we can compress), we can bite off huge chunks in one go. This is not mere cascading, where one solution leads to another:
Rather, an “insight” is a chunk of knowledge which, if you possess it, decreases the cost of solving a whole range of the problems it governs.
There’s a parable I once wrote—I forget what for, I think ev-bio—which dealt with creatures who’d evolved addition in response to some kind of environmental problem, and not with overly sophisticated brains—so they started with the ability to add 5 to things (which was a significant fitness advantage because it let them solve some of their problems), then accreted another adaptation to add 6 to odd numbers. Until, some time later, there wasn’t a reproductive advantage to “general addition”, because the set of special cases covered almost everything found in the environment.
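Here is a hypothetical rendering of that parable in code (the specific cases are my own invention, purely for illustration): each adaptation is a narrow special case, and once the accumulated cases cover nearly every input that actually occurs in the environment, the general solution confers almost no extra advantage.

```python
def accreted_adder(x, y):
    """Addition by accumulated special-case adaptations."""
    if y == 5:                   # first adaptation: add 5 to anything
        return x + 5
    if y == 6 and x % 2 == 1:    # later adaptation: add 6, but only to odd numbers
        return x + 6
    raise NotImplementedError("no adaptation covers this case yet")

def general_adder(x, y):
    """The one-fell-swoop version: general addition, every case at once."""
    return x + y
```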
There may even be a real-world example of this. If you glance at a set of objects, you should be able to instantly distinguish the numbers one, two, three, four, and five, but seven objects in an arbitrary (non-canonical) pattern will take at least one noticeable instant to count. IIRC, it’s been suggested that we have hardwired numerosity-detectors, but only up to five.
I say all this, to note the difference between evolution nibbling bits off the immediate search neighborhood, versus the human ability to do things in one fell swoop.
Our compression of the search space is also responsible for ideas cascading much more easily than adaptations. We actively examine good ideas, looking for neighbors.
But an insight is higher-level than this; it consists of understanding what’s “good” about an idea in a way that divorces it from any single point in the search space. In this way you can crack whole volumes of the solution space in one swell foop. The insight of calculus apart from gravity is again a good example, or the insight of mathematical physics apart from calculus, or the insight of math apart from mathematical physics.
Evolution is not completely barred from making “discoveries” that decrease the cost of a very wide range of further discoveries. Consider e.g. the ribosome, which was capable of manufacturing a far wider range of proteins than whatever it was actually making at the time of its adaptation: this is a general cost-decreaser for a wide range of adaptations. It likewise seems likely that various types of neuron have reasonably-general learning paradigms built into them (gradient descent, Hebbian learning, more sophisticated optimizers) that have been reused for many more problems than they were originally invented for.
A ribosome is something like insight: an item of “knowledge” that tremendously decreases the cost of inventing a wide range of solutions. But even evolution’s best “insights” are not quite like the human kind. A sufficiently powerful human insight often approaches a closed form—it doesn’t feel like you’re exploring even a compressed search space. You just apply the insight-knowledge to whatever your problem is, and out pops the now-obvious solution.
Insights have often cascaded, in human history—even major insights. But they don’t quite cycle—you can’t repeat the identical pattern Newton used originally to get a new kind of calculus that’s twice and then three times as powerful.
Human AI programmers who have insights into intelligence may acquire discontinuous advantages over others who lack those insights. AIs themselves will experience discontinuities in their growth trajectory associated with becoming able to do AI theory itself—a watershed moment in the FOOM.
I’ve been following along and enjoying the exchange so far, but it doesn’t seem to be getting past the “talking past each other” phase.
For example, the Fermi story works as an example of a cycle as a source of discontinuity. But I don’t see how it establishes anything that Robin would have disputed. I guess that Eliezer would say that Robin has been inattentive to its lessons. But he should then point out where exactly Robin’s reasoning fails to take those lessons into account. Right now, he just seems to be pointing to an example of cycles and saying, “Look, a cycle causing discontinuity. Does that maybe remind you of something that perhaps your theorizing has ignored?” I imagine that Robin’s response will just be to say, “No,” and no progress will have been made.
And, of course, once the Fermi story is told, I can’t help but think of how else it might be analogous to the current discussion. When I look at the Fermi story, what I see is this: Fermi took a powerful model of reality and made the precise prediction that something huge would happen between layers 56 and 57, whereas someone without that model would have just thought, “I don’t see how 57 is so different from 56.” What I see happening in this conversation is that Robin says, “Using a powerful model of reality, I predict that an event, which Eliezer thinks is very likely, will actually happen only with probability <10%.” (I haven’t yet seen a completely explicit consensus account of Robin and Eliezer’s disagreement, but I gather that it’s something like that.) And Eliezer’s replies seem to me to be of the form “You shouldn’t be so confident in your model. Previous black swans show how easily predictions based on past performance can be completely wrong.”
I concede that the analogy between the Fermi story and the current conversation is not the best fit. But if I pursue it, what I get is this: Robin is in a sense claiming to be the Fermi in this conversation. He says that he has a well-established body of theory that makes a certain prediction: that Eliezer’s scenario has very low probability of happening.
Eliezer, on the other hand, is more like someone who, when presented with Fermi’s predictions (before they’d been verified) might have said, “How can you be so confident in your theory? Don’t you realize that a black swan could come and upset it all? For example, maybe a game-changing event could happen between layers 32 and 33, preventing layer 57 from even occurring. Have you taken that possibility into account? In fact, I expect that something will happen at some point to totally upset your neat little calculations.”
Such criticisms should be backed up with an account of where, exactly, Fermi is making a mistake by being so confident in his prediction about layer 57. Similarly, Eliezer should say where exactly he sees the flaws in Robin’s specific arguments. Instead, we get these general exhortations to be wary of black swans. Although such warnings are important, I don’t see how they cash out in this particular case as evidence that Robin is the one who is being too confident in his predictions.
In other words, Robin and Eliezer have a disagreement that (I hope) ultimately cashes out as a disagreement about how to distribute probability over the possible futures. But Eliezer’s criticisms of Robin’s methods are all very general; they point to how hard it is to make such predictions. He argues, in a vague and inexact way, that predictions based on similar methods would have gone wrong in the past. But Eliezer seems to dodge laying out exactly where Robin’s methods go wrong in this particular case and why Eliezer’s succeed.
Again, the kinds of general warnings that Eliezer gives are very important, and I enjoy reading them. It’s valuable to point out all the various quarters from which a black swan could arrive. But, for the purposes of this argument, he should point out how exactly Robin is failing to heed these warnings sufficiently. Of course, maybe Eliezer is getting to that, but some assurance of that would be nice. I have a large appetite for Eliezer’s posts, construed as general advice on how to think. But when I read them as part of this argument with Robin, I keep waiting for him to get to the point.
Economists do look at innovation. See my working paper “Teaching Innovation in principles of microeconomics classes.”
http://sophia.smith.edu/~jdmiller/teachinginnovation.pdf
Tyrrell: My impression is that you’re overstating Robin’s case. The main advantage of his model seems to be that it gives numbers, which is perhaps nice, but it’s not at all clear why those numbers should be correct. It seems like they assume a regularity between some rather incomparable things, which one can draw parallels between using the abstractions of economics; but it’s not so very clear that they apply. Eliezer’s point with the Fermi thing isn’t “I’m Fermi!” or “you’re Fermi!”, but just that since powerful ideas have a tendency to cascade and open doors to more powerful ideas, it seems likely that not too long before a self-improving AI takes off as a result of a sufficiently powerful set of ideas, leading AI researchers will still be uncertain of whether such a thing will take months, years, or decades, and reasonably so. In other words, this accumulation of ideas is likely to explode at some point, but our abstractions (at least economic ones) are not a good enough fit to the problem to say when or how. But the point is that such an explosion of ideas would lead to the hard takeoff scenario.
James, this is a great intro for students but it doesn’t deal with an economy of ideas, it talks about ideas in the economy. There is no mention of how many units of scientific input it takes to produce one unit of scientific discovery. This is hard for economists to study because each discovery only happens once, and it won’t be clear what the important inputs were because key nonrival info won’t have been visibly purchased.
Which isn’t a critique of economics. You would have to be a cognitive scientist in order to discover the brain processes that repeat regularly to produce such discoveries. Economists study mostly repeatable processes, like “put a car together” or “pay a scientist $5”. That’s fine. They’re not supposed to be studying the scientist’s prefrontal cortex doing some kind of prefrontal thingy over and over again. But it means that you have to drag in concepts from outside standard econ to make sense of where ideas come from and what sort of optimization velocity they might have.
I feel bad making an unrelated comment because the topic is so interesting, but an interesting fact coincidentally related to two separate points made is that it appears that chimps are actually better at subitizing (determining the size of a set without counting) than humans. Upon searching I didn’t find any large studies, but the numbers I did find suggest that chimps can subitize sets up to a size of either 6 or 7 (compared to probably 5 for humans).
Tyrrell, it seems to me that there’s a huge difference between Fermi’s model and the one Robin has presented. Fermi described a precise mechanism that made precise predictions that Fermi was able to state ahead of time and confirm experimentally. Robin is drawing a general analogy between several historical events and drawing a rough line connecting them. There are an enormous number of events that would match his prediction, and another enormous number of non-events that Robin can respond to with “just wait and see.”
So I don’t really see Eli as just saying that black swans may upend Robin’s expected outcomes. In this case, Eli’s side of the argument is that he’s arguing for a force multiplier that will change the regime of progress, like Fermi’s. Unfortunately for Eli’s argument, he hasn’t yet produced the mathematical model or the detailed physical model that would let us put numbers on the predictions. So this particular little story just argues for the plausibility of the model that says takeoff might happen at some point. Eli has been arguing for a little while that the regime change projection has more plausibility than Robin thinks, but Robin has already granted some plausibility, so he doesn’t have to cede any more ground (as you say) because of this argument. Robin can just say that this is the kind of effect that he was already taking into account, and we are still waiting for Eli to show likelihood.
As far as general models of repeated insight, the best I can do is point to Smolin’s model of the progress of fundamental physics as presented in “The Trouble with Physics.” He shows how breakthroughs from Copernicus, Galileo, Bacon, Newton, Maxwell, and Einstein were a continuous series of unifications. From my blog (linked above): “The focus was consistently on what pre-existing concepts were brought together in one of two ways. Sometimes the unification shows that two familiar things that are thought of as distinct are really the same thing, giving a deeper theory of both. (the Earth is one planet among several, the Sun is one star among many.) Other times, two phenomena that weren’t understood well are explained as one common thing (Bacon showed that heat is a kind of motion; Newton showed that gravity explained both planetary orbits and ballistic trajectories; Maxwell showed that electricity and magnetism are the different aspects of the same phenomenon.)”
Einstein seems to have consciously set out to produce another unification, and succeeded twice in finding other aspects of reality to fold together with a single model. AFAICT, it hasn’t been done again on this scale since QED and QCD.
Btw, I’m interested in “farming” first because growth rates suddenly increased by two orders of magnitude; by “farming” I mean whatever was the common local-in-time cause of that change. Writing was part of the cascade of changes, but it seems historically implausible to call writing the main cause of the increased growth rate. Professional specialization has more promise as a main cause, but it is still hard to see.
There is an extensive endogenous growth literature, albeit much of it quite recent.
anon and Chris Hibbert, I definitely didn’t mean to say that Robin is claiming to be working with as much certainty as Fermi could claim. I didn’t mean to be making any claim about the strength or content of Robin’s argument at all, other than that he’s assigning low probability to something to which Eliezer assigns high probability.
Like I said, the analogy with the Fermi story isn’t very good. My point was just that a critique of Fermi should have addressed his calculations, pointing out where exactly he went wrong (if such a point could be found). Eliezer, in contrast, isn’t really grappling with Robin’s theorizing in a direct way at all. I know that the analogy isn’t great for many reasons. One is that Robin’s argument is in a more informal language than mathematical physics. But still, I’d like to see Eliezer address it with more directness.
As it is, this exchange doesn’t really read like a conversation. Or, it reads like Robin wants to engage in a conversation. Eliezer, on the other hand, seems to think that he has identified flaws in Robin’s thinking, but the only way he can see to address them is by writing about how to think in general, or at least how to think about a very broad class of questions, of which this issue is only a very special case.
I gather that, in Eliezer’s view, Robin’s argument is so flawed that there’s no way for Eliezer to address it on its own terms. Rather, he needs to build a solid foundation for reasoning about these things from the ground up. The Proper Way to answer this question will then be manifest, and Robin’s arguments will fall by the wayside, clearly wrong simply by virtue of not being the Proper Way.
Eliezer may be right about that. Indeed, I think it’s a real possibility. Maybe that’s really the only way that these kinds of things can be settled. But it’s not a conversation. And maybe that will be the lesson that comes out of this. Maybe conversation is overrated.
None of this is supposed to be a criticism of either Eliezer’s or Robin’s side of this specific issue. It’s a criticism of how the conversation is being carried out. Or maybe just an expression of impatience.
Endogenous Growth theory, Economic Growth and Research Policy all seem to be building mathematical models that attempt to generalize over our experience of how much government funding leads to increased growth, how quickly human capital feeds back into societal or individual wealth, or what interventions have helped poor countries to develop faster. None of them, AFAICT, has been concrete enough to lead to solid policy prescriptions that have reliably enabled anyone or any country to recreate the experiences that led to the models.
In order to have a model solid enough to use as a basis for theorizing about the effects on growth of a new crop of self-improving AGIs, we’d need to have a much more mechanistic model behind endogenous growth. Fermi’s model told him how to calculate how many neutrons would be released given a particular density of uranium of a particular purity, how much would be absorbed by a particular quantity of shielding, and therefore where the crossover would be from a k of less than 1 to greater than 1. None of the growth models gives us numbers that we can apply to human intelligence, much less abstractions that we could extend to cover the case of intelligences learning faster than we do.
Chris, I wouldn’t claim growth theory models are anywhere near as precise as the nuclear fission model was. I’d just suggest they seem at least as well thought out and disciplined in contact with reality as Eliezer’s theories are. I suggest doubters browse the last few issues of Journal of Economic Growth.
“at least as well thought out and disciplined in contact with reality as Eliezer’s theories are”
I’ll have to grant you that, Robin. Eliezer hasn’t given us much solid food to chew on yet. Lots of interesting models and evocative examples. But it’s hard to find solid arguments that this particular transition is imminent, that it will be fast, and that it will get out of control.
Eliezer, look particularly at Weitzman’s ’98 paper on Recombinant Growth and this ’06 extension.
Chris, I’m not sure that endogenous growth models are less precise than models of nuclear fission relative to the amount of noise in the available data.
By the way, when are Eliezer and Robin going to start talking in terms of precise probabilities of general AI scenarios? Or did I miss that? It just seems that they’ve mostly been adding detail to their predictions to make them seem more believable, which goes against everything I’ve read on this blog (no offense intended).
Robin and Jon have answered my challenge and I retract my words. Reading now.
Another interesting innovation of evolution is the Hox genes. Basically, evolution invented modularity: it saved the fact that six legs were useful and allowed the space of different lengths of all the legs to be searched, rather than tweaking each individually. So I would say the Homeobox system would count as an insight, for genes.
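A hypothetical sketch of that point (the encoding is mine, just to illustrate the commenter’s idea of modularity): a Hox-like genome stores the body plan once and exposes a single leg-length parameter, whereas a non-modular genome must mutate each leg separately to achieve the same coordinated change.

```python
# Non-modular: every leg specified independently; a coordinated change in
# leg length requires six matching mutations.
flat_genome = {"leg1": 1.0, "leg2": 1.0, "leg3": 1.0,
               "leg4": 1.0, "leg5": 1.0, "leg6": 1.0}

# Modular ("Hox-like"): the useful fact "six legs" is stored once, and a
# single parameter scales all of them together.
modular_genome = {"num_legs": 6, "leg_length": 1.0}

def grow_legs(genome):
    """Expand a modular genome into the per-leg lengths it produces."""
    return [genome["leg_length"]] * genome["num_legs"]

print(grow_legs(modular_genome))  # one mutation to leg_length changes all six legs
```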
Oddly, the comparatively tiny numbers of unspecialized nomads seem to have continued to make very significant contributions (iron, riding, high-quality bows, specialized arrows, saddles and stirrups, firearms and cannon, imperialism/logistics/mercantilism, maybe monotheism, lots of elements of ethics, aesthetics, and music) to human culture. This doesn’t fit easily with Eliezer’s focus on specialization OR Robin’s focus on numbers.
Also, as far as I can tell, the population numbers that Robin is using when he says that growth rates suddenly increased by two orders of magnitude with farming are basically worthless. We don’t even have order-of-magnitude agreement on the population of the Americas in 1491, which is MUCH more recent than the origin of agriculture, and we DO have large disagreement between currently credible estimates and the estimates considered credible a few decades ago. Hell, we don’t know the population of Afghanistan to within 40% (credible standard sources disagree) or the GDP of Cuba to within a factor of three (ditto). I would take his estimates at least a little more seriously, though only a little, if he could convince me that even economists’ estimates of GDP growth in the 20th century were credible, as I don’t believe them to be. One piece of information that would make me take these estimates more seriously would be a careful study of the ways in which Amish lifestyles have changed over the last century, along with data on what the per-capita Amish GDP and Amish birth rate were over that period. Can anyone point me to some such studies? If the Amish were living very similarly 100 years ago and today, but today they supposedly have a per-capita GDP of $20K (half the per-capita figure for the US), while 100 years ago they had a per-capita GDP of $1K (equal to the per-capita figure for the US at the time), this suggests that in at least one real and relevant sense US per-capita GDP has only doubled in a century (tripled, possibly, when one factors in some improvements in their standard of living).
Michael, I don’t know the details of the estimates, but even a factor of three would be relatively unimportant here. And do the nomads to which you refer subsist largely by preying on farmers rather than by hunting-gathering? Then they would be specialized professional warriors.
For Native American populations, standard estimates 50 years ago, and common estimates among experts 20 years ago, were for a few million. Now the consensus is 10M-100M, probably closer to 40M. Pretty much ALL ancient world populations seem to have been traditionally underestimated by at least a factor of 2, leading to serious uncertainty as to whether total world population was ANY higher in 1650 than in 200. Some nomads subsist by raiding, others largely by herding sheep, horses, or cattle. In either case they are heavily occupied with defending their herds from other nomads, making them very martial in focus, but somewhat less so than knights are. Hunter-gatherers are also often specialized professional warriors to a significant extent, especially where nature is particularly bounteous, as on many tropical islands. They usually do some fishing; nomads do some animal husbandry.
Robin:
I don’t think that this is what Eliezer is saying (and correct me if I’m wrong). What Robin seems to be inferring is a claim that writing or professional specialization was the cause of the 100-fold population increase. What Eliezer is actually arguing is that writing and professional specialization are more interesting than the population increase from the point of view of optimization processes, and the 100-fold population increase is merely incidental, even if it was a prerequisite. From the original post, my emphasis added:
The disagreement between Robin and Eliezer shouldn’t be surprising, considering simple lapses in understanding such as these (and yes, I’ve seen others, but I’m not about to dig them up).
When I was reading about the spread of farming across Europe, starting about 7000 years ago, it was asserted that most European land could support 100 times as many farmers as hunters. I was left with the impression that this was determined by counting the calories in the game on the land versus the calories in the crops that were grown back then. If farming were not able to support many times more people per acre, then we are without an explanation of why the hunters of Europe were unable to stop the spread of the farmers across Europe. The hunters would have stopped the spread if they could have, because most of the time they were unable to switch to the farming lifestyle: I think we have genetic evidence that the new land put under farming was populated mostly by the descendants of farmers. Also, the steadiness of the rate of spread of farming over many generations suggests that the farmers never encountered effective resistance from the hunters, despite the obvious fact that the hunters were specialized in skills that should have conferred military advantages.
Is the disagreement about the speed and scope of the power of intelligence recursively applied to intelligence improvement (whether ems or AIs)?
By “speed”, I mean the equivalent of the neutron multiplication number in the Fermi analogy. Is Robin saying that, whatever it is, it won’t be so large that things get out of hand: even if it’s higher than estimated, improvement will still be on a timescale that allows for human control (just as, had Fermi been off a bit, there would still have been time to shove in the control rods)? In particular, the improvement rate won’t be so large that it can’t be modelled with traditional economic tools. As opposed to Eliezer, who thinks that the situation is as if Fermi had actually put together an almost-bomb, and being off a bit would have resulted in a nuclear FOOM.
By “scope”, I mean the eventual level of technology reachable once the process has run up against its limits. I guess in the Fermi analogy, this is the difference between the nuclear and electrochemical energy scales. Is there disagreement about what might eventually be achieved by very intelligent entities?
My intuition is that the hard takeoff is unlikely, but the size of the potential catastrophe is so huge that Friendliness is a worthwhile study.
Well, one reason EY can’t address Hanson’s criticisms is … he doesn’t know economics.
Let’s wait for him to learn it and check in tomorrow!
Richard Hollerith, of the evidence you mention, the steadiness seems the best to me. But, as michael vassar worries, the data is poor quality and being read by people who want to tell a particular story.
Can you point to actual calorie-counting?
Richard Hollerith: Herds of bison on the American plains numbered around 50 million in the mid-19th century, having earlier numbered 60 to 100 million ( http://en.wikipedia.org/wiki/American_Bison ). With a conservative 800 lbs/bison of meat and an 8-year replacement time, that suggests a mostly-bison diet could feed 15M-25M hunters on just the US Great Plains if herds were used efficiently, and more plausibly 5M-10M with more likely levels of efficiency. A single river with a salmon run can support a primitive fishing town with a few tens of thousands of people. Throw in Africa and Asia and I would expect a few tens of millions of human hunter-gatherers. This is still consistent with a 3 to 5 fold increase in population due to the shift to agriculture, which could easily be enough to cause the replacement of hunter-gatherers (though I have alternative speculations as to what may have happened in this case), and another 3 to 5 fold increase in population subsequent to the development of agriculture but prior to industrialization, due to several thousand years of improved crops, irrigation, fertilization, pest control, and organizational institutions (broadly, capital improvements). I don’t know of ANY model for hunter-gatherer populations which seriously suggests that they would have grown approximately smoothly and exponentially, rather than expanding to carrying capacity with some overshoot, like other animals. Intuitive models for agriculture suggest a roughly linear or geometric trend ultimately leading to diminishing returns as good farmland is depleted following the dissemination of agriculture, followed by an initially slower exponential trend from the multiplicative impact of innovations such as better tilling methods or hybrid grains.
Michael Vassar: excavation reveals that native Americans habitually stampeded bison herds over a cliff, yielding vastly more meat than they could use, so perhaps your estimate of the efficiency with which hunters were able to utilize bison meat is overoptimistic?
Douglas Knight: no, I cannot point to actual calorie counting, and maybe I misremember.
I already included a factor for 33% to 80% waste.
To summarize, Michael Vassar offers bison on the Great Plains as evidence that maybe farming was not clearly superior to hunting (and gathering) in the number of humans a given piece of land could support. Well, here is a quote on the bison issue:
Source
One man’s modus ponens is another’s modus tollens.
I don’t see that the stampede is consistent with a lack of much use of buffalo. Stampedes are only inefficient if they have great variance. This might explain the conjunction of the stories of inefficient stampedes and efficient use of individual buffalo.
One theory is that farmers displace hunter-gatherers because HG have high variance yields, while farmers don’t. That still requires explanation of why HG don’t displace farmers in booms.
Height in the pre-Columbian Great Plains would give an easy way to check your source’s claim that they were on the margins of subsistence. But even if true, that only tells us that farmers displaced HG, which we know happens. It doesn’t address the question of what HG population could exist.
Side comment: Subitizing (the not-counting thing; see http://en.wikipedia.org/wiki/Subitizing_and_counting ) has been rather extensively studied. I can’t find good references, but it is apparently quite amenable to expertise. I have a friend who worked in inventory auditing (i.e. counting stuff in warehouses). He got into the 7-8 range. ISTR hearing in my psych classes of factory workers who got as high as 20 in their (hyper-specialized) domains.
The idea of something becoming “critical” seems to sum up this entire book:
http://www.amazon.com/Tipping-Point-Little-Things-Difference/dp/0316346624