excellent theory about cryonics, much more plausible than things like “people hate cryonics because they’re biased against cold” that have previously appeared on here.
willingness to acknowledge a serious issue. Work is terrible, and the lives of many working people, even people with "decent" jobs in developed countries, are barely tolerable. It is currently socially unacceptable to mention this. Anyone who breaks that silence has done a good deed.
spark discussion on whether this will continue into the future. I was reading a prediction from fifty years ago or so that by 2000, people would only work a few hours a day or a few days a week, because most work would be computerized/roboticized and technology would create amazing wealth. Most work has been computerized/roboticized, technology has created amazing wealth, but working conditions are little better, and maybe worse, than they were fifty years ago. A Hansonian-style far future could lead to more of the same, and Hanson even defends this to a degree. In my mind, this is something futurologists should worry about.
summary of the article was much better than the article itself, which was cluttered with lots of quotes and pictures and lengthiness. Summaries that are better than the original articles are hard to do, hence, upvote.
I was reading a prediction from fifty years ago or so that by 2000, people would only work a few hours a day or a few days a week, because most work would be computerized/roboticized and technology would create amazing wealth. Most work has been computerized/roboticized, technology has created amazing wealth, but working conditions are little better, and maybe worse, than they were fifty years ago.
Technological advances can't shorten work hours because even in a society wealthy and technologically advanced enough that basic subsistence is available for free, people still struggle over zero-sum goods, most notably land and status. Once a society is wealthy enough that basic subsistence is a non-issue, people probably won't work as much as they would in a Malthusian trap where constant toil is required just to avoid starvation, but they will still work a lot, because they're locked in these zero-sum competitions.
What additionally complicates things is that habitable land is close to a zero-sum resource for all practical purposes, since to be useful, it must be near other people. Thus, however wealthy a society gets, for a typical person it always requires a whole lot of work to be able to afford decent lodging, and even though starvation is no longer a realistic danger for those less prudent and industrious in developed countries, homelessness remains so.
There is also the problem of the locked signaling equilibrium. Your work habits have a very strong signaling component, and refusing to work the usual expected hours strongly signals laziness, weirdness, and issues with authority, making you seem completely useless, or worse.
As for working conditions: in terms of safety, cleanliness, physical hardship, etc., typical conditions in developed countries are clearly much better than they were fifty years ago. What arguably makes work nowadays worse is the present distribution of status and the increasing severity of the class system, which is a very complex issue tied to all sorts of social changes that have occurred in the meantime. But this topic is probably too ideologically sensitive on multiple counts to discuss productively on a forum like LW.
I agree that even a post-scarcity society would need some form of employment to determine status and so on. But that seems irrelevant to the current problem: one where even people who are not interested in status need to work long hours in unpleasant conditions just to pay for food, housing, and medical costs, and where ease of access to these goods hasn't kept pace with technological advances.
And although I don't think it's quite related, I am less pessimistic than you about the ability of a post-scarcity society to deal with land and status issues. Land is less zero-sum than the finitude of the earth would suggest, because most people are looking not for literal tracts of land but for a house in which to live, preferably spacious—building upward, or downward as the case may be, can alleviate this pressure. I'm also not convinced that being near other people is as big a problem as you make it out to be: a wealthier society would have better transportation, and cities have enough space to expand outward (giving people access to other humans on at least one side) almost indefinitely. There will always be arbitrarily determined "best" neighborhoods that people can compete to get into, but again, this is a totally different beast from people having to struggle to have any home at all.
I think a genuinely post-work society would have its own ways of producing status, based on hobbyist communities, social interaction, and excellence at arts/scholarship/sports/hobbies; the old European nobility was able to handle its internal status disputes in this way, though I don't know how much of that depended on them knowing in the back of their minds that they were all superior to the peasantry anyway.
Agreed that the class system is an important and relevant issue here.
I agree that even a post-scarcity society would need some form of employment to determine status and so on. But that seems irrelevant to the current problem: one where even people who are not interested in status need to work long hours in unpleasant conditions just to pay for food, housing, and medical costs, and where ease of access to these goods hasn't kept pace with technological advances.
But that’s not the case in the modern developed world. If you are really indifferent to status, you can easily get enough food, housing, and medical care to survive by sheer freeloading. This is true even in the U.S., let alone in more extensive welfare states.
Of course, completely forsaking status would mean all sorts of unpleasantness for a typical person, but this is only because we hate to admit how much our lives revolve around zero-sum status competitions after all.
I think a genuinely post-work society would have its own ways of producing status, based on hobbyist communities, social interaction, and excellence at arts/scholarship/sports/hobbies; the old European nobility was able to handle its internal status disputes in this way, though I don't know how much of that depended on them knowing in the back of their minds that they were all superior to the peasantry anyway.
Don’t forget about the status obtained from having power over others. That’s one part of the human nature that’s always dangerous to ignore. (The old European nobility was certainly not indifferent to it, and not just towards the peasants.)
Also, there would always be losers in these post-work status games who could improve their status by engaging in some sort of paid work and saving up to trade for the coveted status markers. These tendencies would have to be forcibly suppressed to prevent a market economy with paid labor from reemerging. It’s roughly analogous to the present sexual customs and prostitution. Men are supposed to find sexual partners by excelling in various informal, non-monetary status-bearing personal attributes, but things being zero-sum, many losers in this game find it an attractive option to earn money and pay for sex instead, whether through out-and-out prostitution or various less explicit arrangements.
But that’s not the case in the modern developed world. If you are really indifferent to status, you can easily get enough food, housing, and medical care to survive by sheer freeloading. This is true even in the U.S., let alone in more extensive welfare states.
I’m not sure this is true; I know little about welfare politics, but I was under the impression there was a major shift over the last ten years toward limiting the amount of welfare benefits available to people who are “abusing the system” by not looking for work.
One could probably remain alive for long periods just by begging and being homeless, but this raises the question of what, exactly, is a “life worth living”, such that we could rest content that people were working because they enjoy status competitions and not because they can’t get a life worth living without doing so.
This is probably way too subjective to have an answer, but one thing that “sounds right” to me is that the state of nature provides a baseline. Back during hunter-gatherer times we had food, companionship, freedom, et cetera without working too hard for them (the average hunter-gatherer only hunted-gathered a few hours a day). Civilization made that kind of lifestyle impossible by killing all the megafauna and paving over their old habitat, but my totally subjective meaningless too-late-at-night-to-think-straight opinion is that we can’t say that people can opt-out of society and still have a “life worth living” unless they have it about as good as the hunter-gatherers they would be if society hadn’t come around and taken away that option.
The average unemployed person in a developed country has a lot of things better than hunter-gatherers, but just the psychological factors are so much worse that it’s no contest.
The specific situation in the U.S. or any other individual country doesn’t really matter for my point. Even if I’m wrong about how easy freeloading is in the U.S., it’s enough that we can point to some countries whose welfare systems are (or even just were at some point) generous enough to enable easy freeloading.
Ironically, in my opinion, in places where there exists a large underclass living off the welfare state, it is precisely their reversion to the forager lifestyle that mainstream society sees as rampant social pathology and terrible deprivation of the benefits of civilized life. I think you're committing the common error of idealizing the foragers. You imagine them as if you and a bunch of other highly intelligent and civilized people had the opportunity to live well with minimal work. In reality, however, the living examples of the forager lifestyle correctly strike us as frightfully chaotic, violent, and intellectually dead.
(Of course, it’s easy to idealize foragers from remote corners of the world or the distant prehistory. One is likely to develop a much more accurate picture about those who live close enough that one has to beware not to cross their path.)
You are not wrong about "freeloading," though that term is probably unnecessarily pejorative. The developed world is so obscenely wasteful that it is not necessary to beg. You can get all the food you want, much of it very nice (often much nicer than you could afford to buy), simply by going out and picking it up. Of course, you don't get to pick and choose exactly what you want when you want it.
Clothing, with the exception of jeans, is all freely available. The same is true of appliances, bedding, and consumer electronics of many kinds. The one commodity that is very, very difficult to get at no cost is lodging. You can get books, MP3 players, CDs, printers, scanners, and often gourmet meals, but lodging is tough. Why is housing qualitatively different from the other things I've cited? While it is technically illegal to dustbin-dive, in practice dustbin-diving is easy to do and extremely low risk, incredibly so in the UK if you get a dustbin key (easy to do).
However, the authorities take a very dim view of vagrancy, and they will usually ticket or arrest the person who either has a "failure to account" or is clearly living in a vehicle or on the street. This is less true in the UK than in the US. However, get caught on the street as a vagrant AND as a foreigner in the UK (or in the US, or in any developed country) and you are in a world of hurt—typically you will be deported with prejudice and be unable to re-enter the country either "indefinitely" or for some fixed period of time.
If you can swing lodging, then the world is your oyster (for now). I travel with very little, and within 2 weeks of settling on a spot in a large city, I have cookware, flatware, clothing, a CD player, a large collection of classical CDs, and just about anything else I want to go looking for. There is an art to it, but the waste is so profligate that it is not hard to master, and absolutely no begging is required (except for lodging ;-))
Speaking from a lifetime of experience on welfare in the US (I’m disabled, and have gotten work from time to time but usually lost it due to factors stemming either from said disability, or the general life instability that poverty brings with it), your impressions are largely correct.
I’m not sure this is true; I know little about welfare politics, but I was under the impression there was a major
shift over the last ten years toward limiting the amount of welfare benefits available to people who are
“abusing the system” by not looking for work.
What I’d say is that the shift (and it’s been more like the last forty years, albeit the pace has picked up since Reagan) is towards “preventing abuse” as a generic goal of the system; the result has been that the ability to deliver the services that ostensibly form the terminal goal of welfare-granting organizations is significantly diminished—there’s a presumption of suspicion the moment you walk in the door. Right now, SSI applicants are auto-denied and have to appeal if they want to be considered at all, even if all their administrative ducks are otherwise in a row; this used to be common practice, but now it’s standard.
This also means that limits are fairly low. I can’t receive more than 40 dollars a month in food stamps right now because my apartment manager won’t fill out a form on my behalf stating the share of rent and other services I pay in my unit. He has an out; he’s not involved in the household finances. But without that in writing, from that person, the office presumes that since I have roommates declared, my share of the household expenses is zero, ergo I’m entitled to the minimum allowable (they can’t just deny me since I’m on SSDI).
And having been homeless for a little while (thankfully a friend helped me get the down payment on a place I could just barely afford), yeah... Vladimir_M's comments are based more on rhetoric than substance. One thing I observe is that many people who are long-term impoverished or homeless (self included) will project a bit of being inured to status as a way of just securing ourselves some dignity in our interactions with others—but nobody in that situation could miss how deeply that status differential cuts whenever it's used against us, even implicitly, in the way people just ignore or dismiss us.
As luck would have it, I have some limited experience with living for periods of about a month at a time in a household where we gathered about 80 percent of the food we ate (no exaggeration). We were rich in what the land around us offered, rich in the basic assets needed to make use of it, and rich in the ability to keep ourselves entertained and occupied during our copious free time.
I could easily see the typical hunter-gatherer experience being very, very good. Certainly, I'd rather be financially and materially poor under the conditions I described above than in my present circumstances.
Then what limited the growth of forager peoples so substantially? There had to be a mechanism to prevent them from exceeding their region's carrying capacity. If a tribe of 50 people grew at a rate of 1% a year for 2000 years, there would be about 22 billion people in it. Clearly that didn't happen; in fact there have been massive die-offs from starvation due to cyclical climate change, or to resource warfare (sometimes fought to extinction) between neighboring tribes.
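The compounding arithmetic here is easy to check; a minimal sketch, using the hypothetical figures from the comment above (a tribe of 50 growing at 1% a year):

```python
# Compound growth: 50 people at 1% per year for 2000 years.
initial, rate, years = 50, 0.01, 2000
population = initial * (1 + rate) ** years
print(f"{population:.2e}")  # ~2.20e+10, i.e. roughly 22 billion
```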
I could easily see the typical hunter-gatherer experience being very, very good. Certainly, I'd rather be financially and materially poor under the conditions I described above than in my present circumstances.
You cannot be considered financially and materially impoverished if you have access to abundant natural resources. Never mind whether you own them or can enforce the exclusive status of your rights to them—if you have those resources available to you, they at least count as cash flow, if not assets.
Limited access to limited resources is far more typical, and life is not so leisurely when you spend every hour of daylight working to procure food that still isn't enough to provide for you and your family. That is also the state of nature, and it was a situation that a great many people have found themselves in for the brief time that they managed to survive it.
Limited access to limited resources is far more typical, and life is not so leisurely when you spend every hour of daylight working to procure food that still isn't enough to provide for you and your family. That is also the state of nature, and it was a situation that a great many people have found themselves in for the brief time that they managed to survive it.
That...actually doesn’t represent the human condition for most of our ancestral history, nor the current state of surviving forager peoples for the most part.
Resources are limited, but you only need about 15 hours of work a week per hunter-gatherer individual devoted to food-producing activities. Overdo that and you may well tax your ecosystem past carrying capacity. This is why foragers wander a migratory circuit (although they tend to keep to a known, fixed route) or live in areas where there's sufficient ecological abundance to allow for a sedentary lifestyle while still using hunter-gatherer strategies. It's also why they tended to have small populations. Scarcity was something that could happen, but that's why people developed food preservation technologies and techniques that you can assemble with nothing more than accumulated oral tradition and some common sense. Tie a haunch of meat down to some stones and toss it down to the bottom of a cold lake. That meat will keep for months, longer if the lake freezes over. It'll be gamy as hell, but you won't starve—and this is a pretty typical solution in the toolkit of prehistoric humans from Northern regions. Drying, salting (sometimes using methods that would squick you—one branch of my ancestors comes from a culture that used to preserve acorns by, kid you not, burying them in a corner of the home and urinating over the cache), chemical preservation, favoring foods that store well long-term in the first place, fermentation, and a flexible diet are all standard knowledge.
In the American Southwest (a hot, harsh, dry, and ecologically poor climate), Pueblo people and many others used to rely on the seasonal abundance of Mormon crickets for protein. You can gather eighteen pounds of them an hour when they pass through, basically just by walking around and picking up bugs. The nutritional profile beats the hell out of any mammal meat, and they can be preserved like anything else. Think about that for a second—one person, in one hour, can provide enough of these bugs to feed an entire village for a day, or their own household for weeks (and that's without preservation). It's not desperation; it's a sound food-gathering strategy, and a lot more palatable when you don't come from a culture that treats insects as a culinary taboo.
Starving to death is more of an issue for low-tech pastoralists and agriculturalists—people who use just a small fraction of the available edible resources to support populations that couldn't support themselves by foraging on those same resources. The relationship of effort to output for them is linear: work your farm harder, get more food in proportion—and you need to run a surplus every year in most cases, because there is non-negotiable downtime during which it's going to be hard to switch to another food source (and even if you do, you'll be competing with your neighbors for it).
In my own case, I've taken part in a family of five supplying itself, with only a few culturally-specific dietary staples (powdered milk, spices, flour, rice, things that we could easily have done without had they not been available), doing most of its food production by just going out and getting it somewhere within a mile of home. Clams, squid, and oysters were for storing (done with a freezer or by canning with water and salt) and cooking up into dishes we could eat for the rest of the month; small fish were gathered day by day, large fish stored (one salmon or sturgeon can feed five people for over a month when you have a freezer), crabs and similar gathered on a case-by-case basis. I personally wasn't fond of frog legs, but a nearby pond kept up with a whole lot of demand for frogs in my family and others. We never bothered with anything like deer or bird hunting, but we'd gather berries, tree fruits (apple, plum, pear) and mushrooms, grow garden veggies, and basically just keep ourselves supplied.
I’m not saying everyone on Earth could switch back today—heck no. A whole lot of people would starve to death after destroying the ecosystems they need. But my ancestors lived in that place for thousands of years and starving to death was not a common experience among them, because they weren’t used to the population densities that only come with intensive agriculture. And there are people descended from foragers of even more remote and desolate climes—some of them STILL living that way—who can say the same thing.
Then what limited the growth of forager peoples so substantially? There had to be a mechanism to prevent them from exceeding their region's carrying capacity. If a tribe of 50 people grew at a rate of 1% a year for 2000 years, there would be about 22 billion people in it. Clearly that didn't happen; in fact there have been massive die-offs from starvation due to cyclical climate change, or to resource warfare (sometimes fought to extinction) between neighboring tribes.
Then what limited the growth of forager peoples so substantially?
I am so glad you asked, because the answer to your question reveals a fundamental misapprehension you have about forager societies and indeed, the structure and values of ancestral human cultures.
The fact is that forager populations don’t grow as fast as you think in the first place, and that across human cultures still living at or near forager methods of organization, there are many ways to directly and indirectly control population.
It starts with biology. Forager women reach menarche later, meaning they're not fertile until later in life. Why? Largely, it's that they tend to have much lower body fat percentages, due to diet and the constant exercise of being on the move, and body fat is critical for sustaining a pregnancy, or even for ovulating in the first place once you've reached the (much higher) age where you can do that. Spontaneous abortions or resorption of the fetus are rather common. Women in an industrial-farming culture attain menarche quite a bit earlier and are more likely to be fertile throughout their active years—it only looks normal to you because it's what you're close to. So right out of the gate, forager women are less likely to get pregnant, and less likely to stay that way if they do.
Next biological filter: breastfeeding. Forager women don’t wean their children onto bottles and then onto solid food the way you experienced growing up. Breastfeeding is the sole means for a much longer period, and it’s undertaken constantly throughout the day—sleeping with the baby, carrying them around during the daily routine. It goes on for years at a time even after the child is eating solid food. This causes the body to suppress ovulation—meaning that long after you’re technically able to get pregnant again, the body won’t devote resources to it. All the hormonal and resource-delivery cues in your body point to an active child still very much in need of milk! Not only that, but it’s routine in many such societies for women to trade off breastfeeding duty with one another’s children—the more kids there are, the more likely it is that every woman in the proximate social group will have moderately suppressed fertility. It’s a weak effect, but it’s enough to lengthen the birth interval considerably. In the US, a woman can have a baby just about every year—for modern-day foragers, the birth interval is often two to five years wide. It’s harder to get pregnant, and once you do, the kids come more slowly.
The next layer is direct means of abortion. In the US that tends to be pretty traumatic if it's not performed by a medical specialist. In some cases it still is for forager women—the toolkit of abortifacients across all human cultures is very wide. Midwives and herbalists often have access to minimally-invasive methods, but they also often have painful or dangerous ones. What you won't find is many that are truly ineffective. Methods range from the unpleasant (direct insertion of some substance to cause vaginal bleeding and fetal rejection), to the taxing or dangerous (doing hard work, lifting heavy objects, jumping from a high place), to fasting and ingestible drugs that can induce an abortion or just raise the likelihood of miscarriage.
The last layer is infanticide (and yes, we have this too, though it’s a deprecated behavior). In all cultures that practice it it’s considered a method of last resort, and it’s usually done by people other than the mother, quickly and quietly. Forager cultures are used to having to do this from time to time, but it’s still a rare event—certainly not a matter of routine expedience.
The point I'm making is that population growth unto itself is not a goal or a value of forager societies like those every human being on earth is descended from (and which some still occupy today). Growth, as an ideological goal, is a non-starter for people living this way. Too many mouths to feed means you undercut the abundance of your lifestyle (and yes, it truly is abundance most of the time, not a desperate Malthusian war of all against all), and forager lives tend to be pretty good on the whole, filled with communitas and leisure and recreation aplenty, as long as everybody meets a modest commitment to generating food and the supporting activities of everyday life. I'm not making it out to be paradise; this is just really what it's like, day to day, to live in a small band of mostly close relatives and friends gathering food from what's available in the environment.
I’ve heard claims like these several times, but this situation where individuals voluntarily limit their reproduction for the common good can’t possibly be a stable equilibrium. It faces a coordination problem, more specifically a tragedy of the commons. As soon as even a small minority of the forager population starts cheating and reproducing above the replacement rate (by evolving either cultural memes or hereditary philoprogenitive behaviors that motivate them to do so), in a few generations their exponential growth will completely swamp everyone else. The time scales on which forager societies have existed are certainly more than enough for this process to have taken place with certainty.
In order for such an equilibrium to be stable, there would have to exist some fantastically powerful group-selection mechanism that operates on the level of the whole species. I find this strikingly implausible, and to my knowledge, nobody has ever proposed how something like that might work.
It happened in the real world, ergo the issue lies with your understanding of the system we’re talking about and not with its inability to conform to your model.
I’ve heard claims like these several times, but this situation where individuals voluntarily limit their
reproduction for the common good can’t possibly be a stable equilibrium.
You’re looking at this backwards. This is the reproductive context in which humanity evolved, and the Malthus-driven upward spiral of population and competition is the result of comparitively recent cultural shifts brought on by changing lifestyles that made it viable to do that. You don’t need to invoke group selection in the form you’re thinking of—the cultural “mutations” you’re positing can’t gain a foothold until some branch of humanity has access to a lifestyle that makes it advantageous to defect like that. Forager societies don’t have that incentive because if they overtax their resource base here and now they have to move, and for most of human prehistory (and the modern history of hunter-gatherers) the population densities were low enough that this gave the affected area time to recover, so when someone came back, things were fine again. A long-term climatic shift alters the range of viable habitats near you, but it takes something pretty darn catastrophic (more than just a seasonal or decadal shift) to entirely render a region uninhabitable to a group of size n.
The biggest filters to population growth in this system are entirely passive ones dictated by biology and resources—the active ones are secondary measures, and they’re undertaken because in a system like this, the collective good and the individual good are inextricably linked. It was a stable equilibrium for most of our evolution, and it only broke when and where agriculture became a viable option that DIDN’T immediately overtax the environment.
That’s a state of affairs that took most of human existence to come into being.
It happened in the real world, ergo the issue lies with your understanding of the system we’re talking about and not with its inability to conform to your model.
You assert these things very confidently, but without any evidence. How exactly do we know that this state of affairs existed in human prehistory?
You say:
You don’t need to invoke group selection in the form you’re thinking of—the cultural “mutations” you’re positing can’t gain a foothold until some branch of humanity has access to a lifestyle that makes it advantageous to defect like that. Forager societies don’t have that incentive because if they overtax their resource base here and now they have to move, and for most of human prehistory (and the modern history of hunter-gatherers) the population densities were low enough that this gave the affected area time to recover, so when someone came back, things were fine again.
This, however, provides no answer to the question why individuals and small groups wouldn’t defect, regardless of the subsequent collective consequences of such defection. You deny that you postulate group selection, but you keep talking in a very strong language of group selection. Earlier you asserted that “population growth unto itself is not a goal or a value of forager societies,” and now you say that “[f]orager societies don’t have that incentive.” How can a society, i.e. a group, have “values” and “incentives,” if you’re not talking about group selection? And if you are, then you need to answer the standard objection to arguments from group selection, i.e. how such group “incentives” can stand against individual defection.
I have no problem with group selection in principle—if you think you have a valid group-selectionist argument that invalidates my objections, I’d be extremely curious to hear it. But you keep contradicting yourself when you deny that you’re making such an argument while at the same time making strong and explicit group-selectionist assertions.
You assert these things very confidently, but without any evidence. How exactly do we know that this state of affairs existed in human prehistory?
Archaeological evidence regarding the health and population density of human beings and their dietary habits. Inference from surviving examples. The null hypothesis, that we didn't start with agriculture and therefore must have been hunter-gatherers for most of our existence as a species. The observation that the traits generally associated with the Malthusian trap are common experiences of agricultural societies and dependent upon conditions that don't obtain in predominantly and purely hunter-gatherer societies.
This, however, provides no answer to the question why individuals and small groups wouldn't defect, regardless of the subsequent collective consequences of such defection.
They might defect, but it'd gain them nothing. Their cultural toolkits and food-gathering strategies were dependent upon group work at a set quota which it was maladaptive to under- or overreach. An individual can't survive for long like this compared to a smallish group; a larger group will split when it gets too big for an area, and a big group can't sustainably form.
How can a society, i.e. a group, have “values” and “incentives,” if you’re not postulating group selection?
The answer to this lies in refuting the following:
As soon as even a small minority of the forager population starts cheating and reproducing above the replacement rate (by evolving either cultural memes or hereditary philoprogenitive behaviors that motivate them to do so), in a few generations their exponential growth will completely swamp everyone else.
"A small minority of the forager population" has to be taken in terms of each population group, and those are small. A small percentage of a given group might be just one or two people every handful of generations, here. A social umbrella-group of 150, scattered into bands of 10-50 throughout an area, versus just one or two people? Where's the exponential payoff? The absolute numbers are too low to support it, and the defectors are stuck with the cultural biases and methodologies they know. They can decide to get greedy, but they're outnumbered by the whole tribe, who are more than willing to provide censure or other forms of costly social signalling as a means of punishing defectors. They don't even have to kill the defectors or drive them out; the defectors are critically dependent on the group for their lifestyle. The alternative will be unappealing in the vast majority of cases.
You need the kind of population densities agriculture allows to start getting a really noticeable effect. It's not to say people don't ever become tempted to defect, but it's seldom a beneficial decision. And many cultures, such as the San in southern Africa, have cultural mechanisms for ensuring nobody's ego gets too big for its britches, so to speak: teasing and ribbing in place of praise when someone gets a big head about their accomplishments, passive reminders that they need the group more than they individually benefit it.
This isn't so much about group selection as it is about all the individuals having their rafts tied to the same ship—a group big enough to provide the necessities of life, which also provides a lot of hedonic reinforcement for maintaining that state of affairs, and a lot of non-coercive negative signalling for noncompliance, coupled with the much more coercive but morally neutral threat presented by trying to make a living in this place all by yourself.
If you break a leg in a small group, the medical practitioner splints it and everyone keeps feeding you. If you do that by yourself, it probably never heals right and the next leopard to come along finds you easy pickings. That’s what defection buys you in the ancestral environment.
a larger group will split when it gets too big for an area
Say there are two kinds of forager groups, one which limits reproduction of its members by various means, and another that does not limit reproduction and instead constantly grows and splits and invades other groups’ territories if needed. Naively I would expect that the latter kind of group would tend to drive the former kind out of existence. Why didn’t this happen?
Archaeological evidence regarding the health and population density of human beings and their dietary habits. Inference from surviving examples.
This isn’t necessarily evidence against a Malthusian equilibrium. It could be that the subsequent farmer lifestyle enabled survival for people with much poorer health and physical fitness, thus lowering the average health and fitness of those who managed to survive in the Malthusian equilibrium.
Can you give a reference that specifically discusses how a non-Malthusian situation of the foragers can be inferred from the existing archaeological evidence?
The observation that the traits generally associated with the Malthusian trap are common experiences of agricultural societies and dependent upon conditions that don't obtain in predominantly and purely hunter-gatherer societies.
This is not true. Humans are (more or less) the only species that practices agriculture, but the Malthusian trap happens to non-human animals too. As long as reproduction above the replacement rate is possible, it will happen until the resource limit is reached. (Admittedly, for animals that aren’t apex predators, the situation is more complicated due to the predator-prey dynamics.)
Regarding the foragers’ supposed cooperation on keeping the population stable, I honestly don’t see how what you write makes sense, for at least two reasons:
The defectors would not need to reproduce in blatantly extraordinary numbers. It would be enough to reproduce just slightly above the replacement rate, so slightly that it might be unnoticeable for all practical purposes. The exponential growth would nevertheless explode their population in not very many generations and lead to them overwhelming others. So even if we assume that blatantly excessive reproduction would be punished, it would still leave them more than enough leeway for "cheating." (A quick numeric sketch of this compounding follows below.)
How did this punishment mechanism evolve, and how did it remain stable? You can postulate any group selection mechanism by assuming altruistic punishment against individuals who deviate from the supposed group-optimal behavior. But you can’t just assert that such a mechanism must have existed because otherwise there would have been defection.
Moreover, you are now talking about group selection with altruistic punishment. There’s nothing inherently impossible or absurd about that, but these are very strong and highly controversial claims, which you are asserting in a confident and authoritative manner as if they were well-known or obvious.
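To put rough numbers on the "slightly above replacement" point above, here is a minimal sketch; the 1%-per-generation edge is an assumed figure for illustration, not a number from the thread:

```python
import math

# A lineage whose per-generation growth is just 1% above an otherwise
# stable population doubles its share every ~70 generations.
r = 0.01
doubling = math.log(2) / math.log(1 + r)
print(round(doubling))            # 70 generations
print(round(doubling * 25))       # ~1740 years at 25 years per generation

# Over 500 generations (roughly 12,500 years), the compounding dominates:
print(f"{(1 + r) ** 500:.0f}x its starting share")  # ~145x
```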
I’d like to remind you that the ancestral environment was not completely stable, and no one is disputing that exponentially-expansive Malthusian agriculture happened. The question is why it took as long as it did, not why it was possible at all.
For essentially the first 2 million years of our existence as a species, worldwide human population went from about 10,000 to 4 million. Given that virtually all major models of long-run human population converge very closely, and they all assume a relatively steady growth rate, we're talking a doubling period of roughly 250,000 years.
Malthus' estimates assume a doubling period of 25 years, or a single human generation. The difference is a factor of 10,000. World population simply did not grow as fast as you're assuming, and humanity did not start outstripping local carrying capacities in a major, systematic way until we'd developed technologies that allowed us to make those sorts of population growth leaps.
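As a check on those round numbers, a minimal sketch of the implied doubling period, using the figures quoted above (10,000 to 4 million people over about 2 million years):

```python
import math

# Implied doubling period for growth from ~10,000 to ~4,000,000 people
# over ~2,000,000 years.
p0, p1, span_years = 10_000, 4_000_000, 2_000_000
doublings = math.log2(p1 / p0)     # ~8.6 doublings
period = span_years / doublings
print(round(period))               # ~231,000 years per doubling
print(round(period / 25))          # ~9,000-10,000x Malthus' 25-year doubling
```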
According to Michael Kremer in "Population Growth and Technological Change: One Million B.C. to 1990", the base rate of technological change in human societies scales in proportion to population—small population, slow technological change. This means very long inferential distances to the sorts of techniques and behaviors that make agriculture a viable prospect.
You need intermediate steps, in the form of settled horticulture or nomadic pastoralism, to really concentrate the population enough to have a chance at developing agriculture in the intensive sense. Those sorts of cultural developments took a long time to come into being, and it was a gradual process at that.
So, yes, it’s true that if you grow certain grasses and just harvest their seeds reliably, grinding them into a fine powder and mixing that with water and then heating the whole mixture somehow without actually burning it in your fire directly, you can produce a food source that will unlock access to population-doubling intervals closer to the Malthusian assumption of one doubling per generation.
But that is a series of nested behaviors, NONE of which is intuitively obvious by itself from the perspective of a forager in a world full of nothing but other foragers. Which is why the entire chain took a long, long time to develop, and why agriculture was invented just a few times throughout human history.
This is not true. Humans are (more or less) the only species that practices agriculture, but the Malthusian trap happens to non-human animals too. As long as reproduction above the replacement rate is possible, it will happen until the resource limit is reached.
Termites, leafcutter ants, certain damselfish, ambrosia beetles, and certain marsh snails all practice agriculture. But yes, it’s certainly an uncommon behavior.
What if reproduction above the replacement rate isn't possible for the period of human evolution we're talking about? What if the human population simply wasn't reproducing fast enough for most of prehistory to reach the resource limit? Those are the conditions I'm suggesting here—that reaching local resource limits was not the norm for much of our evolution, due to our inherently long gestation times and strong K-selection, the inherent metabolic requirements for fertility taking a long time to satisfy compared to modern conditions, the birth interval being very wide compared to Malthusian assumptions, and the techniques of food acquisition being of necessity limited by the ease of satisfying everybody's requirements (if everyone has a full tummy and all their kids do too, going out and gathering MORE food at the expense of one's kinsmen won't do you any good anyway).
What you get is abundance—there’s room to grow, but we can only do it so fast, and when we start to reach the point where we might overtax our resource base, we’ve moved on and there weren’t enough of us using it in the first place to compromise it.
The defectors would not need to reproduce in blatantly extraordinary numbers. It would be enough to reproduce just slightly above the replacement rate, so slightly that it might be unnoticeable for all practical purposes.
That kind of statistical hackery might work in a large population, but not very well in a small one. In a group of 100 humans, ANY population gain is noticeable.
The exponential growth would nevertheless explode their population in not very many generations and lead to them overwhelming others.
Except all the evidence suggests that no such population explosion happened, which contradicts the assumption that humans must have reproduced at the fastest allowable rate. Populations doubled in a quarter-million years, not 25.
How did this punishment mechanism evolve, and how did it remain stable?
It didn’t evolve genetically, it’s a cultural punishment I’m talking about. Ju/’hoansi hunters are taken down a notch whenever they make a kill. Certain Australian aboriginal groups have meat-sharing customs where one hunter goes out and gets a kangaroo (say), and his share of the meat is the intestines or penis—the choicer cuts get distributed according to a set of other rules. Except, then people invite the hunter over to dinner; he’s not forced to actually eat crow every time he succeeds, but he’s also socially aware that he depends upon the others for it (and he gets to receive a choicer share when some other hunter makes a kill).
World population simply did not grow as fast as you’re assuming, and humanity did not start outstripping local carrying capacities in a major, systematic way until we’d developed technologies that allowed us to make those sorts of population growth leaps.
I don’t understand your argument here at all. Earlier you said that growth to the Malthusian limit was prevented by a cooperative strategy of restraining reproduction. Now you say that lack of food production technology was limiting population growth. But if foragers did breed up to the limit where food became the limiting resource, that’s by definition a Malthusian equilibrium.
You are also presenting a strawman caricature of Malthus. His claim about a 25-year doubling period refers to agricultural societies with an ample supply of land, such as existed in the North America of his day. He presents it as an empirical finding. When he discusses foragers, he notes that they'll reproduce to the point where they run up against the limited food supply available from foraging, which, given the low supply of food relative to farming, means a much less dense population.
Some of his discussions of foragers are actually quite interesting. He notes that among the North American hunter-gatherers, resource limitations lead to constant disputes and warfare. He also cites accounts of European explorers’ contacts with forager peoples that seem to have been on the Malthusian limit.
It didn’t evolve genetically, it’s a cultural punishment I’m talking about.
It doesn’t matter—it still needs to be explained. Humans don’t just magically develop cultural norms that solve collective action problems.
Earlier you said that growth to the Malthusian limit was prevented by a cooperative strategy of restraining reproduction.
What I said was that growth to the point of constant warfare, competition and struggle for enough food to subsist wasn’t an accurate picture of ancestral forager lifestyles.
Some of his discussions of foragers are actually quite interesting. He notes that among the North American hunter-gatherers, resource limitations lead to constant disputes and warfare.
He also says that smallpox was endemic among the Indians of all these cultures. Smallpox originated in Eurasia, thrived among farmers, and Native Americans had no immunity to it. The squalor and disease he describes these people living in are examples of the conditions they were subjected to at the hands of an invading power carrying novel biological agents their immune systems simply weren't adapted to handle. The nastiest conflicts, likewise, date from after that contact.
Warfare among Northwest Coast Natives, prior to colonization, was usually over petty disputes (that is, interpersonal ones) between peoples who had long-standing trade and treaty relationships, and only occasionally over resources (usually slaves, and the institution of slavery as it was practiced here does not compare readily with slavery as it was practiced by agriculturalists in Eurasia and Africa). The bloodier wars of the inland northwest are similarly a historical novelty, unparalleled in scope or stakes until the ravages of introduced diseases and the dislocation of various tribes by white invaders into territories they'd never been in competition for caused clashes that simply hadn't occurred at such a level of intensity prior to that point. The formation of reservations only exacerbated this—we're talking about groups with age-old rivalries who had never seen fit to exterminate one another or conquer one another's lands, but who would happily send a war canoe full of men to go steal things because of a petty vendetta between two people that started long ago.
This isn't war of extermination. Don't get me wrong: it's violent, people die, the stakes are real, but it's not a zero-sum, winner-take-all competition for survival. A direct translation out of Old Chinook from Franz Boas' ethnography, regarding the rules of warfare, should make this clearer:
“Before the people go to war they sing. If one of them sees blood, he will be killed in battle. When two see blood, they will be killed. They finish their singing. When they sing, two long planks are put down parallel to each other. All the warriors sing. They kneel [on the planks]. Now they go to war and fight. When people of both parties have been killed, they stop. After some time the two parties exchange presents and make peace. When a feud has not yet been settled, they marry a woman to a man of the other town and they make peace.”
The fight ends when both sides have taken casualties. The opposing sides exchange gifts and make peace. They resolve outstanding feuds by diplomatic marriage. This is the Chinook idea of war, the way it was practiced with all but their very worst enemies (who lived rather a long way from Chinook territory—the Quileute weren’t exactly next door given the pace of travel in those days, and even then the wars between them were not genocidal in intent). This is completely different from war as most Eurasian-descended cultures knew it. And it was typical of forager warfare in North America before Columbus showed up.
Malthus, in looking at the conditions of North American natives during the 19th century, reports on the dire conditions of a people devastated by introduced diseases, direct conquest by white settlers, and the disruption of their social fabric and ways of life. These were whole culture groups pushed beyond the breaking point and very much outside their typical context, with most of their actual problems being direct effects of colonization.
Malthus, in looking at the conditions of North American natives during the 19th century, reports on the dire conditions of a people devastated by introduced diseases, direct conquest by white settlers, and the disruption of their social fabric and ways of life.
Some of the accounts presented by Malthus were given by very early explorers and adventurers who ended up deep in unexplored territory, far ahead of European conquest and colonization. For example, the one by Cabeza de Vaca would be circa 1530.
The only way these societies could have already been devastated is if epidemics had ravaged the whole continent immediately in the first decades after the first Europeans landed, ahead of any European contact with the inland peoples. I don’t know enough about the relevant history to know how plausible this is, but even if it happened, there are two problems with your claim:
Diseases wouldn’t cause famine, at least in the long run. These early explorers describe peoples who had problems making ends meet during bad seasons due to insufficient food, and who fought bitterly over the existing limited supply. If the population had already been thinned down by disease by the time they came, we’d expect, if anything, the per capita food supply from foraging to be greater than before.
If even the earliest accounts are of devastated societies, then how do we know anything about the better life they led before that? Where does this information come from? You cite an ethnography by Boas, who was born in 1858, as authoritative, but dismiss a compilation of far older accounts compiled by Malthus in the early 19th century.
Smallpox emerged in the Old World around 10,000 BC and is believed to have originated via cattle farming. It reached very high concentrations in Europe and became a common plague there; it was spread around the world, to peoples who had never encountered it, by European exploration and conquest. It and other Old World diseases spread very rapidly among American native populations, rendering whole cultures extinct and reducing others to scattered survivors often incapable of rebuilding. The total population of the Americas lost to European diseases after the arrival of Columbus and Cortez is estimated at 90 to 95 percent.
Given that many Native nations were at least modestly dependent on agriculture (the Iroquois, Navajo, Aztecs, Incas, Mississippians—indeed, most of the well-known groups), such population losses coming so quickly are nothing short of catastrophic. Most of your resource base collapses, because one person is going to have to work MUCH harder to provide enough food for themselves—fields go unplanted, vegetables don't get tended, wild game is much more dangerous to hunt by oneself, and one cannot expect any assistance with gathering. Even a small number of people used to an agriculture-enriched lifestyle are going to be hit much harder.
It's also worth noting that Cabeza de Vaca actually described the Coahuiltecs as a healthy and prosperous people—and ant eggs, lizards and so on were just normal parts of their diet. Ant eggs in particular are STILL a cultural delicacy among the Latino groups descended from the Coahuiltecs (escamole tacos, anyone?). Diet adapts to local circumstances.
The only way these societies could have already been devastated is if epidemics had ravaged the whole continent immediately in the first decades after the first Europeans landed, ahead of any European contact with the inland peoples.
That is precisely what happened. One infected slave from Spanish-held Cuba is believed to have been the Patient Zero who transmitted an infection that would go on to wipe out about fifty percent of the Aztec population. Hernando de Soto, exploring the southeast, encountered many towns and villages abandoned just two years prior, when most of their inhabitants died of the plagues. Isolated survivors often just abandoned their homes outright, since in many cases a handful of people or even a single survivor were all that was left out of a village of hundreds or thousands. Neighbors who showed up, unaware of what happened, might contract disease from the corpses in some cases, or simply welcome in the survivors, who'd start the cycle anew. North America had extensive trade routes linking all major regions, from coast to coast. Foot and boat traffic carried diseases quite far from their initial outbreak sites.
If even the earliest accounts are of devastated societies, then how do we know anything about the better life they led before that?
Because they're not all dead, and they left their own records of what happened, and there are records of contact with them in much better conditions*, and there are still plenty of Native people alive today, who often know rather more about said records of their lives before than the typical Euro-American? And because it's generally acknowledged within anthropological, archaeological and historical fields now that modern research bears out a picture of generally healthy, sustainable populations for most of the foragers of the Americas? And quite large, complex societies that were generally not recognized as such by early Anglo scholars of the matter?
(*Malthus seriously misrepresents Cabeza de Vaca's case—the Floridians were in a bad way, but they were also right next door to Spanish early conquest—his accounts of the Coahuiltecs of coastal and inland Texas describe them as a healthy and prosperous people...and their descendants STILL enjoy ant eggs as a dietary item; you don't have to be desperate to eat insects, and many human groups actively enjoy it.)
Where does this information come from? You cite an ethnography by Boas, who was born in 1858, as authoritative, but dismiss a compilation of far older accounts compiled by Malthus in the early 19th century.
Boas actually travelled to the civilizations he wrote about, lived among them, recorded their oral traditions and analyzed their languages, investigated their history and their environmental circumstances. For many people, especially in the Northwest, far North and other relatively late-contacted areas, these events occurred within the living memory of their elders.
Malthus wasn't an expert on Native American civilizations or history, and basically went with the prevailing account available at the time. He relied on a consensus that wasn't yet understood to be false. So I reject Malthus' picture of pre-Columbian America for the same reason I reject Lysenko's account of evolution. The difference is that Malthus was an influential thinker in the development of Western thought, and his role means that a lot of people who agree with the insights he did make are unwittingly buying into cached arguments about related subjects (often ones that don't support his case) which hadn't yet been recognized as false when Malthus wrote in the first place.
Scholarship in the field since Malthus' time has seriously changed the outlook—Charles C. Mann and Jared Diamond are good, accessible sources for a summary overview ("1491" and "Guns, Germs, and Steel"). If I seem to be vague, it's mostly because this is domain-specific knowledge that's not widely understood outside the domain, but as a domain insider I can say it's fairly basic stuff.
And because it’s generally acknowledged within anthropological, archaeological and historical fields now that modern research bears out a picture of generally healthy, sustainable populations for most of the foragers of the Americas?
How exactly does this modern research reconstruct the life of American foragers centuries ago, and based on what evidence? Could you cite some of this work? (I’d like to see the original work that presumably explains its methodology rigorously, not popular summaries.)
Malthus seriously misrepresents Cabeza de Vaca's case—the Floridians were in a bad way, but they were also right next door to Spanish early conquest—his accounts of the Coahuiltecs of coastal and inland Texas describe them as a healthy and prosperous people...
On closer inspection, it turns out that de Vaca’s description cited by Malthus actually refers to a people from southeastern Texas, not Florida. So while Malthus apparently mixed up the location by accident, his summary is otherwise accurate. Your above claims are therefore completely incorrect—the description is in fact of a people from Texas, living very far from the boundary of Spanish conquest at the time.
For reference, I quote de Vaca’s account at length (all emphasis mine):
Castillo and Estevanico went inland to the Iguaces. [...] Their principal food are two or three kinds of roots, which they hunt for all over the land; they are very unhealthy, inflating, and it takes two days to roast them. Many are very bitter, and with all that they are gathered with difficulty. But those people are so much exposed to starvation that these roots are to them indispensable and they walk two and three leagues to obtain them. Now and then they kill deer and at times get a fish, but this is so little and their hunger so great that they eat spiders and ant eggs, worms, lizards and salamanders and serpents, also vipers the bite of which is deadly. They swallow earth and wood, and all they can get, the dung of deer and more things I do not mention; and I verily believe, from what I saw, that if there were any stones in the country they would eat them also. They preserve the bones of the fish they eat, of snakes and other animals, to pulverize them and eat the powder. [...] Their best times are when “tunas” (prickly pears) are ripe, because then they have plenty to eat and spend the time in dancing and eating day and night. [...] While with them it happened many times that we were three or four days without food. Then, in order to cheer us, they would tell us not to despair, since we would have tunas very soon and eat much and drink their juice and get big stomachs and be merry, contented and without hunger. But from the day they said it to the season of the tunas there would still elapse five or six months, and we had to wait that long.
Also, regarding this:
Boas actually travelled to the civilizations he wrote about, lived among them, recorded their oral traditions and analyzed their languages, investigated their history and their environmental circumstances. For many people, especially in the Northwest, far North and other relatively late-contacted areas, these events occurred within the living memory of their elders.
Earlier you claimed that the native population of the entire American continent was devastated by epidemics immediately after the first European contacts in the late 15th/early 16th century, so that even the accounts of very early European explorers who traveled deep into the continent ahead of European colonization do not present an accurate picture of the native foragers’ good life they had lived before that. But now you claim that in the late 19th century, this good life was still within living memory for some of them.
It seems like you’re accepting or discounting evidence selectively. I can’t believe that all those accounts cited by Malthus refer to societies devastated by epidemics ahead of European contact, while on the other hand the pre-epidemic good times were still within living memory for the people studied by Boas centuries later.
I reject Malthus’ picture of pre-Columbian America for the same reason I reject Lysenko’s account of evolution.
Lysenko was motivated by politics. Boas was motivated by politics.
Physics improves, but history deteriorates. Those writers closest to events give us the most accurate picture, while later writers merely add political spin. Since 1830, history has suffered increasingly drastic, frequent, and outrageous politically motivated rewrites, and has become more and more subject to a single monolithic political view, uniformly applied to all history books written in a particular period.
If you read old histories, they explain that they know such and such because of such and such. If you read later histories, then when they disagree with older histories and you check the evidence cited by the older histories, you usually find that the newer histories are making stuff up. The older history says X said Y, and quotes him. The newer history says that X said B, and fails to quote him, or fails to quote him in context, or just simply asserts B, without any explanation as to how they can possibly know B.
Most of your resource base collapses because one person is going to have to work MUCH harder to provide enough food for themselves—fields go unplanted, vegetables don’t get tended, wild game is much more dangerous to hunt by oneself, and one cannot expect any assistance with gathering. Even a small number of people used to an agriculture-enriched lifestyle are going to be hit much harder.
Both Clark and Tainter (The Collapse of Complex Societies) disagree with this claim as stated. A massive reduction in the population means that the survivors get increased per-capita wealth, because the survivors move way back along the diminishing-marginal-returns curve and now have more low-hanging fruit (sometimes literally). In fact, Tainter argues that complexity often collapses because the collapse is the only way to increase per-capita wealth. Hunter-gatherers spend much less time per calorie than do advanced agriculturalists, e.g.:
The surprise here is that while there is wild variation across forager and shifting cultivation societies, many of them had food production systems which yielded much larger numbers of calories per hour of labor than English agriculture in 1800, at a time when labor productivity in English agriculture was probably the highest in Europe. In 1800 the total value of output per man-hour in English agriculture was 6.6 pence, which would buy 3,600 kilocalories of flour but only 1,800 kilocalories of fats and 1,300 kilocalories of meat. Assuming English farm output was then half grains, one-quarter fats, and one-quarter meat, this implies an output of 2,600 calories per worker-hour on average. Since the average person ate 2,300 kilocalories per day (table 3.6), each farm worker fed eleven people, so labor productivity was very high in England. Table 3.13 shows in comparison the energy yields of foraging and shifting cultivation societies per worker-hour. The range in labor productivities is huge, but the minimum average labor productivity, that for the Ache in Paraguay, is 1,985 kilocalories per hour, not much below England in 1800. The median yield per labor hour, 6,042 kilocalories, is more than double English labor productivity.
Or
...ranging from a modest 1,452 kilocalories per person per day for the Yanomamo of Brazil to a kingly 3,827 kilocalories per person per day for the Ache of Paraguay. Some of this is undoubtedly the result of errors in measuring food consumption. But the median is 2,340, implying that hunter-gatherers and subsistence agriculturalists ate as many calories as the median person in England or Belgium circa 1800. Primitive man ate well compared with one of the richest societies in the world in 1800. Indeed British farm laborers by 1863 had just reached the median consumption of these forager and subsistence societies.
(Quotes brought to you by my Evernote; it’s a pain in the ass to excerpt all the important bits from a book, but it certainly pays off later if you want to cite it for various assertions.)
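For anyone who wants to check Clark’s arithmetic in that first quote, here’s a quick sketch in Python (the ~10-hour workday is my own assumption; the excerpt implies it but never states it):

```python
# Quick check of Clark's arithmetic in the quote above. The ~10-hour workday
# is my assumption; the excerpt implies it but doesn't state it.
kcal_per_6p6_pence = {"flour": 3600, "fats": 1800, "meat": 1300}
value_shares = {"flour": 0.50, "fats": 0.25, "meat": 0.25}  # half grains, quarter fats, quarter meat

# Output per worker-hour (worth 6.6 pence) as a value-weighted average of
# the calories that 6.6 pence buys of each good.
kcal_per_hour = sum(value_shares[g] * kcal_per_6p6_pence[g] for g in value_shares)
print(kcal_per_hour)  # 2575.0, which Clark rounds to 2,600

kcal_per_person_per_day = 2300
hours_per_day = 10  # assumed
print(kcal_per_hour * hours_per_day / kcal_per_person_per_day)  # ~11.2 people fed per worker
```

So the “eleven people fed per farm worker” figure checks out, given a workday of roughly ten hours.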
Some quotes from Clark’s A Farewell to Alms (he also covers the very high age of marriage in England as one way England held down population growth):
Fertility was also probably high among the precontact Polynesians. Sexual activity among women was early and universal. Why then was Tahiti such an apparent paradise to the visiting English sailors, rather than a society driven to the very subsistence margin of material income, as in Japan? The answer seems to be that infanticide was widely practiced...The estimates from the early nineteenth century are that between two-thirds and three-quarters of all children born were killed immediately...One sign of the practice of infanticide was the agreement by most visitors that there were more men than women on the islands. …In preindustrial China and Japan the gender ratio of the population shows that there was significant female infanticide. In these Malthusian economies infanticide did raise living standards.
An additional factor driving down birth rates (and also of course driving up death rates) was the Chinese practice of female infanticide. For example, based on the imbalance between recorded male and female births an estimated 20–25 percent of girls died from infanticide in Liaoning. Evidence that the cause was conscious female infanticide comes from the association between the gender imbalance of births and other factors. When grain prices were high, more girls are missing. First children were more likely to be female than later children. The chance of a female birth being recorded for later children also declined with the numbers of female births already recorded for the family. All this suggests female infanticide that was consciously and deliberately practiced.
…
Female infanticide meant that, while nearly all women married, almost 20 percent of men never found brides. Thus the overall birth rate per person, which determines life expectancy, was reduced. The overall birth rate for the eighteenth century is unclear from the data given in this study, but by the 1860s, when the population was stationary, it was around 35 per thousand, about the same as in preindustrial Europe, and less than in many poor countries today. Earlier and more frequent marriage than in northwestern Europe was counteracted by lower marital fertility and by female infanticide, resulting in equivalent overall fertility rates.
Just to be clear, and so everyone knows where the goalposts are: as per the definition here: http://en.wikipedia.org/wiki/Hunter-gatherer , a forager society relies principally or entirely on wild-gathered food sources. Modern examples include the Pila Nguru, the Sentinelese of the Andaman Islands, the Pirahã, the Nukak, the Inuit until the mid-20th century, the Hadza of East Africa, the San of southern Africa, and others.
To those not deeply familiar with anthropology, this can lead to some counterintuitive cases. The Yanomamo, who depend mainly on domesticated bananas supplemented by hunting and fishing, aren’t foragers in the strict sense. The modern Maya, and many Native American groups in general, weren’t pure foragers. The Salish and Chinook peoples of the Pacific Northwest of the United States were sedentary foragers.
The Polynesians and Chinese of those periods were not foragers—both societies practiced extensive agriculture supplemented by hunting and gathering, as in preindustrial Europe.
My apologies—I skimmed rather than read in detail and missed the purpose of your comment. Reply left up anyway, since it may clarify terminology and definitions re: foragers for anyone who happens upon the thread later. Thank you for clarifying!
Well that is certainly a lot for me to learn more about. Sorry I missed this post. How much of this has been directly observed in modern forager societies versus inferences from archaeology?
There’s a lot of other studies about different passive fertility in forager groups that bear out the cross-cultural applicability of the San studies as well. Forgot to add that.
The bits about breastfeeding and the other biological limiting factors (the indirect controls, basically) came to light during Richard Lee’s fieldwork with the San and Ju/’hoansi peoples of southern Africa in the 1960s.
The bit about active measures is available if you peruse the anthropological literature on the subject (I don’t have a specific citation in mind), and the sort of thing covered in introductory classes to the field—it’s common knowledge within that domain.
As to resource warfare, it’s a non-starter for most foragers. You walk away, or you strike an agreement about the use of lands. There are conflicts anyway, but they’re infrequent—the incentive isn’t present to justify a bloody battle most of the time. And it doesn’t come up as often as you think, either, because as I’ve stated, forager populations don’t grow as quickly (they tend to stay around carrying capacity when different groups are summed over a given area) and indeed, devote active effort to keeping it that way, which supplements the tremendous passive biases in favor of slow growth.
Where it does come into prominence is with low-tech agriculturalists, pastoralists and horticulturalists. Those people have something to fight over (a stationary, vulnerable or scarce landbase, that rewards their effort with high population growth and gives incentive to expand or lock down an area for their exclusive use).
Sorry, I don’t see where you do. Food preservation techniques, migratory habits, and gathering crabs or berries don’t tell me anything at all about how people avoided population growth.
If you are really indifferent to status, you can easily get enough food, housing, and medical care to survive by sheer freeloading. This is true even in the U.S.,
I don’t know how you’re using the word “easily”, then. Do you classify all forms of social interaction as easy?
Well, “easy” is clearly a subjective judgment, and admittedly, I have no relevant personal experience. However, it is evident that large numbers of people do manage to survive on charity and the welfare state without any employment, and many of them don’t seem to invest any special efforts or talents in this endeavor.
In any case, my original arguments hold even if we consider only rich countries with strong welfare states, in which it really is easy, in every reasonable sense of the term, to survive by freeloading. These certainly hold as examples of societies where no work is necessary to obtain food, housing, medical care, and even some discretionary income, and yet status concerns still motivate the overwhelming majority of people to work hard.
I don’t know about race, but I did read a piece by a young man who viewed homelessness as a sort of urban camping. He didn’t use drugs and he didn’t beg—he found enough odd jobs.
Ten years ago I read a “news of the weird” story about a young homeless man in Silicon Valley. He earned something like $90K a year working as a junior programmer or some such occupation. He slept under a bridge, but had a bank account, mailbox, cell phone, laptop and gym subscription. He worked out and showered at the gym every morning before work. He socked away lots of money and spent a lot of his free time surfing the internet at a coffee shop or other hangout. The reason the story got picked up is that his parents or someone in his family was trying to get him committed for psychiatric treatment. It’s bolder and more daring than most people would be, but that behavior in and of itself doesn’t really sound crazy to me.
Of course, completely forsaking status would mean all sorts of unpleasantness for a typical person, but this is only because we hate to admit how much of our lives revolves around zero-sum status competitions after all.
I agree that we hate to admit how much of our lives revolves around zero-sum status competitions. Here human modification via genetic engineering, supplements, & advanced technologies provides a potential way out, right? That we don’t like the fact that our lives revolve around zero-sum status competitions implies that there’s motivation to self-modify in the direction of deriving fulfillment from other things.
Of course there’s little historical precedent for technological self-modification and so such hypotheticals involve a necessary element of speculation, but it’s not necessarily the case that things will remain as they always have been.
Also, there would always be losers in these post-work status games who could improve their status by engaging in some sort of paid work and saving up to trade for the coveted status markers.
This is a very good point and one which I was thinking of bringing up in response to Yvain’s comment but had difficulty articulating; thanks.
That we don’t like the fact that our lives revolve around zero-sum status competitions implies that there’s motivation to self-modify in the direction of deriving fulfillment from other things.
Trouble is, once you go down that road, the ultimate destination is wireheading. This raises all sorts of difficult questions, to which I have no particularly interesting answers.
Though I know others feel differently (sometimes vehemently), aside from instrumental considerations (near guaranteed longevity & the welfare of others) I personally don’t mind being wireheaded.
My attitude is similar to the one that denisbider expresses here with some qualifications. In particular I don’t see the number of beings as so important and his last paragraph strikes me as sort of creepy.
I like this framing (I’ve almost never thought about this topic): money as status, as a measure of a socially enforced right to win competitions for resources, but with a baseline of fairness, where you can still get stuff, just less than high-status individuals (or organisations) can. Rights-based bargaining power rather than a measure of usefulness.
current problem: one where even people who are not interested in status need to work long hours in unpleasant conditions just to pay for food, housing, and medical costs, and where ease of access to these goods hasn’t kept pace with technological advantages.
This seems like a good place to point out the US-centrism issue, as mentioned at http://lesswrong.com/r/discussion/lw/6qr/lw_systemic_bias_us_centrism/ . Many countries do have safety nets that, while not enough for actual comfort at the current tech level, still make plain survival a non-issue, and to some degree provide higher things through institutions like public libraries, where you’ll often be able to access the internet.
What additionally complicates things is that habitable land is close to a zero-sum resource for all practical purposes, since to be useful, it must be near other people. Thus, however wealthy a society gets, for a typical person it always requires a whole lot of work to be able to afford decent lodging
Housing need not be as scarce as land, if regulatory permission for tall buildings and good transport networks exist. There is a lot of variation on this dimension already today. Automated mining, construction and cheap energy could make sizable individual apartments in tall buildings cheap, not to mention transport improvements like robocars.
I agree that the situation can be improved that way, though it’s arguable how much it runs up against the problem that packing people tightly together increases discomfort and severely lowers status. But even with very optimistic assumptions, I think it’s still the case that housing can never become non-scarce the way food and clothing could (and to a large degree already have). There is in principle no limit to how cheaply mass-produced stuff can be cranked out, Moore’s law-style, but this clearly can’t work anywhere near as effectively for housing.
Another possibility that would reduce the effective cost of housing would be small scale distributed manufacturing (I’m thinking Drexler/Merkle nanotech here). That would mean that most goods would not need to travel, they would be “printed” locally. There are exceptions for goods which require uncommon atoms, which would still need transport. (To be a bit more explicit: I’m trying to weaken the “near other people” restriction. As is, a lot of what we exchange with other people is information, and we ship that around globally today. Goods are another major category, which I commented on above. Physical contact is a third category, but a lot of that is limited to family members in the same household anyway.)
One factor which hasn’t been directly discussed is that housing, while partially designed to protect us from weather, is also partially there to protect us from other people. The former function can be reduced in cost by better or cheaper materials. The latter is to some extent a zero-sum game. (There is a whole range of interacting social issues involved. Some of the protection is from thieves, some from obnoxious neighbors, some from intruding authorities—and these groups differ greatly in their ability to bring greater resources to bear, and also differ in their interest in doing so.)
No, not really. Opportunities for good and insightful discussion open up from time to time in all kinds of places, and sometimes particular forums can have especially good streaks, but all of this is transient. I don’t know any places that are particularly good these days.
Could this be solved by setting up a new forum and being sufficiently selective about whom to let in (e.g. only sufficiently high-quality and sufficiently non-ideological thinkers, as vetted by some local aristocracy based on comment history elsewhere), or is there some other limiting factor?
I would love there to be a place suitable for rational discussion of possibly outrageous political and otherwise ideologically charged ideas, even though I wouldn’t want it to be LessWrong and I wouldn’t want it to be directly associated with LessWrong.
I’d love to have such a place too, and based on my off-line conversations with some people here, I think there are also others who would. So maybe it wouldn’t be a bad idea to set up a new forum or mailing list, perhaps even one without public visibility. I have no idea how well this would work in practice—there are certainly many failure modes imaginable—but it might be worth trying.
Here’s one thing I’m worried about. What if discussion between those of widely varying ideological background assumptions is just intrinsically unproductive because they can never take basic concepts for granted? Even the best thinkers seem to mostly have strongly, stably different ideological outlooks. You could pre-select for ideology and possibly have multiple groups, but that has its own downsides.
Reddit. If you create your own subreddit I know you get to moderate it, but I don’t know how well it would work for a deliberately exclusive community.
Presumably it’d be easy to clone LW’s git code, change the logos, ask SingInst to host it, and put it behind a locked gate. https://github.com/tricycle/lesswrong . Louie would be the SingInst guy to ask I think. Besides that, no.
Maybe make it hidden to the public (like Koala Wallop’s Octagon) and invitation-only, with any given member limited to one invitation per hundred upvotes they’ve received?
Or… a nested thing, maybe. The onion has as many layers as the Grand High Administrator deigns to create, and said administrator can initiate anyone to any desired depth, or revoke such access, at will. A given user can issue one invite to their current layer per 100 net upvotes they have received on the layer in question; when someone receives 100 net downvotes on a given layer, they are banned from that layer until reinvited (at which point they start from scratch). Invites to a given layer can only be directed to users who have already been initiated to the layer immediately outside that one.
There might also be a branching structure; nobody knows for sure until they find parallel layers.
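For fun, a toy sketch of those rules in code (the thresholds are the ones stated above; the data model and names are entirely my own invention):

```python
from collections import defaultdict

class Onion:
    """Toy model of the nested-layer invite scheme described above."""
    def __init__(self, depth):
        self.depth = depth                     # layers created by the Grand High Administrator
        self.layer = defaultdict(int)          # user -> deepest layer they belong to (0 = public)
        self.karma = defaultdict(int)          # (user, layer) -> net upvotes on that layer
        self.invites_spent = defaultdict(int)  # (user, layer) -> invites already issued there

    def may_invite(self, inviter, invitee, target):
        """One invite per 100 net upvotes on the target layer; the invitee must
        already belong to the layer immediately outside it."""
        return (1 <= target <= self.depth
                and self.layer[inviter] >= target
                and self.layer[invitee] == target - 1
                and self.karma[(inviter, target)] // 100
                    > self.invites_spent[(inviter, target)])

    def vote(self, user, target, delta):
        """Apply net karma; 100 net downvotes on a layer bans you from it,
        and a reinvited user starts from scratch there."""
        self.karma[(user, target)] += delta
        if self.karma[(user, target)] <= -100 and self.layer[user] >= target:
            self.layer[user] = target - 1
            self.karma[(user, target)] = 0
```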
I am curious as to whether such a thing actually exists: did you or anyone else here end up producing an exclusive, private, invite-only community in order to commune in high-signal political discussion?
Technological advances can’t shorten the work hours because even in a society wealthy and technologically advanced enough that basic subsistence is available for free, people still struggle for zero-sum things, most notably land and status.
I agree that the zero-sum character of status makes it unlikely that technology will shorten work hours (barring modification of humans).
What additionally complicates things is that habitable land is close to a zero-sum resource for all practical purposes, since to be useful, it must be near other people. Thus, however wealthy a society gets, for a typical person it always requires a whole lot of work to be able to afford decent lodging, and even though starvation is no longer a realistic danger for those less prudent and industrious in developed countries, homelessness remains so.
I don’t see any reason why this should be true. Population levels in developed countries have leveled off and up to a point it’s easy to increase the amount of habitable space through the construction of skyscrapers. It’s not even clear to me that one needs to be industrious to avoid homelessness in contemporary America.
I don’t see any reason why this should be true. Population levels in developed countries have leveled off and up to a point it’s easy to increase the amount of habitable space through the construction of skyscrapers. It’s not even clear to me that one needs to be industrious to avoid homelessness in contemporary America.
You’re right, things are a bit more complicated than in my simplified account. Lodging can be obtained very cheaply, or even for free as a social service, in homeless shelters and public housing projects, but only in the form of densely packed space full of people of the very lowest status. This is indeed more than adequate for bare survival, but most people find the status hit and the associated troubles and discomforts unacceptably awful, to the point that they opt for either life in the street or working hard for better lodging. And to raise the quality of your lodging significantly above this level, you do need an amount that takes quite a bit of work to earn with the median wage.
This is in clear contrast with food and clothing, which were also precarious until the relatively recent past, but are nowadays available in excellent quality for chump change, as long as you don’t go for conspicuous consumption. This is because advanced technology can crank out tons of food and clothing with meager resources and little labor, which can be shipped over great distances at negligible cost, and the population is presently far from the Malthusian limit, so there is no zero-sum competition involved (except of course when it comes to their purely status-related aspects). In contrast, habitable land isn’t quite zero-sum, but it has a strong zero-sum aspect, since it’s difficult to live very far from the centers of population, and wherever the population is dense, there is going to be (more or less) zero-sum competition for the nearby land.
Another striking recent phenomenon that illustrates this situation is that increasing numbers of homeless people have laptops or cell phones. Again we see the same pattern: advanced technology can crank out these things until they’re dirt-cheap, but acceptably good habitable land remains scarce no matter what.
Land is only a problem because of the Department of Education. Competition wouldn’t be nearly so fierce if there weren’t a monopoly on good schooling. Look at a heat map of property values: they are sharply discontinuous around school district borders.
How does one school district with good schools prevent its neighbor districts from also having good schools? There are certainly plenty of examples of contiguous districts with good schools.
As for working conditions, in terms of safety, cleanliness, physical hardship, etc., typical working conditions in developed countries are clearly much better than fifty years ago.
For many people’s psychological welfare, I think these may be lesser concerns than mobility, autonomy, and freedom from monotony.
I don’t think that typical jobs from 50 years ago were better in any of these regards. On the contrary, the well-paid blue collar manufacturing jobs that are associated with bygone better times in folk memory were quite bad by these measures. Just imagine working on an assembly line.
Focusing specifically on North America, where these trends appear to be the most pronounced, the key issue, in my opinion, is the distribution of status. Fifty years ago, it was possible for a person of average or even below-average abilities to have a job, lifestyle, and social status that was seen as nothing spectacular, but also respectable and nothing to scoff at. Nowadays, however, the class system has become far harsher and the distribution of status much more skewed. The better-off classes view those beneath them with frightful scorn and contempt, and the underclass has been dehumanized to a degree barely precedented in human history. Of course, these are not hereditary castes, and meritocracy and upward mobility are still very strong, but the point is that the great masses of people who are left behind in the status race are no longer looking towards a mundane but respectable existence, but towards the low status of despised losers.
Why and how the situation has developed in this direction is a complex question that touches on all sorts of ideologically charged issues. Also, some would perhaps disagree whether the trends really are as severe as I present them. But the general trend of the status distribution becoming more skewed seems to me pretty evident.
Nowadays, however, the class system has become far harsher and the distribution of status much more skewed. The better-off classes view those beneath them with frightful scorn and contempt, and the underclass has been dehumanized to a degree barely precedented in human history.
How do you measure this kind of thing? Do you have a citation?
I too would be interested in sources for this assertion. It goes contrary to what I would say if I were asked to guess about classes of today compared to classes of fifty years ago.
edit: Oops, I should have refreshed the page before commenting, as I now see Vladimir_M responded. Leaving the comment up for its content about my state of mind on this issue.
No, it’s a conclusion from common sense and observation, though I could find all kinds of easily cited corroboration. Unfortunately, as I said, a more detailed analysis of these trends and their components and causes would get us far into various controversial and politicized topics, which are probably best left alone here. I stated these opinions only because they seemed pertinent to the topic of the original post and the subsequent comments, i.e. the reasons for broad dissatisfaction with life in today’s developed world, and their specific relation to the issues of work.
Nowadays, however, the class system has become far harsher and the distribution of status much more skewed. The better-off classes view those beneath them with frightful scorn and contempt, and the underclass has been dehumanized to a degree barely precedented in human history.
There is no obviously appropriate way to measure this, even in theory.
What does one say about differences in solidarity among church members, as it varies from Sunday to other days of the week, and from now to fifty years ago? Likewise for football fans in a city... What does one say about it as it varies from during the Olympics to during an election, within a country, a party, etc.? During war? During strikes? And so on.
To make this claim one would have to establish a somewhat arbitrary “basket” of status markers and see how they varied (willingness to marry people from group X, willingness to trust random members of group X not to steal, willingness to make fun of people from group X for amusement, etc.) One would then have to integrate over time periods (war, etc.), and it’s not obvious how to do that. It’s also not obvious how to aggregate the statistics into a single measure expressible by a sentence like the above even if we have somehow established a score for how each individual thinks of and would think of each other individual. It’s not obvious what constitutes members of a class, nor how much the classes are to be judged by their worst members as against, say, their average or typical or idealized member.
What I most disagree with the connotation of is “the distribution of status much more skewed”. For status, each of us views others in certain ways, has representations of how we are viewed, has representations of how we view others, has representations of how others think they view us...status is not a thing for which the word “distributed” is at all apt.
There is no obviously appropriate way to measure this, even in theory.
It’s hard to discuss these things without getting into all sorts of overly controversial topics, but I definitely disagree that there are no obviously appropriate ways to establish whether this, so to speak, skew of the status distribution is increasing.
Admittedly, these are fuzzy observations where it’s easy to fall prey to all kinds of biases, but there is still useful information all over the place. You can observe the level of contempt (either overt or more underhanded) that people express for those below their class, the amount of effort they invest just to make sure they’re insulated from the lower classes, the fear and disgust of mere proximity to anyone below a certain class, the media portrayals of people doing jobs at various percentiles of the income distribution, the reduction and uniformization of the status criteria and the disappearance of various sources of status available to those scoring low in wealth, fame, and bureaucratic rank, and so on. Of course, my observations and interpretations of all these trends may well be biased and inaccurate, but it’s certainly incorrect to claim that no conclusions could be drawn from them even in principle.
I basically agree with you—the U.S. has certainly been headed in the direction of a winner-take-all society over the last few decades.
I think some of this is measurable. The Gini coefficient certainly captures some of the economic aspects, and it has gotten higher over time.
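For concreteness, here’s one standard way to compute a Gini coefficient from a list of incomes (just a sketch; published figures are computed from survey-weighted household data):

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, approaching 1 = one person has everything."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Rank-weighted form: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    rank_weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * rank_weighted / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # 0.0  (perfect equality)
print(gini([0, 0, 0, 100]))    # 0.75 (maximal skew for n = 4)
```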
“The underclass has been dehumanized to a degree barely precedented in human history” seems too strong. History includes slavery, including practices such as “seasoning.”
History includes slavery, including practices such as “seasoning”
I agree that was probably too hyperbolic a statement. History certainly records much more extreme instances of domineering and oppression. However, “dehumanized” was not a very good choice of term for the exact attitudes I had in mind, which I think indeed have little historical precedent, and which don’t really correspond to the traditional patterns of exercising crude power by higher-status groups and individuals, being a rather peculiar aspect of the present situation. But yes, in any case, I agree I exaggerated with the rhetoric on that point.
However, “dehumanized” was not a very good choice of term for the exact attitudes I had in mind, which I think indeed have little historical precedent, and which don’t really correspond to the traditional patterns of exercising crude power by higher-status groups and individuals, being a rather peculiar aspect of the present situation.
Dear Vladimir, much as I hesitate to offer you any assistance in your presumably shady-looking intellectual enterprise (as frankly I’ve grown to dislike you quite a bit, period)... the term you might’ve been looking for is “http://en.wikipedia.org/wiki/Biopower”. Foucault, Arendt and Agamben have all pondered its significance in the 20th century.
it’s certainly incorrect to claim that no conclusions could be drawn from them even in principle.
I wouldn’t claim that, my claim is that there can’t be one formula specifying what you want to measure, so for reasonably similar societies like this one and that of fifty years ago, you can’t draw conclusions like that. If one looks at all the equally (in)appropriate ways to measure what you’re making claims about, the modern USA outperforms 18th century Russia in enough ways that we can draw conclusions. I’ll elaborate a bit on your examples.
fuzzy observations
The observations are the least fuzzy part.
the fear and disgust of mere proximity to anyone below a certain class
With something like this, you could perhaps quantify the fear and disgust of millions of people when in proximity to other people. You might find that in one society, 50% are extremely disgusted by the bottom 5% and nonplussed by the others, and the top 10% of that society is disgusted by the whole bottom 50%, while in another society, the top 20% is moderately disgusted by the bottom 80%, and the top 40% absolutely repulsed by the bottom 1%...etc.
What exactly, or even approximately, are your criteria, and how much do you think others on this site share them?
What our society has is an unprecedented tabooing of many overt scorning behaviors and thoughts. Perhaps you totally discount that? It has also tamed superstition enough that there is no system of ritual purity. People at least believe they believe in meritocracy. There is a rare disregard of bloodlines and heredity, compared to other times and places, including modern Japan.
the media portrayals of people doing jobs at various percentiles of the income distribution
What that brings to mind for me is the honest-labor memes from the Puritans, and how so many were ready to identify with the common man, Joe the Plumber, etc. One might say that this was primarily or only because he is white, and I think we all discount its value because of that to some extent; if you idiosyncratically discount it more than others do, you should be upfront about that by being more specific, and not make implicit claims that what you say is true according to your readers’ values.
Were I trying to call out a certain statement as being sexist, I might quote the statement and tell people that the statement is sexist. That’s totally legitimate if I think that, were they to reflect rationally and calmly, they would come to the same conclusion according to their own values. But if the reason I think that the statement is sexist is because it’s written in English, which has a long history of being used by sexists, it would be totally illegitimate for me to simply say to normal human beings that the statement is sexist, because the reason I think it sexist is its mere expression in English.
If you believe that your claims resonate with normal conceptions of fairness upon reflection by people, it’s fine for you to just make them. But this particular claim of yours is so, let’s say counter-intuitive, that I suspect you have very idiosyncratic values in which the worth of a great many things is reduced to zero where other people would think it worth something, perhaps a great deal. If so, please clarify that when you say “The better-off classes view those beneath them with frightful scorn and contempt, and the underclass has been dehumanized to a degree barely precedented in human history,” you just don’t mean “contempt” and “dehumanized” the way your readers do.
I think there may be some “rosy retrospection” going on here.
It seems like we have some essential misunderstandings on these points:
What our society has is an unprecedented tabooing of many overt scorning behaviors and thoughts. Perhaps you totally discount that? It has also tamed superstition enough that there is no system of ritual purity. People at least believe they believe in meritocracy. There is a rare disregard of bloodlines and heredity, compared to other times and places, including modern Japan.
The “status skew” I have in mind has nothing to do with the issues of fairness and meritocracy. In this discussion, I am not concerned about the way people obtain their status, only what its distribution looks like. (In fact, in my above comment, I already emphasized that the present society is indeed meritocratic to a very large degree, in contrast to the historical societies of prevailing hereditary privilege.)
What I’m interested in is the contrast between the sort of society where the great majority of people enjoy a moderate status and the elites a greater one, and the sort of society where those who fall outside an elite minority are consigned to the status of despised losers. This is a relevant distinction, insofar as it determines whether average people will feel like they live a modest but dignified and respectable life, or they’ll feel like low-status losers, with the resulting unhappiness and all sorts of social pathology (the latter mostly resulting from the lack of status incentives to engage in orderly and productive life).
My thesis is simply that many Western countries, and especially the U.S., have been moving towards a greater skew of the status distribution, i.e. a situation where, despite all the increase in absolute wealth, an increasingly large percentage of the population feel like their prospects in life offer them unsatisfactorily low status, and the higher classes confirm this by their scornful attitudes. (Of course, all sorts of partial exceptions can be pointed out, but the general trend seems clear.)
In fact, one provocative but certainly not implausible hypothesis is that meritocracy may even be exacerbating this situation. Elites who believe themselves to be meritocratic rather than hereditary or just lucky may well be even more arrogant and contemptuous because of that, even if they’re correct in this belief.
I think there may be some “rosy retrospection” going on here.
Well, I’m not that old, and I honestly can’t complain at all about how I’ve been treated by the present system—on the contrary. Of course, I allow for the possibility that I have formed a skewed perspective here, but the reasons for this would be more complex than just straightforward “rosy retrospection.”
Demonstrably the cost of housing has not dropped as much as the cost of a byte of hard drive storage, but that is not necessarily only because space is zero-sum. A lot of technologies have failed to advance at anywhere near the rate of computer technology, in particular housing-related technologies—the cost of building a structure, the cost of lighting it, air conditioning it, etc. I think that science fiction authors in the past tended to imagine that housing-related technologies would change much more rapidly than they actually did.
Transportation has also, in recent years, not changed all that much. That’s another one that science fiction writers were massively overoptimistic about. Transportation changes the value of proximity, and the changes that we did experience starting with steam powered vehicles probably did radically change the nature of what counts as proximity. I am, for example, an order of magnitude or so “closer” to the city center now than I would have been two hundred years ago, holding everything constant except for transportation.
Building construction and transportation are at a kind of plateau, at least compared with computers, possibly in part because they require a more or less fixed amount of energy in order to move stuff around. In order to transport a person you need enough power to move his body the required distance. In order to build a building, you need enough power to lift the materials into place. I had the misfortune of working next to a construction site and I recall that for weeks we could feel the thumping of the pile drivers.
… they require a more or less fixed amount of energy in order to move stuff around. In order to transport a person you need enough power to move his body the required distance. In order to build a building, you need enough power to lift the materials into place.
There’s no hard lower bound on the amount of energy needed to move something horizontally. Any expenditure in transportation is all friction, no work. Now, reducing friction turns out to be a harder engineering problem than making smaller transistors, but just saying “energy” doesn’t explain why.
And the gravitational potential energy in 1 ton of stuff lifted by 1 storey would cost all of $0.001 if bought from the grid in the form of electricity. So clearly the energy requirement of lifting construction materials into place is not the primary cost of construction either.
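Spelling that estimate out (the ~3 m storey height and the electricity price are my assumptions):

```python
# Gravitational potential energy of one metric ton lifted one storey, priced
# as grid electricity. The ~3 m storey and the $0.12/kWh rate are assumptions.
mass_kg, g, storey_m = 1000, 9.81, 3.0
energy_joules = mass_kg * g * storey_m  # ~29,430 J
energy_kwh = energy_joules / 3.6e6      # ~0.0082 kWh
cost_usd = energy_kwh * 0.12            # ~$0.001
print(f"{energy_joules:.0f} J = {energy_kwh:.4f} kWh = ${cost_usd:.4f}")
```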
So clearly the energy requirement of lifting construction materials into place is not the primary cost of construction either.
The cost of the fuel itself is not the only cost that increases when the amount of energy increases. When a large amount of energy is applied all at once, it becomes important to apply the energy correctly, because otherwise the results can be catastrophic. If you take the energy required to lift a ton one storey, and misapply it, then you could damage property or, worse, kill people.
We let children ride bikes but not drive cars. Why? One reason is that a typical moving car has a much larger amount of kinetic energy than a typical moving bicycle, so if the car is steered badly, the results can be much worse than if a bike is steered badly.
So the more energy is applied, the more carefully it must be applied. And this extra care costs extra money.
In a controlled environment such as a factory, the application of energy can be automated, reducing costs. But in an uncontrolled environment such as we see in transportation or building, significant automation is not yet possible, which raises costs.
Other costs also rise with energy use. For instance, the machinery that employs the energy must be built to withstand the energy. A toy car can be built of cheap plastic, but a real car needs to be strong enough not to fly apart when you step on the gas. And the machine has to be built so that it doesn’t wear down quickly in reaction to the great stresses that it is being subjected to as it operates.
Technological advances can’t shorten the work hours because even in a society wealthy and technologically advanced enough that basic subsistence is available for free, people still struggle for zero-sum things, most notably land and status. Once a society is wealthy enough that basic subsistence is a non-issue, people probably won’t work as much as they would in a Malthusian trap where constant toil is required just to avoid starvation, but they will still work a lot because they’re locked in these zero-sum competitions.
That is the clearest explanation I’ve seen so far for this. (I’ve read a lot of SF, and asked myself the question.)
I don’t think that’s a complete explanation. I would say it’s more along the lines of “If you start with somebody working a three-day week, it’s much easier to employ them for another two days, than to hire a new person to work two days because that requires creating a whole new business relationship.” Then both corporations and governments, I think, tend to be as inefficient as they can possibly get away with without dying, or maybe a little more inefficient than that. Work expands to fill the time available...
I would have to sit down and write this out if I really wanted to think it through, but roughly I think that there are forces which tend to make people employed for a full workweek, everyone want to be employed, and society to become as inefficient as it can get away with. Combine these factors and it’s why increasing productivity doesn’t increase leisure.
The full work week makes sense, depending on what sort of job you’re talking about. Is it a job where a certain number of staff have to be working at a given time but it doesn’t really matter who (e.g. my job at the pool), or is it a job where a certain amount of work has to get done and it’s simpler for one person to do a set of tasks because sharing the tasks between brains is complicated (e.g. my job at the research institute)? For the former, it doesn’t really matter whether you have 20 staff working 40 hours a week or 40 staff working 20 hours a week. (In fact, at the pool we tend to flip between the two: in winter, when most employees are in school, there are a lot more staff and many of them have only 1 or 2 shifts a week. In summer, the number of staff drops and nearly everyone is full-time.) It doesn’t matter whether a given staffperson is there on a certain day; lifeguards and waitresses and grocery store cashiers (and nurses, to a lesser degree) are essentially interchangeable. For the latter, it makes a lot of sense for any one employee to be there every day, but why 8 hours a day? Why not 5? If the full-time employees at the research institute were each in charge of a single study, instead of 2 or 3, they could do all the required work in 5 hours a day plus occasional overtime or on-call work.
I’m guessing that most work for corporations and governments is in the latter category. Most work in the former category is relatively low-paying, so adults in these jobs have to work full-time or more to make ends meet. I can see why right now, neither corporations nor the government are endorsing shorter work-days or work-weeks: they would have to hire more staff, spend more time on finding and interviewing qualified people, and providing these extra staff with the expected benefits (e.g. health insurance, vacation time) would be more complicated. The current state is stable and locked in place, because any business or organization that tried to change would be at a disadvantage. But in theory, if every workplace transitioned to more employees working fewer hours, I can’t see why that state wouldn’t be stable as well.
Yes, but as Eliezer said, the work expands to fill the time. So if you cut the time correctly, you just cut out the useless work and don’t give up any competitive advantage. This is how large corporations can lay off 50,000 people without falling apart. Sometimes that means giving up products or markets, but more often it means a haircut across the organization—e.g. trimming the fat. At first the people left are panicked about how they will get everything done without all these resources, but what really happens is priorities get clarified and some people have to do more work during the day instead of reading Less Wrong. The same thing would happen if the work week were reduced, although management’s job would get harder, as Eliezer points out.
If we accept the premise that most of this work is being spent on a zero-sum game of competing for status and land, then it’s a prisoner’s-dilemma situation like doping in competitive sports, and a reasonable solution is some kind of regulation limiting that competition. Mandatory six-week vacations, requirements to close shops during certain hours, and hefty overtime multipliers coupled with generous minimum wages are three examples that occur in the real world.
A market fundamentalist might seek to use tradable caps, as with sulfur dioxide emissions, instead of inflexible regulations. Maybe you’re born with the right to work 1000 hours per year, for example, but you have the right to sell those hours to someone else who wants to work more hours. Retirees and students could support themselves by getting paid for being unemployed, by some coal miner, soldier, or sailor. (Or their employer.) This would allow the (stipulated) zero-sum competition to go on and even allow people to compete by being willing to work more hours, but without increasing the average number of hours worked per person.
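To make the mechanics concrete, here’s a minimal sketch of such a ledger (the 1000-hour baseline is from the comment above; the names and the price are invented for illustration):

```python
# Toy ledger for tradable work-hour allowances, using the 1000-hour baseline
# suggested above; the names and the price are invented for the example.
allowances = {"retiree": 1000, "student": 1000, "coal_miner": 1000}

def trade_hours(ledger, seller, buyer, hours, price_per_hour):
    """Move work-hour allowance from seller to buyer; return the payment owed."""
    if ledger[seller] < hours:
        raise ValueError("seller lacks sufficient allowance")
    ledger[seller] -= hours
    ledger[buyer] += hours
    return hours * price_per_hour

payment = trade_hours(allowances, "retiree", "coal_miner", 800, 2.50)
# The coal miner may now work 1,800 hours and the retiree is paid 2,000 for
# staying home; total (and hence average) permitted hours are unchanged.
```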
Ouch! “The more I find out, the less that I know.” This site gives extensive statistics, broken out nationally and by year from 2000–2010. According to their numbers, for 2010, Korea had the largest number of hours worked, with the U.S. 12th on the list and Japan 15th. It looks like the shifts across this decade are considerable (10%–20% for many of the nations). Looking at a bunch of sites, there seem to be considerable differences in reported numbers as well—the definitions of what hours they include and who they include may differ...
As for working conditions, in terms of safety, cleanliness, physical hardship, etc., typical working conditions in developed countries are clearly much better than fifty years ago. What arguably makes work nowadays worse is the present distribution of status and the increasing severity of the class system, which is a very complex issue tied to all sorts of social change that have occurred in the meantime. But this topic is probably too ideologically sensitive on multiple counts to discuss productively on a forum like LW.
This sounds like such an interesting topic for discussion, though!
This sounds like such an interesting topic for discussion, though!
Trouble is, it touches on just about every controversial and ideologically charged issue imaginable. (Which is not surprising, considering that it concerns the fundamental questions about how status is distributed in the whole society.)
Work is terrible, and the lives of many working people, even people with “decent” jobs in developed countries, are barely tolerable. It is currently socially unacceptable to mention this. Anyone who breaks that silence has done a good deed.
I’ve wondered about this a lot myself. Note that, along with figure 3 of the quoted article, according to a Gallup poll the average self-reported life satisfaction in America is around 7⁄10. Presumably this average holds even including the sick/elderly/poor. I believe that my own self-reported life satisfaction would be considerably lower than that if I were living the life of an average American.
I would guess that the difference is mostly accounted for by my own affective response to a given situation diverging heavily from the affective response that members of the general population would have in the same situation.
How confident are you that this reflects the experience of working people rather than how you would feel if you were in their position?
Somewhat confident. I work at a medical clinic. The number of people who come in with physical complaints relating to their job, psychological/stress complaints relating to their job, or complaints completely unrelated to their job but they talk to the doctor about how much they hate their job anyway because he’s the only person who will listen—is pretty impressive.
But there’s a clear selection bias here; maybe the 10% of people who are most unhappy with their jobs visit medical clinics 5x as much as anybody else.
It’s entirely possible for working life to be awful and people living those lives to genuinely self-report an average of 7⁄10 on a happiness scale. This is likely due to facts about how humans set their baseline happiness, how they respond to happiness surveys, and what social norms have been inculcated.
Like, when given a scale out of 10, people might anchor 5 as the average life, and for social signaling and status purposes, reasons for them being different-better are more available to their conscious mind than reasons for them being different-worse, so they add a few points.
There are also other problems with the average happiness level being above average—it suggests some constant is at work.
I just realized that a link to an article by Angus Deaton about a Gallup poll that I meant to include in the comment above didn’t compile. I’ve since added it.
It’s entirely possible for working life to be awful and people living those lives to genuinely self-report an average of 7⁄10 on a happiness scale.
I agree. But I don’t see the considerations that you bring up as decisive. Several points:
• According to Angus Deaton’s article
I focus on the life satisfaction question about life at the present time, measured on an eleven-point scale from 0 (“the worst possible life”) to 10 (“the best possible life”)
If I were to give a response of 7⁄10 to this question it would indicate that my life is more good than it is bad. You’re right that my interpretation may not be the one used by the typical subject. But I disagree with:
There are also other problems with the average happiness level being above average—it suggests some constant is at work.
It could be that everyone finds their lives to be more good than bad, or that everyone finds their lives more bad than good.
• You raise the hypothetical:
for social signaling and status purposes, reasons for them being different-better are more available to their conscious mind than reasons for them being different-worse, so they add a few points
but one could similarly raise ad hoc hypotheticals that point in the opposite direction. For example, maybe people function best when they’re feeling good and so they’re wired to feel good most of the time but grass-is-greener syndrome leads them to subtract a few points.
• Note that suicide rates are low all over the world. A low suicide rate is some sort of indication that members of a given population find their lives to be worth living.
• Note that according to Deaton’s article, life satisfaction scores by country vary from ~ 4 to ~ 7 in rough proportion to median income in a given country. This provides some indication that (a) life satisfaction scores pick up on a factor that transcends culture and that (b) Americans are distinctly more satisfied with their lives than sub-Saharan Africans are. But in line with my above point, sub-Saharan Africans seldom commit suicide. In juxtaposition with this, the data from Deaton’s article suggests that the average American’s life satisfaction is well above the point at which he or she would commit suicide.
Suicide rates could be low even when the average experience of the general population is worse than unconsciousness. People may apply scope insensitivity and discount large quantities of non-severe future suffering for themselves. Happiness reports can lead to different results than an hour-to-hour analysis would. Asking for each hour, “Would you rather experience an exact repeat of last hour, or else experience nothing for one hour, all other things exactly equal? How much would you value that difference?” might lead to very different results if you integrate the quantities and qualities.
People with lives slightly not worth living may refrain from suicide because they fear death, feel obligated toward their friends and family, or are infected with memes about reward or punishment in an imaginary afterlife. A very significant reason is probably that reasonably painless and reliable suicide methods are not universally within easy reach (are they in sub-Saharan Africa?). In fact, there is a de facto suicide prohibition in place in most countries, with more or less success. The majority of suicide attempts fail.
So continued existence can be either involuntary or irrational, and suicide rates can be low even when life generally feels more bad than good. If all sentient entities could become rational decision-makers whose conscious existence is universally voluntary, that would probably be the most significant improvement of life on earth since it evolved.
So continued existence can be either involuntary or irrational, and suicide rates can be low even when life generally feels more bad than good. If all sentient entities could become rational decision-makers whose conscious existence is universally voluntary, that would probably be the most significant improvement of life on earth since it evolved.
I agree. See also this comment and subsequent discussion. I consider low suicide rates to be weak evidence that people find their lives worth living, not definitive evidence. There’s other evidence; in particular, if you ask random people whether their lives are worth living, they’ll say yes much more often than not. Yes, they may be signaling and/or deluded, but it seems hubristic to have high confidence in one’s own assessment of the quality of their lives over their stated assessment without strong evidence.
Yeah, re-reading my post it’s very handwave-y. However, a point you made about more good than bad / more bad than good stuck out to me. I wonder if a survey question “On a scale from 0 to 10, where 0 is every thing that happens to you is a bad thing, and 10 is every thing that happens to you is a good thing, what number would you give to your life?” would provide scores correlated with life satisfaction surveys. (Ideally we would simply track people and, every time a thing happened to them, ask them whether this thing was good or bad. Then we could collate the data and get a more accurate picture than self-reporting, but the gain doesn’t outweigh the sheer impracticality, so I’ll be content with self-reported values.)
I feel like if it correlated weakly, you would be right. And now that I think about the experiment, I’m fairly convinced it would come out correlated.
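A minimal sketch of the event-tally scoring proposed above, assuming the per-event good/bad judgments could actually be collected (the function name and sample tallies are hypothetical):

    def event_tally_score(good_events, bad_events):
        # Scale the fraction of good events onto the survey's 0-10 range.
        total = good_events + bad_events
        if total == 0:
            return None  # nothing logged, no score
        return 10.0 * good_events / total

    # Someone who logged 130 good and 70 bad events over the tracking period:
    print(event_tally_score(130, 70))  # 6.5, directly comparable to a 0-10 score

Correlating scores like this against ordinary life-satisfaction answers would then be a standard paired-samples exercise.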
The prospect of a Hansonian future does seem like a pretty good reason to delete all records of yourself, dispose of anyone with significant memories of you, and incinerate your brain in an explosion large enough to spread its ashes for miles around. At sea.
It should make you happy with the present, though, if you use the past and the future as the baseline for comparison. As John Derbyshire once said in a different context, “We are living in a golden age. The past was pretty awful; the future will be far worse. Enjoy!”
Well, if we (the present humans) are indeed extraordinarily fortunate to live in a brief and exceptional non-Malthusian period—what Hanson calls “the Dreamtime”—then you should be happy to be so lucky that you get to enjoy it. Yes, you could have been even luckier to be born as some overlord who gets to be wealthy and comfortable even in a Malthusian world, but even as a commoner in a non-Malthusian era, you were dealt an exceptionally good hand.
No, I’m UN-lucky. I’d prefer a different, counterfactual universe where EVERYONE is happy at all times, and given any fixed universe, I see no reason why which entity in it is me should matter.
Yes, a Hansonian future looks appalling. Anything that gets us back into a Malthusian trap is a future that I would not want to experience.
I’m not sure that active measures to prevent oneself from being revived in such a future are necessary. If extreme population growth makes human life of little value in what are currently the developed nations, who would revive us? Cryonics has been likened to a four-dimensional ambulance ride to a future emergency room. If the emergency rooms of the 22nd century turn out to only accept the rich, cryonicists will never get revived in such a world anyway.
I find it bizarre that Robin Hanson himself both endorses cryonics and actively endorses population growth—both in the near term (conventional overpopulation of humans) and in the long term (explosive growth of competing uploads/ems).
@2: Most of it was humour, indicating excessive paranoia. Under that was basically a mix of being humble (they might have reasons we would never think of to do it) and the implication that it’s not only bad but so bad that every little trace of probability must be pushed as close as possible to 0.
Work is terrible, and the lives of many working people, even people with “decent” jobs in developed countries, are barely tolerable. It is currently socially unacceptable to mention this.
I’ve been wondering why no one has yet broached this issue on LW, that I recall.
Ugh field? People don’t like to talk about this. I will say something like “my job is a soul-sucking vortex” and people think I’m only joking. I am joking, but like many jokes it is also true.
My job doesn’t make me hate life; much of what I value in life is supported by my job which is why I keep it.
summary of the article was much better than the article itself, which was cluttered with lots of quotes and pictures and lengthiness. Summaries that are better than the original articles are hard to do, hence, upvote.
It’s not very hard to do when the original author is Mike Darwin. The man really needs an editor. Consider “Doing the Time Wrap”, which looks for all the world like someone wrote 2 or 3 amazing, wonderful essays on completely different topics, and then decided to cut and paste random sections of them to form a single article, with random song lyrics thrown in for good measure.
Upvoted for several reasons:
excellent theory about cryonics, much more plausible than things like “people hate cryonics because they’re biased against cold” that have previously appeared on here.
willingness to acknowledge serious issue. Work is terrible, and the lives of many working people, even people with “decent” jobs in developed countries, are barely tolerable. It is currently socially unacceptable to mention this. Anyone who breaks that silence has done a good deed.
spark discussion on whether this will continue into the future. I was reading a prediction from fifty years ago or so that by 2000, people would only work a few hours a day or a few days a week, because most work would be computerized/roboticized and technology would create amazing wealth. Most work has been computerized/roboticized, technology has created amazing wealth, but working conditions are little better, and maybe worse, than they were fifty years ago. A Hansonian-style far future could lead to more of the same, and Hanson even defends this to a degree. In my mind, this is something futurologists should worry about.
summary of the article was much better than the article itself, which was cluttered with lots of quotes and pictures and lengthiness. Summaries that are better than the original articles are hard to do, hence, upvote.
Technological advances can’t shorten the work hours because even in a society wealthy and technologically advanced enough that basic subsistence is available for free, people still struggle for zero-sum things, most notably land and status. Once a society is wealthy enough that basic subsistence is a non-issue, people probably won’t work as much as they would in a Malthusian trap where constant toil is required just to avoid starvation, but they will still work a lot because they’re locked in these zero-sum competitions.
What additionally complicates things is that habitable land is close to a zero-sum resource for all practical purposes, since to be useful, it must be near other people. Thus, however wealthy a society gets, for a typical person it always requires a whole lot of work to be able to afford decent lodging, and even though starvation is no longer a realistic danger for those less prudent and industrious in developed countries, homelessness remains so.
There is also the problem of the locked signaling equilibrium. Your work habits have a very strong signaling component, and refusing to work the usual expected hours strongly signals laziness, weirdness, and issues with authority, making you seem completely useless, or worse.
As for working conditions, in terms of safety, cleanliness, physical hardship, etc., typical working conditions in developed countries are clearly much better than fifty years ago. What arguably makes work nowadays worse is the present distribution of status and the increasing severity of the class system, which is a very complex issue tied to all sorts of social change that have occurred in the meantime. But this topic is probably too ideologically sensitive on multiple counts to discuss productively on a forum like LW.
I agree that even a post-scarcity society would need some form of employment to determine status and so on. But that seems irrelevant to the current problem: one where even people who are not interested in status need to work long hours in unpleasant conditions just to pay for food, housing, and medical costs, and where ease of access to these goods hasn’t kept pace with technological advantages.
And although I don’t think it quite related, I am less pessimistic than you about the ability of a post-scarcity society to deal with land and status issues. Land is less zero-sum than the finitude of the earth would suggest, because most people are looking not for literal tracts of land but for a house in which to live, preferably spacious—building upward, or downward as the case may be, can alleviate this pressure. I’m also not convinced that being near other people is as big a problem as you make it out to be: a wealthier society would have better transportation, and cities have enough space to expand outward (giving people access to other humans on at least one side) almost indefinitely. There will always be arbitrarily determined “best” neighborhoods that people can compete to get into, but again, this is a totally different beast from people having to struggle to have any home at all.
I think a genuinely post-work society would have its own ways of producing status, based on hobbyist communities, social interaction, and excellence at arts/scholarship/sports/hobbies; the old European nobility was able to handle its internal status disputes in this way, though I don’t know how much of that depended on them knowing in the back of their minds that they were all superior to the peasantry anyway.
Agreed that the class system is an important and relevant issue here.
But that’s not the case in the modern developed world. If you are really indifferent to status, you can easily get enough food, housing, and medical care to survive by sheer freeloading. This is true even in the U.S., let alone in more extensive welfare states.
Of course, completely forsaking status would mean all sorts of unpleasantness for a typical person, but this is only because we hate to admit how much our lives revolve around zero-sum status competitions after all.
Don’t forget about the status obtained from having power over others. That’s one part of the human nature that’s always dangerous to ignore. (The old European nobility was certainly not indifferent to it, and not just towards the peasants.)
Also, there would always be losers in these post-work status games who could improve their status by engaging in some sort of paid work and saving up to trade for the coveted status markers. These tendencies would have to be forcibly suppressed to prevent a market economy with paid labor from reemerging. It’s roughly analogous to the present sexual customs and prostitution. Men are supposed to find sexual partners by excelling in various informal, non-monetary status-bearing personal attributes, but things being zero-sum, many losers in this game find it an attractive option to earn money and pay for sex instead, whether through out-and-out prostitution or various less explicit arrangements.
I’m not sure this is true; I know little about welfare politics, but I was under the impression there was a major shift over the last ten years toward limiting the amount of welfare benefits available to people who are “abusing the system” by not looking for work.
One could probably remain alive for long periods just by begging and being homeless, but this raises the question of what, exactly, is a “life worth living”, such that we could rest content that people were working because they enjoy status competitions and not because they can’t get a life worth living without doing so.
This is probably way too subjective to have an answer, but one thing that “sounds right” to me is that the state of nature provides a baseline. Back during hunter-gatherer times we had food, companionship, freedom, et cetera without working too hard for them (the average hunter-gatherer only hunted-gathered a few hours a day). Civilization made that kind of lifestyle impossible by killing all the megafauna and paving over their old habitat, but my totally subjective meaningless too-late-at-night-to-think-straight opinion is that we can’t say that people can opt out of society and still have a “life worth living” unless they have it about as good as the hunter-gatherers they would be if society hadn’t come around and taken away that option.
The average unemployed person in a developed country has a lot of things better than hunter-gatherers, but just the psychological factors are so much worse that it’s no contest.
The specific situation in the U.S. or any other individual country doesn’t really matter for my point. Even if I’m wrong about how easy freeloading is in the U.S., it’s enough that we can point to some countries whose welfare systems are (or even just were at some point) generous enough to enable easy freeloading.
Ironically, in my opinion, in places where there exists a large underclass living off the welfare state, it is precisely their reversion to the forager lifestyle that the mainstream society sees as rampant social pathology and terrible deprivation of the benefits of civilized life. I think you’re committing the common error of idealizing the foragers. You imagine them as if you and a bunch of other highly intelligent and civilized people had the opportunity to live well with minimal work. In reality, however, the living examples of the forager lifestyle correctly strike us as frightfully chaotic, violent, and intellectually dead.
(Of course, it’s easy to idealize foragers from remote corners of the world or the distant prehistory. One is likely to develop a much more accurate picture about those who live close enough that one has to beware not to cross their path.)
You are not wrong about “freeloading,” though that term is probably unnecessarily pejorative. The Developed world is so obscenely wasteful that it is not necessary to beg. You can get all the food you want, much of it very nice—often much nicer than you could afford to buy—simply by going out and picking it up. Of course, you don’t get to pick and choose exactly what you want when you want it.
Clothing, with the exception of jeans, is all freely available. The same is true of appliances, bedding, and consumer electronics of many kinds. The one commodity that is very, very difficult to get at no cost is lodging. You can get books, MP3 players, CDs, printers, scanners, and often gourmet meals, but lodging is tough. The reason housing is qualitatively different from the other things I’ve cited is this: while it is technically illegal to dustbin dive, in practice it is easy to do and extremely low risk. It is incredibly easy in the UK, if you get a dustbin key (easy to do).
However, the authorities take a very dim view of vagrancy, and they will usually ticket or arrest the person who has either “failure to account,” or is clearly living in a vehicle or on the street. This is less true in the UK than the US. However, get caught on the street as a vagrant AND as a foreigner in the UK (or in the US, or in any Developed country) and you are in a world of hurt—typically you will be deported with prejudice and be unable to re-enter the country either “indefinitely,” or for some fixed period of time.
If you can swing lodging, then the world is your oyster (for now). I travel with very little, and within 2 weeks of settling on a spot in a large city, I have cookware, flatware, clothing, a CD player, a large collection of classical CDs, and just about anything else I want to go looking for. There is an art to it, but the waste is so profligate that it is not hard to master, and absolutely no begging is required (except for lodging ;-))
Speaking from a lifetime of experience on welfare in the US (I’m disabled, and have gotten work from time to time but usually lost it due to factors stemming either from said disability, or the general life instability that poverty brings with it), your impressions are largely correct.
What I’d say is that the shift (and it’s been more like the last forty years, albeit the pace has picked up since Reagan) is towards “preventing abuse” as a generic goal of the system; the result has been that the ability to deliver the services that ostensibly form the terminal goal of welfare-granting organizations is significantly diminished—there’s a presumption of suspicion the moment you walk in the door. Right now, SSI applicants are auto-denied and have to appeal if they want to be considered at all, even if all their administrative ducks are otherwise in a row; this used to be common practice, but now it’s standard.
This also means that limits are fairly low. I can’t receive more than 40 dollars a month in food stamps right now because my apartment manager won’t fill out a form on my behalf stating the share of rent and other services I pay in my unit. He has an out; he’s not involved in the household finances. But without that in writing, from that person, the office presumes that since I have roommates declared, my share of the household expenses is zero, ergo I’m entitled to the minimum allowable (they can’t just deny me since I’m on SSDI).
And having been homeless for a little while (thankfully a friend helped me get the down payment on a place I could just barely afford), yeah...Vladimir_M’s comments are based more on rhetoric than substance. One thing I observe is that many people who are long-term impoverished or homeless (self included) will project a bit of being inured to status as a way of just securing ourselves some dignity in our interactions with others—but nobody in that situation could miss how deeply that status differential cuts whenever it’s used against us, even implicitly in the way people just ignore or dismiss us.
As luck would have it, I have some limited experience with living for periods of about a month at a time in a household where we gathered about 80 percent of the food we ate (no exaggeration). Rich in what the land around us offered, rich in the basic assets needed to make use of it, rich in ability to keep ourselves entertained and occupied during our copious free time.
I could easily see the typical hunter-gatherer experience being very, very good. Certainly, I’d rather be financially and materially poor under the conditions I described above than in my present circumstances.
Then what limited the growth of forager peoples so substantially? There had to be a mechanism to prevent them from exceeding their region’s carrying capacity. If a tribe of 50 people grew at a rate of 1% a year for 2000 years, there would be about 22 billion people in it. Clearly that didn’t happen; in fact there have been massive die-offs from starvation due to cyclical climate change, or to resource warfare (sometimes fought to extinction) between neighboring tribes.
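For what it’s worth, the compound-growth arithmetic in that hypothetical checks out in Python:

    # A tribe of 50 growing 1% a year for 2000 years:
    population = 50 * 1.01 ** 2000
    print(f"{population:.2e}")  # ~2.20e+10, i.e. roughly 22 billion people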
You cannot be considered financially and materially impoverished if you have access to abundant natural resources. Never mind whether you own them or can enforce the exclusive status of your rights to them—if you have those resources available to you, they at least count as cash flow, if not assets.
Limited access to limited resources is far more typical, and life is not so leisurely when you spend every hour of daylight working to procure food that still isn’t enough to provide for you and your family. That is also the state of nature, and it was a situation that a great many people have found themselves in for the brief time that they managed to survive it.
That...actually doesn’t represent the human condition for most of our ancestral history, nor the current state of surviving forager peoples for the most part.
Resources are limited, but you only need about 15 hours of work a week per hunter-gatherer individual devoted to food-producing activities. Overdo that and you may well tax your ecosystem past carrying capacity. This is why foragers wander a migratory circuit (although they tend to keep to a known, fixed route) or live in areas where there’s sufficient ecological abundance to allow for a sedentary lifestyle while still using hunter-gatherer strategies. It’s also why they tended to have small populations. Scarcity was something that could happen, but that’s why people developed food preservation technologies and techniques that you can assemble with nothing more than accumulated oral tradition and some common sense. Tie a haunch of meat down to some stones and toss it down to the bottom of a cold lake. That meat will keep for months, longer if the lake freezes over. It’ll be gamy as hell, but you won’t starve—and this is a pretty typical solution in the toolkit of prehistoric humans from Northern regions. Drying, salting (sometimes using methods that would squick you—one branch of my ancestors comes from a culture that used to preserve acorns by, kid you not, burying them in a corner of the home and urinating over the cache), chemical preservation, favoring foods that store well long-term in the first place, fermentation, and a flexible diet are all standard knowledge.
In the American Southwest (a hot, harsh, dry, and ecologically poor climate), Pueblo people and many others used to rely on the seasonal abundance of Mormon crickets for protein. You can gather eighteen pounds of them an hour when they pass through, basically just by walking around and picking up bugs. The nutritional profile beats the hell out of any mammal meat, and they can be preserved like anything else. Think about that for a second—one person, in one hour, can provide enough of these bugs to feed an entire village for a day, or their own household for weeks (and that’s without preservation). It’s not desperation; it’s a sound food-gathering strategy, and a lot more palatable when you don’t come from a culture used to thinking of insects as a culinary taboo.
Starving to death is more of an issue for low-tech pastoralists and agriculturalists—people who use just a small fraction of the available edible resources to support populations that wouldn’t be able to forage on the available resources. The relationship of effort to output for them is linear: work your farm harder, get more food in proportion—and you need to run a surplus every year in most cases, because there is non-negotiable downtime during which it’s going to be hard to switch to another food source (and even if you do, you’ll be competing with your neighbors for it).
In my own case, I’ve taken part in a family of five supplying themselves with only a few culturally specific dietary staples (powdered milk, spices, flour, rice—things we could easily have done without had they not been available), doing most of their food production by just going out and getting it somewhere within a mile of home. Clams, squid, and oysters were for storing (done with a freezer or by canning with water and salt) and cooking up into dishes we could eat for the rest of the month; small fish were gathered day by day, large fish stored (one salmon or sturgeon can feed five people for over a month when you have a freezer), crabs and similar gathered on a case-by-case basis. I personally wasn’t fond of frog legs, but a nearby pond kept up with a whole lot of demand for frogs in my family and others. We never bothered with anything like deer or bird hunting, but we’d gather berries, tree fruits (apple, plum, pear) and mushrooms, grow garden veggies, and basically just keep ourselves supplied.
I’m not saying everyone on Earth could switch back today—heck no. A whole lot of people would starve to death after destroying the ecosystems they need. But my ancestors lived in that place for thousands of years and starving to death was not a common experience among them, because they weren’t used to the population densities that only come with intensive agriculture. And there are people descended from foragers of even more remote and desolate climes—some of them STILL living that way—who can say the same thing.
Then what limited the growth of forager peoples so substantially? There had to be a mechanism to prevent them from exceeding their region’s carrying capacity. If a tribe of 50 people grew at a rate of 1% a year for 2000 years, there would be about 22 billion people in it. Clearly that didn’t happen; in fact there have been massive die-offs from starvation due to cyclical climate change, or to resource warfare (sometimes fought to extinction) between neighboring tribes.
I am so glad you asked, because the answer to your question reveals a fundamental misapprehension you have about forager societies and indeed, the structure and values of ancestral human cultures.
The fact is that forager populations don’t grow as fast as you think in the first place, and that across human cultures still living at or near forager methods of organization, there are many ways to directly and indirectly control population.
It starts with biology. Forager women reach menarche later, meaning they’re not fertile until later in life. Why? Largely, it’s that they tend to have much lower body fat percentages due to diet and the constant exercise of being on the move, and that’s critical for sustaining a pregnancy, or even ovulating in the first place once you’ve reached the (much higher) age where you can do that. Spontaneous abortions or resorption of the fetus are rather common. Women in an industrial-farming culture attain menarche quite a bit earlier and are more likely to be fertile throughout their active years—it only looks normal to you because it’s what you’re close to. So right out of the gate, forager women are less likely to get pregnant, and less likely to stay that way if they do.
Next biological filter: breastfeeding. Forager women don’t wean their children onto bottles and then onto solid food the way you experienced growing up. Breastfeeding is the sole means for a much longer period, and it’s undertaken constantly throughout the day—sleeping with the baby, carrying them around during the daily routine. It goes on for years at a time even after the child is eating solid food. This causes the body to suppress ovulation—meaning that long after you’re technically able to get pregnant again, the body won’t devote resources to it. All the hormonal and resource-delivery cues in your body point to an active child still very much in need of milk! Not only that, but it’s routine in many such societies for women to trade off breastfeeding duty with one another’s children—the more kids there are, the more likely it is that every woman in the proximate social group will have moderately suppressed fertility. It’s a weak effect, but it’s enough to lengthen the birth interval considerably. In the US, a woman can have a baby just about every year—for modern-day foragers, the birth interval is often two to five years wide. It’s harder to get pregnant, and once you do, the kids come more slowly.
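To see how much the birth interval alone matters, a back-of-the-envelope sketch (the 20-year reproductive span is an assumed round number for illustration, not a figure from the comment):

    # Upper bound on births over an assumed ~20-year reproductive span.
    reproductive_span_years = 20

    for interval_years in (1, 2, 5):  # birth intervals, per the comment above
        max_births = reproductive_span_years // interval_years
        print(f"{interval_years}-year interval: at most ~{max_births} births")
    # 1-year interval: ~20 births; 5-year interval: ~4 births, which is close
    # to replacement once childhood mortality is taken into account.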
The next layer is direct means of abortion. In the US that tends to be pretty traumatic if it’s not performed by a medical specialist. In some cases it still is for forager women—the toolkit of abortives across all human cultures is very wide. Midwives and herbalists often have access to minimally-invasive methods, but they also often have painful or dangerous ones. What you won’t find is many that are truly ineffective. Methods range from the unpleasant (direct insertion of some substance to cause vaginal bleeding and fetal rejection), to the taxing or dangerous (do hard work, lift heavy objects, jump from a high place) to fasting and ingestible drugs that can induce an abortion or just raise the likelihood of miscarriage.
The last layer is infanticide (and yes, we have this too, though it’s a deprecated behavior). In all cultures that practice it, it’s considered a method of last resort, and it’s usually done by people other than the mother, quickly and quietly. Forager cultures are used to having to do this from time to time, but it’s still a rare event—certainly not a matter of routine expedience.
The point I’m making is that population growth unto itself is not a goal or a value of forager societies like those every human being on earth is descended from (and which some still occupy today). Growth, as an ideological goal, is a non-starter for people living this way. Too many mouths to feed means you undercut the abundance of your lifestyle (and yes, it truly is abundance most of the time, not a desperate Malthusian war of all against all), and forager lives tend to be pretty good on the whole, filled with communitas and leisure and recreation aplenty as long as everybody meets a modest commitment to generating food and the supporting activities of everyday life. I’m not making it out to be paradise; this is just really what it’s like, day to day, to live in a small band of mostly close relatives and friends gathering food from what’s available in the environment.
I’ve heard claims like these several times, but this situation where individuals voluntarily limit their reproduction for the common good can’t possibly be a stable equilibrium. It faces a coordination problem, more specifically a tragedy of the commons. As soon as even a small minority of the forager population starts cheating and reproducing above the replacement rate (by evolving either cultural memes or hereditary philoprogenitive behaviors that motivate them to do so), in a few generations their exponential growth will completely swamp everyone else. The time scales on which forager societies have existed are certainly more than enough for this process to have taken place with certainty.
In order for such an equilibrium to be stable, there would have to exist some fantastically powerful group selection mechanism that operates on the level of the whole species. I find this strikingly implausible, and to my knowledge, nobody has ever proposed how something like that might work.
It happened in the real world, ergo the issue lies with your understanding of the system we’re talking about and not with its inability to conform to your model.
You’re looking at this backwards. This is the reproductive context in which humanity evolved, and the Malthus-driven upward spiral of population and competition is the result of comparatively recent cultural shifts, brought on by changing lifestyles that made it viable to do that. You don’t need to invoke group selection in the form you’re thinking of—the cultural “mutations” you’re positing can’t gain a foothold until some branch of humanity has access to a lifestyle that makes it advantageous to defect like that. Forager societies don’t have that incentive, because if they overtax their resource base here and now they have to move, and for most of human prehistory (and the modern history of hunter-gatherers) population densities were low enough that this gave the affected area time to recover, so when someone came back, things were fine again. A long-term climatic shift alters the range of viable habitats near you, but it takes something pretty darn catastrophic (more than just a seasonal or decadal shift) to entirely render a region uninhabitable to a group of size n.
The biggest filters to population growth in this system are entirely passive ones dictated by biology and resources—the active ones are secondary measures, and they’re undertaken because in a system like this, the collective good and the individual good are inextricably linked. It was a stable equilibrium for most of our evolution, and it only broke when and where agriculture became a viable option that DIDN’T immediately overtax the environment.
That’s a state of affairs that took most of human existence to come into being.
You assert these things very confidently, but without any evidence. How exactly do we know that this state of affairs existed in human prehistory?
You say:
This, however, provides no answer to the question of why individuals and small groups wouldn’t defect, regardless of the subsequent collective consequences of such defection. You deny that you postulate group selection, but you keep talking in the very strong language of group selection. Earlier you asserted that “population growth unto itself is not a goal or a value of forager societies,” and now you say that “[f]orager societies don’t have that incentive.” How can a society, i.e. a group, have “values” and “incentives,” if you’re not talking about group selection? And if you are, then you need to answer the standard objection to arguments from group selection, i.e. how such group “incentives” can withstand individual defection.
I have no problem with group selection in principle—if you think you have a valid group-selectionist argument that invalidates my objections, I’d be extremely curious to hear it. But you keep contradicting yourself when you deny that you’re making such an argument while at the same time making strong and explicit group-selectionist assertions.
Archaeological evidence regarding the health and population density of human beings and their dietary habits. Inference from surviving examples. The null hypothesis, that we didn’t start with agriculture and therefore must have been hunter-gatherers for most of our existence as a species. The observation that the traits generally associated with the Malthusian trap are common experiences of agricultural societies and dependent upon conditions that don’t obtain in predominantly and purely hunter-gatherer societies.
They might defect, but it’d gain them nothing. Their cultural toolkits and food-gathering strategies were dependent upon group work at a set quota which it was maladaptive to under- or overreach. An individual can’t survive for long like this compared to a smallish group; a larger group will split when it gets too big for an area, and a big group can’t sustainably form.
The answer to this lies in refuting the following:
“A small minority of the forager population” has to be taken in terms of each population group, and those are small. A small percentage of a given group might be just one or two people every handful of generations, here. A social umbrella-group of 150 scattered into bands of 10-50 throughout an area, versus just one or two people? Where’s the exponential payoff? The absolute numbers are too low to support it, and the defectors are stuck with the cultural biases and methodologies they know. They can decide to get greedy, but they’re outnumbered by the whole tribe, who are more than willing to provide censure or other forms of costly social signalling as a means of punishing defectors. They don’t even have to kill the defectors or drive them out; the defectors are critically dependent on the group for their lifestyle. The alternative will be unappealing in all but a vanishing minority of cases.
You need the kind of population densities agriculture allows to start getting a really noticeable effect. It’s not to say people don’t ever become tempted to defect, but it’s seldom a beneficial decision. And many cultures, such as the San ones in South Africa, have cultural mechanisms for ensuring nobody’s ego gets too big for their britches, so to speak. Teasing and ribbing in place of praise when someone gets a big head about their accomplishments, passive reminders that they need the group more than they individually benefit it.
This isn’t so much about group selection, as it is about all the individuals having their raft tied to the same ship—a group big enough to provide the necessities of life, which also provides a lot of hedonic reinforcement for maintaining that state of affairs, and a lot of non-coercive negative signalling for noncompliance, coupled with the much more coercive but morally neutral threat presented by trying to make a living in this place all by yourself.
If you break a leg in a small group, the medical practitioner splints it and everyone keeps feeding you. If you do that by yourself, it probably never heals right and the next leopard to come along finds you easy pickings. That’s what defection buys you in the ancestral environment.
Say there are two kinds of forager groups, one which limits reproduction of its members by various means, and another that does not limit reproduction and instead constantly grows and splits and invades other groups’ territories if needed. Naively I would expect that the latter kind of group would tend to drive the former kind out of existence. Why didn’t this happen?
This isn’t necessarily evidence against a Malthusian equilibrium. It could be that the subsequent farmer lifestyle enabled survival for people with much poorer health and physical fitness, thus lowering the average health and fitness of those who managed to survive in the Malthusian equilibrium.
Can you give a reference that specifically discusses how a non-Malthusian situation of the foragers can be inferred from the existing archaeological evidence?
This is not true. Humans are (more or less) the only species that practices agriculture, but the Malthusian trap happens to non-human animals too. As long as reproduction above the replacement rate is possible, it will happen until the resource limit is reached. (Admittedly, for animals that aren’t apex predators, the situation is more complicated due to the predator-prey dynamics.)
Regarding the foragers’ supposed cooperation on keeping the population stable, I honestly don’t see how what you write makes sense, for at least two reasons:
The defectors would not need to reproduce in blatantly extraordinary numbers. It would be enough to reproduce just slightly above the replacement rate, so slightly that it might be unnoticeable for all practical purposes. The exponential growth would nevertheless explode their population in not very many generations and lead to them overwhelming others; the sketch after the next point puts rough numbers on this. So even if we assume that blatantly excessive reproduction would be punished, it would still leave them more than enough leeway for “cheating.”
How did this punishment mechanism evolve, and how did it remain stable? You can postulate any group selection mechanism by assuming altruistic punishment against individuals who deviate from the supposed group-optimal behavior. But you can’t just assert that such a mechanism must have existed because otherwise there would have been defection.
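A sketch of the first point above, to show how quickly an “unnoticeable” surplus compounds; the starting numbers and the per-generation edge are arbitrary illustrative choices:

    # Two lineages in a band of 150: 2 "defectors" with a slight reproductive
    # surplus per generation, 148 others exactly at replacement.
    defectors, others = 2.0, 148.0
    generations = 0
    while defectors < others:
        defectors *= 1.05  # 5% per generation, roughly 0.2% per year: hard to notice
        generations += 1
    print(generations)  # 89 generations, ~2,200 years at 25 years per generation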
Moreover, you are now talking about group selection with altruistic punishment. There’s nothing inherently impossible or absurd about that, but these are very strong and highly controversial claims, which you are asserting in a confident and authoritative manner as if they were well-known or obvious.
I’d like to remind you that the ancestral environment was not completely stable, and no one is disputing that exponentially-expansive Malthusian agriculture happened. The question is why it took as long as it did, not why it was possible at all.
Estimates of world population growth come from:
http://faculty.plattsburgh.edu/david.curry/worldpop.htm
Essentially, for our first 2 million years of existence, human population worldwide went from about 10,000 to 4 million. Given that virtually all major models of long-run human population converge very closely, and they all assume a relatively steady growth rate, we’re talking about a doubling period of roughly 250,000 years.
Malthus’ estimates assume a doubling period of 25 years, or a single human generation. The difference is a factor of roughly 10,000. World population simply did not grow as fast as you’re assuming, and humanity did not start outstripping local carrying capacities in a major, systematic way until we’d developed technologies that allowed us to make those sorts of population growth leaps.
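The doubling-period arithmetic behind those figures:

    import math

    # World population: ~10,000 to ~4,000,000 over ~2,000,000 years.
    doublings = math.log2(4_000_000 / 10_000)   # ~8.6 doublings
    years_per_doubling = 2_000_000 / doublings  # ~230,000 years per doubling
    print(years_per_doubling / 25)              # ~9,300x Malthus's 25-year doubling

which is consistent with the rough 250,000-year and factor-of-10,000 figures above.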
According to Michael Kremer in “Population Growth and Technological Change: One Million B.C. to 1990”, the base rate of technological change in human societies scales in proportion to population—small population, slow technological change. This means very long inferential distances to the sorts of techniques and behaviors that make agriculture a viable prospect.
You need intermediate steps, in the form of settled horticulture or nomadic pastoralism, to really concentrate the population enough to have a chance at developing agriculture in the intensive sense. Those sorts of cultural developments took a long time to come into being, and it was a gradual process at that.
So, yes, it’s true that if you grow certain grasses and just harvest their seeds reliably, grinding them into a fine powder and mixing that with water and then heating the whole mixture somehow without actually burning it in your fire directly, you can produce a food source that will unlock access to population-doubling intervals closer to the Malthusian assumption of one doubling per generation.
But that is a series of nested behaviors, NONE of which is intuitively obvious by itself from the perspective of a forager in a world full of nothing but other foragers. Which is why the entire chain took a long, long time to develop, and why agriculture was invented just a few times throughout human history.
Termites, leafcutter ants, certain damselfish, ambrosia beetles, and certain marsh snails all practice agriculture. But yes, it’s certainly an uncommon behavior.
What if reproduction above the replacement rate isn’t possible for the period of human evolution we’re talking about? What if the human population simply wasn’t reproducing fast enough for most of prehistory to reach the resource limit? Those are the conditions I’m suggesting here—that reaching local resource limits was not the norm for much of our evolution, due to our inherent long gestation times and strong K-selection, the inherent metabolic requirements for fertility taking a long time to satisfy compared to modern conditions, the birth interval being very wide compared to Malthusian assumptions, and the techniques of food acquisition being of necessity limited by the ease of satisfying everybody’s requirements (if everyone has a full tummy and all their kids do too, going out and gathering MORE food at the expense of one’s kinsmen won’t do you any good anyway).
What you get is abundance—there’s room to grow, but we can only do it so fast, and when we start to reach the point where we might overtax our resource base, we’ve moved on and there weren’t enough of us using it in the first place to compromise it.
That kind of statistical hackery might work in a large population, but not very well in a small one. In a group of 100 humans, ANY population gain is noticeable.
Except all evidence suggests a population explosion wasn’t possible, even if you assume humans reproduced at the fastest allowable rate. Populations doubled in a quarter-million years, not 25.
It didn’t evolve genetically, it’s a cultural punishment I’m talking about. Ju/’hoansi hunters are taken down a notch whenever they make a kill. Certain Australian aboriginal groups have meat-sharing customs where one hunter goes out and gets a kangaroo (say), and his share of the meat is the intestines or penis—the choicer cuts get distributed according to a set of other rules. Except, then people invite the hunter over to dinner; he’s not forced to actually eat crow every time he succeeds, but he’s also socially aware that he depends upon the others for it (and he gets to receive a choicer share when some other hunter makes a kill).
I don’t understand your argument here at all. Earlier you said that growth to the Malthusian limit was prevented by a cooperative strategy of restraining reproduction. Now you say that lack of food production technology was limiting population growth. But if foragers did breed up to the limit where food became the limiting resource, that’s by definition a Malthusian equilibrium.
You are also presenting a strawman caricature of Malthus. His claim about a 25-year doubling period refers to agricultural societies with an ample supply of land, such as existed in North America of his day. He presents it as an empirical finding. When he discusses foragers, he notes that they’ll reproduce to the point where they run against the limited food supply available from foraging, which given the low supply of food relative to farming, means a much less dense population.
Some of his discussions of foragers are actually quite interesting. He notes that among the North American hunter-gatherers, resource limitations lead to constant disputes and warfare. He also cites accounts of European explorers’ contacts with forager peoples that seem to have been on the Malthusian limit.
It doesn’t matter—it still needs to be explained. Humans don’t just magically develop cultural norms that solve collective action problems.
What I said was that growth to the point of constant warfare, competition and struggle for enough food to subsist wasn’t an accurate picture of ancestral forager lifestyles.
He also says that smallpox was endemic among the Indians of all these cultures. Smallpox originated in Eurasia, thrived among farmers, and Native Americans had no immunity to it. His example of the squalor and disease these people lived in is an example of the conditions they were subjected to at the hands of an invading power with novel biological agents their immune systems simply weren’t adapted to handle. The nastiest conflicts came only after that contact.
Warfare among Northwest Coast Natives, prior to colonization, was usually over petty disputes (that is, interpersonal ones) between peoples who had long-standing trade and treaty relationships, and only occasionally over resources (usually slaves, and the institution of slavery as it was practiced here does not compare readily with slavery as it was practiced by agriculturalists in Eurasia and Africa). The bloodier wars of the inland northwest are similarly a historical novelty, unparalleled in scope or stakes until the ravages of introduced diseases and the dislocation of various tribes by white invaders into territories they’d never been in competition for caused clashes that simply hadn’t occurred at such a level of intensity prior to that point. The formation of reservations only exacerbated this—we’re talking about groups with age-old rivalries who had never seen fit to exterminate one another or conquer one another’s lands, but who would happily send a war canoe full of men to go steal things because of a petty vendetta between two people that started long ago.
This isn’t war of extermination. Don’t get me wrong, it’s violent, people die, the stakes are real, but it’s not a zero-sum, winner-take-all competition for survival. A direct translation out of Old Chinook from Franz Boas’ ethnography, regarding the rules of warfare, should make this clearer:
“Before the people go to war they sing. If one of them sees blood, he will be killed in battle. When two see blood, they will be killed. They finish their singing. When they sing, two long planks are put down parallel to each other. All the warriors sing. They kneel [on the planks]. Now they go to war and fight. When people of both parties have been killed, they stop. After some time the two parties exchange presents and make peace. When a feud has not yet been settled, they marry a woman to a man of the other town and they make peace.”
The fight ends when both sides have taken casualties. The opposing sides exchange gifts and make peace. They resolve outstanding feuds by diplomatic marriage. This is the Chinook idea of war, the way it was practiced with all but their very worst enemies (who lived rather a long way from Chinook territory—the Quileute weren’t exactly next door given the pace of travel in those days, and even then the wars between them were not genocidal in intent). This is completely different from war as most Eurasian-descended cultures knew it. And it was typical of forager warfare in North America before Columbus showed up.
Malthus, in looking at the conditions of North American natives during the 19th century, reports on the dire conditions of a people devastated by introduced diseases, direct conquest by white settlers, and the disruption of their social fabric and ways of life. Whole culture groups were pushed beyond the breaking point and very much outside their typical context, and most of their actual problems were direct effects of colonization.
Some of the accounts presented by Malthus were given by very early explorers and adventurers who ended up deep in unexplored territory, far ahead of European conquest and colonization. For example, the one by Cabeza de Vaca would be circa 1530.
The only way these societies could have already been devastated is if epidemics had ravaged the whole continent immediately in the first decades after the first Europeans landed, ahead of any European contact with the inland peoples. I don’t know enough about the relevant history to know how plausible this is, but even if it happened, there are two problems with your claim:
Diseases wouldn’t cause famine, at least in the long run. These early explorers describe peoples who had problems making ends meet during bad seasons due to insufficient food, and who fought bitterly over the existing limited supply. If the population had already been thinned down by disease by the time they came, we’d expect, if anything, the per capita food supply from foraging to be greater than before.
If even the earliest accounts are of devastated societies, then how do we know anything about the better life they led before that? Where does this information come from? You cite an ethnography by Boas, who was born in 1858, as authoritative, but dismiss a collection of far older accounts compiled by Malthus in the early 19th century.
Smallpox emerged in the Old World around 10,000 BC and is believed to have originated via cattle farming. It reached very high concentrations in Europe and became a common plague there; it was spread around the world to peoples who had never encountered it by European exploration and conquest. It and other Old World disease spread very rapidly among American native populations, rendering whole cultures extinct and reducing others to scattered survivors often incapable of rebuilding. The total population of the Americas lost to European diseases after the arrival of Columbus and Cortez is estimated at 90 to 95 percent.
Given that many Native nations were at least modestly dependent on agriculture (the Iroquois, Navajo, Aztecs, Incas, Mississippians—indeed, most of the well-known groups), such population losses coming so quickly are nothing short of catastrophic. Most of your resource base collapses, because one person is going to have to work MUCH harder to provide enough food for themselves—fields go unplanted, vegetables don’t get tended, wild game is much more dangerous to hunt by oneself, and one cannot expect any assistance with gathering. Even a small number of people used to an agriculture-enriched lifestyle are going to be hit much harder.
It’s also worth noting that Cabeza de Vaca actually described the Coahuiltecs as a healthy and prosperous people—and ant eggs, lizards, and so on were just normal parts of their diet. Ant eggs in particular are STILL a cultural delicacy among the Latino groups descended from the Coahuiltecs (escamole taco, anyone?). Diet adapts to local circumstances.
That is precisely what happened. One infected slave from Spanish-held Cuba is believed to be the Patient Zero that transmitted an infection which would go on to wipe out about fifty percent of the Aztec population. Hernando de Soto, exploring the southeast, encountered many towns and villages abandoned just two years prior when most of their inhabitants died of the plagues. Isolated survivors often just abandoned their homes outright, since in many cases a handful of people or even a single survivor were all that was left out of a village of hundreds or thousands. Neighbors who showed up, unaware of what happened, might contract disease from the corpses in some cases, or simply welcome in the survivors who’d start the cycle anew. North America had extensive trade routes linking all major regions, from coast to coast. Foot and boat traffic carried diseases quite far from their initial outbreak sites.
Because they’re not all dead, and they left their own records of what happened, and there are records of contact with them in much better conditions, and there are still plenty of Native people alive today, who often know rather more about said records of their lives before than the typical Euro-American? And because it’s generally acknowledged within anthropological, archaeological, and historical fields now that modern research bears out a picture of generally healthy, sustainable populations for most of the foragers of the Americas? And quite large, complex societies that were generally not recognized as such by early Anglo scholars into the matter?
(Malthus seriously misrepresents Cabeza de Vaca’s case—the Floridians were in a bad way, but they were also right next door to early Spanish conquest; his accounts of the Coahuiltecs of coastal and inland Texas describe them as a healthy and prosperous people... and their descendants STILL enjoy ant eggs as a dietary item; you don’t have to be desperate to eat insects, and many human groups actively enjoy it.)
Boas actually travelled to the civilizations he wrote about, lived among them, recorded their oral traditions and analyzed their languages, and investigated their history and their environmental circumstances. For many people, especially in the Northwest, the far North, and other relatively late-contacted areas, these events occurred within the living memory of their elders.
Malthus wasn’t an expert on Native American civilizations or history, and basically went with the prevailing account available at the time. He relied on a consensus that wasn’t yet well understood to be false. So I reject Malthus’ picture of pre-Columbian America for the same reason I reject Lysenko’s account of evolution. The difference is that Malthus was an influential thinker within the development of Western thought, and his role means that a lot of people who agree with the insights he did make are unwittingly buying into cached arguments about related subjects (often ones that don’t support his case) which hadn’t yet been discovered to be false when Malthus wrote in the first place.
Scholarship in the field since Malthus’ time has seriously changed the outlook—Charles C. Mann and Jared Diamond are good, accessible sources for a summary overview (“1491” and “Guns, Germs, and Steel”). If I seem to be vague, it’s mostly because this is domain-specific knowledge that’s not widely understood outside the domain, but as a domain insider it’s fairly basic stuff.
How exactly does this modern research reconstruct the life of American foragers centuries ago, and based on what evidence? Could you cite some of this work? (I’d like to see the original work that presumably explains its methodology rigorously, not popular summaries.)
I also note that you haven’t answered Wei Dai’s question.
Regarding Malthus and de Vaca, you say:
Here is a translation of de Vaca’s original account:
http://www.pbs.org/weta/thewest/resources/archives/one/cabeza.htm
On closer look, it turns out that de Vaca’s description cited by Malthus actually refers to a people from southeastern Texas, not Florida. So while Malthus apparently mixed up the location by accident, his summary is otherwise accurate. Your above claims are therefore completely incorrect—the description is in fact of a people from Texas, living very far from the boundary of Spanish conquest at the time.
For reference, I quote de Vaca’s account at length (all emphasis mine):
Castillo and Estevanico went inland to the Iguaces. [...] Their principal food are two or three kinds of roots, which they hunt for all over the land; they are very unhealthy, inflating, and it takes two days to roast them. Many are very bitter, and with all that they are gathered with difficulty. But those people are so much exposed to starvation that these roots are to them indispensable and they walk two and three leagues to obtain them. Now and then they kill deer and at times get a fish, but this is so little and their hunger so great that they eat spiders and ant eggs, worms, lizards and salamanders and serpents, also vipers the bite of which is deadly. They swallow earth and wood, and all they can get, the dung of deer and more things I do not mention; and I verily believe, from what I saw, that if there were any stones in the country they would eat them also. They preserve the bones of the fish they eat, of snakes and other animals, to pulverize them and eat the powder. [...] Their best times are when “tunas” (prickly pears) are ripe, because then they have plenty to eat and spend the time in dancing and eating day and night. [...] While with them it happened many times that we were three or four days without food. Then, in order to cheer us, they would tell us not to despair, since we would have tunas very soon and eat much and drink their juice and get big stomachs and be merry, contented and without hunger. But from the day they said it to the season of the tunas there would still elapse five or six months, and we had to wait that long.
Also, regarding this:
Earlier you claimed that the native population of the entire American continent was devastated by epidemics immediately after the first European contacts in the late 15th/early 16th century, so that even the accounts of very early European explorers who traveled deep into the continent ahead of European colonization do not present an accurate picture of the native foragers’ good life they had lived before that. But now you claim that in the late 19th century, this good life was still within living memory for some of them.
It seems like you’re accepting or discounting evidence selectively. I can’t believe that all those accounts cited by Malthus refer to societies devastated by epidemics ahead of European contact, while on the other hand the pre-epidemic good times were still within living memory for the people studied by Boas centuries later.
Lysenko was motivated by politics. Boas was motivated by politics.
Physics improves, but history deteriorates. The writers closest to events give us the most accurate picture, while later writers merely add political spin. Since 1830, history has suffered increasingly drastic, frequent, and outrageous politically motivated rewrites, and has become more and more subject to a single monolithic political view, uniformly applied to all history books written in a particular period.
If you read old histories, they explain that they know such and such because of such and such. If you read later histories, then when they disagree with older histories and you check the evidence cited by the older histories, you usually find that the newer histories are making stuff up. The older history says X said Y, and quotes him. The newer history says that X said B, and fails to quote him, or quotes him out of context, or simply asserts B without any explanation of how they could possibly know B.
Both Clark and Tainter (The Collapse of Complex Societies) disagree with this claim as stated. A massive reduction in the population means that the survivors get increased per-capita wealth, because the survivors move way back along the diminishing-marginal-returns curve and now have more low-hanging fruit (sometimes literally); a toy sketch after the quotes below illustrates the point. In fact, Tainter argues that complexity often collapses because collapse is the only way to increase per-capita wealth. Hunter-gatherers spend much less time per calorie than advanced agriculturalists do, e.g.:
Or
(Quotes brought to you by my Evernote; it’s a pain in the ass to excerpt all the important bits from a book, but it certainly pays off later if you want to cite it for various assertions.)
Some quotes from Clark’s A Farewell to Alms (he also covers the very high age of marriage in England as one way England held down population growth):
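To make the diminishing-returns point concrete, here is a toy sketch in Python; the Cobb-Douglas production form and all the numbers are my own illustrative assumptions, not anything Clark or Tainter actually write down:

    # Toy model: output Y = A * L**alpha with 0 < alpha < 1 (diminishing
    # marginal returns to labor), so per-capita output Y/L falls as the
    # population L grows, and rises again if the population collapses.
    A, alpha = 100.0, 0.5

    for L in (10_000, 5_000):  # population before and after a collapse
        per_capita = A * L ** alpha / L
        print(f"L = {L}: per-capita output = {per_capita:.2f}")

    # Halving L multiplies per-capita output by 2**(1 - alpha), about 1.41
    # here: the survivors are better off, which is the argument above.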
Just to be clear, and so everyone knows where the goalposts are: as per the definition here: http://en.wikipedia.org/wiki/Hunter-gatherer , a forager society relies principally or entirely on wild-gathered food sources. Modern examples include the Pila Nguru, the Sentinelese of the Andaman Islands, the Pirahã, the Nukak, the Inuit until the mid-20th century, the Hadza and San of southern Africa, and others.
To those not deeply familiar with anthropology, this can lead to some counterintuitive cases. The Yanomamo, who depend mainly on domesticated bananas supplemented by hunting and fishing, aren’t foragers in the strict sense. The modern Maya, and many Native American groups in general, weren’t pure foragers. The Salish and Chinook peoples of the Pacific Northwest of the United States were sedentary foragers.
The Polynesians and Chinese of those periods were not foragers—both societies practiced extensive agriculture supplemented by hunting and gathering, as in preindustrial Europe.
I never said they were foragers; I thought the quotes were interesting from the controlling population perspective.
My apologies—I skimmed rather than read in detail and missed the purpose of your comment. Reply left up anyway, since it may clarify terminology and definitions re: foragers for anyone who happens upon the thread later. Thank you for clarifying!
Well that is certainly a lot for me to learn more about. Sorry I missed this post. How much of this has been directly observed in modern forager societies versus inferences from archaeology?
There are a lot of other studies, on differences in passive fertility in forager groups, that bear out the cross-cultural applicability of the San studies as well. Forgot to add that.
Studies of forager groups on several continents have come to the same basic conclusions around that. Some of those findings are summarized here: http://books.google.com/books?id=grrA421tRNkC&pg=PA431&lpg=PA431&dq=foragers+and+menarche&source=bl&ots=WNuoQO-gYV&sig=h1ahBo5ApBv4Q9uYxD47pM_whNM&hl=en&ei=NtBNTpzkFeOssALYip3rBg&sa=X&oi=book_result&ct=result&resnum=3&ved=0CDAQ6AEwAg#v=onepage&q=foragers%20and%20menarche&f=false
The bits about breastfeeding and the other biological limiting factors (the indirect controls, basically) came to light during Richard Lee’s fieldwork with the San and Ju/’hoansi peoples of southern Africa in the 1960s.
The bit about active measures is available if you peruse the anthropological literature on the subject (I don’t have a specific citation in mind), and the sort of thing covered in introductory classes to the field—it’s common knowledge within that domain.
As to resource warfare, it’s a non-starter for most foragers. You walk away, or you strike an agreement about the use of lands. There are conflicts anyway, but they’re infrequent—the incentive isn’t present to justify a bloody battle most of the time. And it doesn’t come up as often as you think, either, because as I’ve stated, forager populations don’t grow as quickly (they tend to stay around carrying capacity when different groups are summed over a given area) and indeed, devote active effort to keeping it that way, which supplements the tremendous passive biases in favor of slow growth.
Where it does come into prominence is with low-tech agriculturalists, pastoralists, and horticulturalists. Those people have something to fight over: a stationary, vulnerable, or scarce land base that rewards their effort with high population growth and gives an incentive to expand or lock down an area for their exclusive use.
So in a forager society, population growth is managed how, specifically? Abstinence?
See my other reply, the long one, which goes into some detail answering that question.
Sorry, I don’t see where you do. Food preservation techniques, migratory habits, gathering crabs or berries doesn’t tell me anything at all about how people avoided population growth.
http://lesswrong.com/lw/6vq/on_the_unpopularity_of_cryonics_life_sucks_but_at/4ny0 Right here.
It turns out that homelessness, in and of itself, approximately quadruples one’s mortality risk: study pointer:
I don’t know how you’re using the word “easily”, then. Do you classify all forms of social interaction as easy?
Well, “easy” is clearly a subjective judgment, and admittedly, I have no relevant personal experience. However, it is evident that large numbers of people do manage to survive on charity and the welfare state without any employment, and many of them don’t seem to invest any special effort or talent in this endeavor.
In any case, my original arguments hold even if we consider only rich countries with strong welfare states, in which it really is easy, in every reasonable sense of the term, to survive by freeloading. These certainly serve as examples of societies where no work is necessary to obtain food, housing, medical care, and even some discretionary income, and yet status concerns still motivate the overwhelming majority of people to work hard.
This seems very difficult if you aren’t a member of a protected class. Can a young, healthy white male freeload easily?
I don’t know about race, but I did read a piece by a young man who viewed homelessness as a sort of urban camping. He didn’t use drugs and he didn’t beg—he found enough odd jobs.
Ten years ago I read a “news of the weird” story about a young homeless man in Silicon Valley. He earned something like $90K a year working as a junior programmer or some such occupation. He slept under a bridge, but had a bank account, mailbox, cell phone, laptop, and gym subscription. He worked out and showered at the gym every morning before work. He socked away lots of money and spent much of his free time surfing the internet at a coffee shop or other hangout. The reason the story got picked up is that his parents, or someone in his family, was trying to get him committed for psychiatric treatment. It’s bolder and more daring than most people would choose, but that behavior in and of itself doesn’t really sound crazy to me.
A long time friend of mine wrote an article for the New York Times about her boyfriend’s decision to become homeless.
I agree that we hate to admit how much of our lives revolves around zero-sum status competitions. Here human modification via genetic engineering, supplements, & advanced technologies provides a potential way out, right? That we don’t like the fact that our lives revolve around zero-sum status competitions implies that there’s motivation to self-modify in the direction of deriving fulfillment from other things.
Of course there’s little historical precedent for technological self-modification and so such hypotheticals involve a necessary element of speculation, but it’s not necessarily the case that things will remain as they always have been.
This is a very good point and one which I was thinking of bringing up in response to Yvain’s comment but had difficulty articulating; thanks.
Trouble is, once you go down that road, the ultimate destination is wireheading. This raises all sorts of difficult questions, to which I have no particularly interesting answers.
Though I know others feel differently (sometimes vehemently), aside from instrumental considerations (near guaranteed longevity & the welfare of others) I personally don’t mind being wireheaded.
My attitude is similar to the one that denisbider expresses here with some qualifications. In particular I don’t see the number of beings as so important and his last paragraph strikes me as sort of creepy.
I like this framing (I’ve almost never thought about this topic): money as status, as a measure of the socially enforced right to win competitions for resources, but with a baseline of fairness, where you can still get stuff, just less than high-status individuals (or organisations) can. Rights-based bargaining power rather than a measure of usefulness.
Wah. Neat conceptualization, and much easier for me to wrap my head around than my previous non-models. Thanks!
This seems like a good place to point out the US-centrism issue, as mentioned here: http://lesswrong.com/r/discussion/lw/6qr/lw_systemic_bias_us_centrism/ . Many countries do have safety nets that, while not enough for actual comfort at the current tech level, still make plain survival a non-issue, and to some degree provide higher things through institutions like public libraries, where you’ll often be able to access the internet.
Housing need not be as scarce as land, if regulatory permission for tall buildings and good transport networks exist. There is a lot of variation on this dimension already today. Automated mining, construction and cheap energy could make sizable individual apartments in tall buildings cheap, not to mention transport improvements like robocars.
I agree that the situation can be improved that way, though it’s arguable how much it runs up against the problem that packing people tightly together increases discomfort and severely lowers status. But even with optimistic assumptions, I think it’s still the case that housing can never become non-scarce the way food and clothing could (and to a large degree already have). There is in principle no limit to how cheaply mass-produced stuff can be cranked out, Moore’s-law-style, but this clearly can’t work anywhere near as effectively for housing.
I basically agree, and don’t mean to nitpick, but in the long run there are also virtual environments/augmented reality.
Another possibility that would reduce the effective cost of housing would be small scale distributed manufacturing (I’m thinking Drexler/Merkle nanotech here). That would mean that most goods would not need to travel, they would be “printed” locally. There are exceptions for goods which require uncommon atoms, which would still need transport. (To be a bit more explicit: I’m trying to weaken the “near other people” restriction. As is, a lot of what we exchange with other people is information, and we ship that around globally today. Goods are another major category, which I commented on above. Physical contact is a third category, but a lot of that is limited to family members in the same household anyway.)
One factor which hasn’t been directly discussed, is that housing, while partially designed to protect us from weather, is also partially to protect us from other people. The former function can be reduced in cost by better or cheaper materials. The latter is to some extent a zero-sum game. (There is a whole range of interacting social issues involved. Some of the protection is from thieves, some from obnoxious neighbors, some from intruding authorities—and these groups differ greatly in their ability to bring greater resources to bear, and also differ in their interest in doing so.)
Do you know of a forum where this could be discussed productively?
No, not really. Opportunities for good and insightful discussion open up from time to time in all kinds of places, and sometimes particular forums can have especially good streaks, but all of this is transient. I don’t know any places that are particularly good these days.
Could this be solved by setting up a new forum and being sufficiently selective about whom to let in (e.g. only sufficiently high-quality and sufficiently non-ideological thinkers, as vetted by some local aristocracy based on comment history elsewhere), or is there some other limiting factor?
I would love there to be a place suitable for rational discussion of possibly outrageous political and otherwise ideologically charged ideas, even though I wouldn’t want it to be LessWrong and I wouldn’t want it to be directly associated with LessWrong.
I’d love to have such a place too, and based on my off-line conversations with some people here, I think there are also others who would. So maybe it wouldn’t be a bad idea to set up a new forum or mailing list, perhaps even one without public visibility. I have no idea how well this would work in practice—there are certainly many failure modes imaginable—but it might be worth trying.
Here’s one thing I’m worried about. What if discussion between those of widely varying ideological background assumptions is just intrinsically unproductive because they can never take basic concepts for granted? Even the best thinkers seem to mostly have strongly, stably different ideological outlooks. You could pre-select for ideology and possibly have multiple groups, but that has its own downsides.
That would mean a collection of echo chambers which are worse than useless.
Strongly agree. Mailing lists are easy but damn have I become addicted to nested comments and upvoting/downvoting (automatic moderation!).
I know what you mean. I follow along with the decision theory list, but it is almost painful being limited to email format!
(Yup. Can’t downvote Stuart. Frustrating. And the impact of saying so aloud without anonymity is not quite what I want to enact.)
Do you or does anyone else know of free online services, analogous to mailing lists, that allow such nesting and upvoting/downvoting?
Reddit. If you create your own subreddit I know you get to moderate it, but I don’t know how well it would work for a deliberately exclusive community.
Presumably it’d be easy to clone LW’s git code, change the logos, ask SingInst to host it, and put it behind a locked gate. https://github.com/tricycle/lesswrong . Louie would be the SingInst guy to ask I think. Besides that, no.
Maybe make it hidden to the public (like Koala Wallop’s Octagon) and invitation-only, with any given member limited to one invitation per hundred upvotes they’ve received?
Or… a nested thing, maybe. The onion has as many layers as the Grand High Administrator deigns to create, and said administrator can initiate anyone to any desired depth, or revoke such access, at will. A given user can issue one invite to their current layer per 100 net upvotes they have received on the layer in question; when someone receives 100 net downvotes on a given layer, they are banned from that layer until reinvited (at which point they start from scratch). Invites to a given layer can only be directed to users who have already been initiated to the layer immediately outside that one.
There might also be a branching structure; nobody knows for sure until they find parallel layers.
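For concreteness, here is a minimal sketch of that onion scheme in Python. Every class name, threshold, and data structure here is an illustrative assumption of mine, not a description of any existing system:

    from collections import defaultdict

    INVITE_COST = 100     # net upvotes on a layer needed to earn one invite there
    BAN_THRESHOLD = -100  # net downvotes on a layer that trigger a ban from it

    class Onion:
        def __init__(self):
            self.members = defaultdict(set)       # layer index -> set of user ids
            self.karma = defaultdict(int)         # (user, layer) -> net votes
            self.invites_used = defaultdict(int)  # (user, layer) -> invites spent

        def initiate(self, user, layer):
            """The Grand High Administrator may initiate anyone to any depth."""
            for l in range(layer + 1):            # membership in a layer implies
                self.members[l].add(user)         # membership in all outer layers

        def vote(self, user, layer, delta):
            self.karma[(user, layer)] += delta
            if self.karma[(user, layer)] <= BAN_THRESHOLD:
                self.members[layer].discard(user)        # banned from this layer,
                self.karma[(user, layer)] = 0            # and starts from scratch
                self.invites_used[(user, layer)] = 0     # if ever reinvited

        def invite(self, inviter, invitee, layer):
            """One invite per 100 net upvotes the inviter has on this layer."""
            if inviter not in self.members[layer]:
                raise PermissionError("inviter is not on this layer")
            if layer > 0 and invitee not in self.members[layer - 1]:
                raise PermissionError("invitee must already be one layer out")
            earned = self.karma[(inviter, layer)] // INVITE_COST
            if self.invites_used[(inviter, layer)] >= earned:
                raise PermissionError("no unspent invites on this layer")
            self.invites_used[(inviter, layer)] += 1
            self.members[layer].add(invitee)

A real implementation would need identity checks and safeguards against vote-gaming, but the data model really is this small.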
I am curious as to whether such a thing actually exists. Did you or anyone else here end up producing an exclusive, private, invite-only community in order to commune in high-signal political discussion?
I agree that the zero-sum character of status makes it unlikely that technology will shorten work hours (barring modification of humans).
I don’t see any reason why this should be true. Population levels in developed countries have leveled off and up to a point it’s easy to increase the amount of habitable space through the construction of skyscrapers. It’s not even clear to me that one needs to be industrious to avoid homelessness in contemporary America.
You’re right, things are a bit more complicated than in my simplified account. Lodging can be obtained very cheaply, or even for free as a social service, in homeless shelters and public housing projects, but only in the form of densely packed space full of people of the very lowest status. This is indeed more than adequate for bare survival, but most people find the status hit and the associated troubles and discomforts unacceptably awful, to the point that they opt for either life in the street or working hard for better lodging. And to raise the quality of your lodging significantly above this level, you do need an amount that takes quite a bit of work to earn with the median wage.
This is in clear contrast with food and clothing, which were also precarious until the relatively recent past, but are nowadays available in excellent quality for chump change, as long as you don’t go for conspicuous consumption. This is because advanced technology can crank out tons of food and clothing with meager resources and little labor, the products can be shipped over great distances at negligible cost, and the population is presently far from the Malthusian limit, so there is no zero-sum competition involved (except, of course, when it comes to their purely status-related aspects). In contrast, habitable land isn’t quite zero-sum, but it has a strong zero-sum aspect, since it’s difficult to live very far from the centers of population, and wherever the population is dense, there is going to be (more or less) zero-sum competition for the nearby land.
Another striking recent phenomenon that illustrates this situation is that increasing numbers of homeless people have laptops or cell phones. Again we see the same pattern: advanced technology can crank out these things until they’re dirt-cheap, but acceptably good habitable land remains scarce no matter what.
Land is only a problem because of the department of education. Competition wouldn’t be nearly so fierce if there weren’t a monopoly on good schooling. Look at a heat map of property values: they are sharply discontinuous around school-district borders.
How does one school district with good schools prevent its neighbor districts from also having good schools? There are certainly plenty of examples of contiguous districts with good schools.
For many people’s psychological welfare, I think these may be lesser concerns than mobility, autonomy, and freedom from monotony.
I don’t think that typical jobs from 50 years ago were better in any of these regards. On the contrary, the well-paid blue collar manufacturing jobs that are associated with bygone better times in folk memory were quite bad by these measures. Just imagine working on an assembly line.
Focusing specifically on North America, where these trends appear to be the most pronounced, the key issue, in my opinion, is the distribution of status. Fifty years ago, it was possible for a person of average or even below-average abilities to have a job, lifestyle, and social status that was seen as nothing spectacular, but also respectable and nothing to scoff at. Nowadays, however, the class system has become far harsher and the distribution of status much more skewed. The better-off classes view those beneath them with frightful scorn and contempt, and the underclass has been dehumanized to a degree barely precedented in human history. Of course, these are not hereditary castes, and meritocracy and upward mobility are still very strong, but the point is that the great masses of people who are left behind in the status race are no longer looking towards a mundane but respectable existence, but towards the low status of despised losers.
Why and how the situation has developed in this direction is a complex question that touches on all sorts of ideologically charged issues. Also, some would perhaps disagree whether the trends really are as severe as I present them. But the general trend of the status distribution becoming more skewed seems to me pretty evident.
How do you measure this kind of thing? Do you have a citation?
I too would be interested in sources for this assertion. It goes contrary to what I would say if I were asked to guess about classes of today compared to classes of fifty years ago.
edit: Oops, I should have refreshed page before commenting, as I now see Vladimir_M responded. Leaving comment for content about my state of mind on this issue.
No, it’s a conclusion from common sense and observation, though I could find all kinds of easily cited corroboration. Unfortunately, as I said, a more detailed analysis of these trends and their components and causes would get us far into various controversial and politicized topics, which are probably best left alone here. I stated these opinions only because they seemed pertinent to the topic of the original post and the subsequent comments, i.e. the reasons for broad dissatisfaction with life in today’s developed world, and their specific relation to the issues of work.
There is no obviously appropriate way to measure this, even in theory.
What does one say about differences in solidarity among church members, as it varies from Sunday to other days of the week, and from now to fifty years ago? Likewise for football fans in a city... What does one say about it as it varies from during the Olympics to during an election, within country, party, etc.? During war? During strikes? And so on.
To make this claim one would have to establish a somewhat arbitrary “basket” of status markers and see how they varied (willingness to marry people from group X, willingness to trust random members of group X not to steal, willingness to make fun of people from group X for amusement, etc.) One would then have to integrate over time periods (war, etc.), and it’s not obvious how to do that. It’s also not obvious how to aggregate the statistics into a single measure expressible by a sentence like the above even if we have somehow established a score for how each individual thinks of and would think of each other individual. It’s not obvious what constitutes members of a class, nor how much the classes are to be judged by their worst members as against, say, their average or typical or idealized member.
What I most disagree with the connotation of is “the distribution of status much more skewed”. For status, each of us views others in certain ways, has representations of how we are viewed, has representations of how we view others, has representations of how others think they view us...status is not a thing for which the word “distributed” is at all apt.
It’s hard to discuss these things without getting into all sorts of overly controversial topics, but I definitely disagree that there is no obviously appropriate way to establish whether this, so to speak, skew of the status distribution is increasing.
Admittedly, these are fuzzy observations where it’s easy to fall prey to all kinds of biases, but there is still useful information all over the place. You can observe the level of contempt (either overt or more underhanded) that people express for those below their class, the amount of effort they invest just to make sure they’re insulated from the lower classes, the fear and disgust of mere proximity to anyone below a certain class, the media portrayals of people doing jobs at various percentiles of the income distribution, the reduction and uniformization of the status criteria and the disappearance of various sources of status available to those scoring low in wealth, fame, and bureaucratic rank, and so on. Of course, my observations and interpretations of all these trends may well be biased and inaccurate, but it’s certainly incorrect to claim that no conclusions could be drawn from them even in principle.
I basically agree with you—the U.S. has certainly been headed in the direction of a winner-take-all society over the last few decades.
I think some of this is measurable. The Gini coefficient certainly captures some of the economic aspects, and it has gotten higher over time (a short computation after these points shows what the measure is).
“The underclass has been dehumanized to a degree barely precedented in human history” seems too strong. History includes slavery, including practices such as “seasoning”.
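Picking up the Gini point from above: the standard discrete formula fits in a few lines of Python (the income lists are made up purely for illustration):

    # Gini coefficient via the sorted-incomes formula:
    # G = sum_i (2i - n - 1) * x_i / (n * sum_i x_i), incomes sorted ascending.
    def gini(incomes):
        xs = sorted(incomes)
        n, total = len(xs), sum(xs)
        return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

    print(gini([100] * 10))        # 0.0 -- perfect equality
    print(gini([0] * 9 + [1000]))  # 0.9 -- nearly everything to one person

Note that this measures income concentration only; it says nothing directly about the status attitudes discussed above.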
I agree that was probably too hyperbolic a statement. History certainly records much more extreme instances of domineering and oppression. However, “dehumanized” was not a very good choice of term for the exact attitudes I had in mind, which I think indeed have little historical precedent, and which don’t really correspond to the traditional patterns of crude power exercised by higher-status groups and individuals, being rather a peculiar aspect of the present situation. But yes, in any case, I agree I exaggerated with the rhetoric on that point.
Dear Vladimir, much as I hesitate to offer you any assistance in your presumably shady-looking intellectual enterprise (as frankly I’ve grown to dislike you quite a bit, period)... the term you might’ve been looking for is “biopower”: http://en.wikipedia.org/wiki/Biopower . Foucault, Arendt, and Agamben have all pondered its significance in the 20th century.
I wouldn’t claim that; my claim is that there can’t be one formula specifying what you want to measure, so for reasonably similar societies, like this one and that of fifty years ago, you can’t draw conclusions like that. If one looks at all the equally (in)appropriate ways to measure what you’re making claims about, the modern USA outperforms 18th-century Russia in enough ways that we can draw conclusions. I’ll elaborate a bit on your examples.
The observations are the least fuzzy part.
With something like this, you could perhaps quantify the fear and disgust of millions of people when in proximity to other people. You might find that in one society, 50% are extremely disgusted by the bottom 5% and nonplussed by the others, and the top 10% is disgusted by the whole bottom 50%, while in another society, the top 20% is moderately disgusted by the bottom 80%, and the top 40% absolutely repulsed by the bottom 1%... etc.
What exactly, or even approximately, are your criteria, and how much do you think others on this site share them?
What our society has is an unprecedented tabooing of many overt scorning behaviors and thoughts. Perhaps you totally discount that? It has also tamed superstition enough that there is no system of ritual purity. People at least believe they believe in meritocracy. There is a rare disregard of bloodlines and heredity, compared to other times and places, including modern Japan.
What that brings to mind for me is the honest-labor memes from the Puritans, and how so many were ready to identify with the common man, Joe the Plumber, etc. One might say that this was primarily or only because he is white, and I think we all discount its value because of that to some extent; if you idiosyncratically discount it more than others do, you should be upfront about that by being more specific, and not make implicit claims that what you say is true according to your readers’ values.
Were I trying to call out a certain statement as being sexist, I might quote the statement and tell people that the statement is sexist. That’s totally legitimate if I think that, would they reflect rationally and calmly, they would come to the same conclusion, according to their values. But if the reason I think that the statement is sexist is because it’s written in English, which has a long history of being used by sexists, it would be totally illegitimate for me to simply say to normal human beings that the statement is sexist, because the reason I think it sexist is its mere expression in English.
If you believe that your claims resonate with normal conceptions of fairness upon reflection by people, it’s fine for you to just make them. But this particular claim of yours is so, let’s say counter-intuitive, that I suspect you have very idiosyncratic values in which the worth of a great many things is reduced to zero where other people would think it worth something, perhaps a great deal. If so, please clarify that when you say “The better-off classes view those beneath them with frightful scorn and contempt, and the underclass has been dehumanized to a degree barely precedented in human history,” you just don’t mean “contempt” and “dehumanized” the way your readers do.
I think there may be some “rosy retrospection” going on here.
It seems like we have some essential misunderstandings on these points:
The “status skew” I have in mind has nothing to do with the issues of fairness and meritocracy. In this discussion, I am not concerned about the way people obtain their status, only what its distribution looks like. (In fact, in my above comment, I already emphasized that the present society is indeed meritocratic to a very large degree, in contrast to the historical societies of prevailing hereditary privilege.)
What I’m interested in is the contrast between the sort of society where the great majority of people enjoy a moderate status and the elites a greater one, and the sort of society where those who fall outside an elite minority are consigned to the status of despised losers. This is a relevant distinction, insofar as it determines whether average people will feel like they live a modest but dignified and respectable life, or they’ll feel like low-status losers, with the resulting unhappiness and all sorts of social pathology (the latter mostly resulting from the lack of status incentives to engage in orderly and productive life).
My thesis is simply that many Western countries, and especially the U.S., have been moving towards the greater skew of the status distribution, i.e. a situation where despite all the increase in absolute wealth, an increasingly large percentage of the population feel like their prospects in life offer them unsatisfactory low status, and the higher classes confirm this by their scornful attitudes. (Of course, all sorts of partial exceptions can be pointed out, but the general trend seems clear.)
In fact, one provocative but certainly not implausible hypothesis is that meritocracy may even be exacerbating this situation. Elites who believe themselves to be meritocratic rather than hereditary or just lucky may well be even more arrogant and contemptuous because of that, even if they’re correct in this belief.
Well, I’m not that old, and I honestly can’t complain at all about how I’ve been treated by the present system—on the contrary. Of course, I allow for the possibility that I have formed a skewed perspective here, but the reasons for this would be more complex than just straightforward “rosy retrospection.”
Demonstrably the cost of housing has not dropped as much as the cost of a byte of hard drive storage, but that is not necessarily only because space is zero-sum. A lot of technologies have failed to advance at anywhere near the rate of computer technology, in particular housing-related technologies—the cost of building a structure, the cost of lighting it, air conditioning it, etc. I think that science fiction authors in the past tended to imagine that housing-related technologies would change much more rapidly than they actually did.
Transportation has also, in recent years, not changed all that much. That’s another one that science fiction writers were massively overoptimistic about. Transportation changes the value of proximity, and the changes that we did experience starting with steam powered vehicles probably did radically change the nature of what counts as proximity. I am, for example, an order of magnitude or so “closer” to the city center now than I would have been two hundred years ago, holding everything constant except for transportation.
Building construction and transportation are at a kind of plateau, at least compared with computers, possibly in part because they require a more or less fixed amount of energy in order to move stuff around. In order to transport a person you need enough power to move his body the required distance. In order to build a building, you need enough power to lift the materials into place. I had the misfortune of working next to a construction site and I recall that for weeks we could feel the thumping of the pile drivers.
There’s no hard lower bound on the amount of energy needed to move something horizontally: any energy expended in level transportation goes to friction, not to useful work. Now, reducing friction turns out to be a harder engineering problem than making smaller transistors, but just saying “energy” doesn’t explain why.
And the gravitational potential energy in one ton of material lifted by one storey would cost all of $0.001 if bought from the grid in the form of electricity. So clearly the energy requirement of lifting construction materials into place is not the primary cost of construction either.
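The arithmetic behind that figure checks out; a quick sketch, assuming one storey is about 3 m and grid power costs about $0.12/kWh:

    mass_kg = 1000.0   # one metric ton of material
    height_m = 3.0     # roughly one storey
    g = 9.81           # gravitational acceleration, m/s^2

    energy_joules = mass_kg * g * height_m   # ~29,000 J of potential energy
    energy_kwh = energy_joules / 3.6e6       # ~0.008 kWh
    cost_usd = energy_kwh * 0.12             # ~$0.001, as claimed above

    print(f"{energy_kwh:.4f} kWh -> ${cost_usd:.4f}")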
The cost of the fuel itself is not the only cost that increases when the amount of energy increases. When a large amount of energy is applied all at once, it becomes important to apply the energy correctly, because otherwise the results can be catastrophic. If you take the energy required to lift a ton one storey, and misapply it, then you could damage property or, worse, kill people.
We let children ride bikes but not drive cars. Why? One reason is that a typical moving car has a much larger amount of kinetic energy than a typical moving bicycle, so if the car is steered badly, the results can be much worse than if a bike is steered badly.
So the more energy is applied, the more carefully it must be applied. And this extra care costs extra money.
In a controlled environment such as a factory, the application of energy can be automated, reducing costs. But in an uncontrolled environment such as we see in transportation or building, significant automation is not yet possible, which raises costs.
Other costs also rise with energy use. For instance, the machinery that employs the energy must be built to withstand the energy. A toy car can be built of cheap plastic, but a real car needs to be strong enough not to fly apart when you step on the gas. And the machine has to be built so that it doesn’t wear down quickly in reaction to the great stresses that it is being subjected to as it operates.
That is the clearest explanation I’ve seen so far for this. (I’ve read a lot of SF, and asked myself the question.)
I don’t think that’s a complete explanation. I would say it’s more along the lines of “If you start with somebody working a three-day week, it’s much easier to employ them for another two days, than to hire a new person to work two days because that requires creating a whole new business relationship.” Then both corporations and governments, I think, tend to be as inefficient as they can possibly get away with without dying, or maybe a little more inefficient than that. Work expands to fill the time available...
I would have to sit down and write this out if I really wanted to think it through, but roughly, I think there are forces which tend to make people employed for a full workweek, make everyone want to be employed, and make society as inefficient as it can get away with. Combine these factors and you get why increasing productivity doesn’t increase leisure.
The full work week makes sense, depending on what sort of job you’re talking about. Is it a job where a certain number of staff have to be working at a given time but it doesn’t really matter who (e.g. my job at the pool), or is it a job where a certain amount of work has to get done, and it’s simpler for one person to do a set of tasks because sharing the tasks between brains is complicated (e.g. my job at the research institute)?

For the former, it doesn’t really matter whether you have 20 staff working 40 hours a week or 40 staff working 20 hours a week. (In fact, at the pool we tend to flip between the two: in winter, when most employees are in school, there are a lot more staff and many of them have only 1 or 2 shifts a week; in summer, the number of staff drops and nearly everyone is full-time.) It doesn’t matter whether a given staffperson is there on a certain day; lifeguards and waitresses and grocery store cashiers (and nurses, to a lesser degree) are essentially interchangeable.

For the latter, it makes a lot of sense for any one employee to be there every day, but why 8 hours a day? Why not 5? If the full-time employees at the research institute were each in charge of a single study, instead of 2 or 3, they could do all the required work in 5 hours a day, plus occasional overtime or on-call work.
I’m guessing that most work for corporations and governments is in the latter category. Most work in the former category is relatively low-paying, so adults in this jobs have to work full-time or more to make ends meet. I can see why right now, neither corporations nor the government are endorsing shorter work-days or work-weeks: they would have to hire more staff, spend more time on finding and interviewing qualified people, and providing these extra staff with the expected benefits (i.e. health insurance, vacation time) would be more complicated. The current state is stable and locked in place, because any business or organization that tried to change would be at a disadvantage. But in theory, if every workplace transitioned to more employees working fewer hours, I can’t see why that state wouldn’t be stable as well.
Yes, but as Eliezer said, work expands to fill the time. So if you cut the time correctly, you just cut out the useless work and don’t give up any competitive advantage. This is how large corporations can lay off 50,000 people without falling apart. Sometimes that means giving up products or markets, but more often it means a haircut across the organization—i.e., trimming the fat. At first the people left are panicked about how they will get everything done without all those resources, but what really happens is that priorities get clarified and some people have to do more work during the day instead of reading Less Wrong. The same thing would happen if the work week were reduced, although management’s job would get harder, as Eliezer points out.
It is a plausible argument, but it seems at least partially incompatible with known international differences within the wealthy industrialized world: “Using the most recently available data, the ILO has determined that the average Australian, Canadian, Japanese or Mexican worker was on the job roughly 100 hours less than the average American in a year—that’s almost two-and-a-half weeks less. Brazilians and British employees worked some 250 hours, or more than five weeks, less than Americans.” I’d expect very similar zero-sum competitions to exist in all of these nations, yet the work hours show substantial differences.
If we accept the premise that most of this work is being spent on a zero-sum game of competing for status and land, then it’s a prisoner’s-dilemma situation like doping in competitive sports, and a reasonable solution is some kind of regulation limiting that competition. Mandatory six-week vacations, requirements to close shops during certain hours, and hefty overtime multipliers coupled with generous minimum wages are three examples that occur in the real world.
A market fundamentalist might seek to use tradable caps, as with sulfur dioxide emissions, instead of inflexible regulations. Maybe you’re born with the right to work 1000 hours per year, for example, but you have the right to sell those hours to someone else who wants to work more hours. Retirees and students could support themselves by getting paid for being unemployed, by some coal miner, soldier, or sailor. (Or their employer.) This would allow the (stipulated) zero-sum competition to go on and even allow people to compete by being willing to work more hours, but without increasing the average number of hours worked per person.
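As a toy illustration of how such tradable caps would conserve the average (all names and numbers here are hypothetical, not parameters of any real proposal):

    CAP = 1000  # hours per year everyone is born with the right to work

    class Worker:
        def __init__(self, name):
            self.name = name
            self.permits = CAP

    def trade(buyer, seller, hours, price_per_hour):
        """Transfer work rights from seller to buyer; returns payment owed."""
        assert seller.permits >= hours, "seller lacks enough permits"
        seller.permits -= hours
        buyer.permits += hours
        return hours * price_per_hour

    miner = Worker("coal miner")
    student = Worker("student")
    payment = trade(buyer=miner, seller=student, hours=400, price_per_hour=5.0)
    # The miner may now work 1400 hours; the student is paid $2000 to work
    # at most 600. Total permits are unchanged, so the average cannot rise.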
Japan‽ That can’t be right. This study says indeed it isn’t. What’s going on?
Edit: What’s going on is that it’s a recent change. Thanks, soreff.
Ouch! “The more I find out, the less that I know.” This site gives extensive statistics, broken out nationally and by year from 2000-2010. According to their numbers, for 2010, Korea had the largest number of hours worked, with the U.S. 12th on the list and Japan 15th. It looks like the shifts across this decade are considerable (10%-20% for many of the nations). Looking at a number of sites, there seem to be considerable differences in the reported numbers as well—the definitions of what hours they include and whom they include may differ...
This sounds like such an interesting topic for discussion, though!
Trouble is, it touches on just about every controversial and ideologically charged issue imaginable. (Which is not surprising, considering that it concerns the fundamental questions about how status is distributed in the whole society.)
How confident are you that this reflects the experience of working people rather than how you would feel if you were in their position?
I’ve wondered about this a lot myself. Note, along with figure 3 of the quoted article, that according to a Gallup poll the average self-reported life satisfaction in America is around 7⁄10. Presumably this average includes even the sick/elderly/poor. I believe that my own self-reported life satisfaction would be considerably lower than that if I were living the life of an average American.
I would guess that the difference is mostly accounted for by my own affective response to a given situation diverging heavily from the affective response that members of the general population would have in the same situation.
Somewhat confident. I work at a medical clinic. The number of people who come in with physical complaints relating to their job, psychological/stress complaints relating to their job, or complaints completely unrelated to their job but they talk to the doctor about how much they hate their job anyway because he’s the only person who will listen—is pretty impressive.
But there’s a clear selection bias here; maybe the 10% of people who are most unhappy with their jobs visit medical clinics 5x as much as anybody else.
In any case, thanks for the info.
It’s entirely possible for working life to be awful and people living those lives to genuinely self-report an average of 7⁄10 on a happiness scale. This is likely due to facts about how humans set their baseline happiness, how they respond to happiness surveys, and what social norms have been inculcated.
Like, when given a scale out of 10, people might anchor 5 as the average life, and for social signaling and status purposes, reasons for them being different-better are more available to their conscious mind than reasons for them being different-worse, so they add a few points.
There are also other problems with the average happiness level being above average—it suggests some constant is at work.
I just realized that a link to an article by Angus Deaton about a Gallup poll that I meant to include in the comment above didn’t compile. I’ve since added it.
I agree. But I don’t see the considerations that you bring up as decisive. Several points:
• According to Angus Deaton’s article
If I were to give a response of 7⁄10 to this question, it would indicate that my life is more good than it is bad. You’re right that my interpretation may not be the one used by the typical subject. But I disagree with:
it could be that everyone finds their lives to be more good than bad or that everyone finds their lives more bad than good.
• You raise the hypothetical:
but one could similarly raise ad hoc hypotheticals that point in the opposite direction. For example, maybe people function best when they’re feeling good and so they’re wired to feel good most of the time but grass-is-greener syndrome leads them to subtract a few points.
• Note that suicide rates are low all over the world. A low suicide rate is some sort of indication that members of a given population find their lives to be worth living.
• Note that according to Deaton’s article, life satisfaction scores by country vary from ~ 4 to ~ 7 in rough proportion to median income in a given country. This provides some indication that (a) life satisfaction scores pick up on a factor that transcends culture and that (b) Americans are distinctly more satisfied with their lives than sub-Saharan Africans are. But in line with my above point, sub-Saharan Africans seldom commit suicide. In juxtaposition with this, the data from Deaton’s article suggests that the average American’s life satisfaction is well above the point at which he or she would commit suicide.
Suicide rates could be low even when the average experience of the general population is worse than unconsciousness. People may apply scope insensitivity and discount large quantities of non-severe future suffering for themselves. Happiness reports can lead to different results than an hour-to-hour analysis would. Asking for each hour, “Would you rather experience an exact repeat of the last hour, or else experience nothing for one hour, all other things exactly equal? How much would you value that difference?” might lead to very different results if you integrate the quantities and qualities.
People with lives slightly not worth living may refrain from suicide because they fear death, feel obligated toward their friends and family, or are infected with memes about reward or punishment in an imaginary afterlife. A very significant reason is probably that bearably painless and reliable suicide methods are not universally within easy reach (are they in sub-Saharan Africa?). In fact, there is a de facto suicide prohibition in place in most countries, with more or less success. The majority of suicide attempts fail.
So continued existence can be either involuntary or irrational, and suicide rates can be low even when life generally feels more bad than good. If all sentient entities could become rational decision-makers whose conscious existence is universally voluntary, that would probably be the most significant improvement of life on earth since it evolved.
I agree. See also this comment and subsequent discussion. I consider low suicide rates to be weak evidence that people find their lives worth living, not definitive evidence. There’s other evidence; in particular, if you ask random people whether their lives are worth living, they’ll say yes much more often than not. Yes, they may be signaling and/or deluded, but it seems hubristic to place high confidence in one’s own assessment of their quality of life over their stated assessment, without strong evidence.
Yeah, re-reading my post it’s very handwave-y. However, a point you made about more good than bad / more bad than good stuck out to me. I wonder if a survey question “On a scale from 0 to 10, where 0 is every thing that happens to you is a bad thing, and 10 is every thing that happens to you is a good thing, what number would you give to your life?” would provide scores correlated with life satisfaction surveys. (Ideally we would simply track people and, every time a thing happened to them, ask them whether this thing was good or bad. Then we could collate the data and get a more accurate picture than self-reporting, but the gain doesn’t outweigh the sheer impracticality so I’ll be content with self-reported values).
I feel like if it correlated weakly, you would be right. And now that I think about the experiment, I’m fairly convinced it would come out correlated.
The prospect of a Hansonian future does seem like a pretty good reason to delete all records of yourself, dispose of anyone with significant memories of you, and incinerate your brain in an explosion large enough to spread its ashes for miles around. At sea.
It should make you happy with the present, though, if you use the past and the future as the baseline for comparison. As John Derbyshire once said in a different context, “We are living in a golden age. The past was pretty awful; the future will be far worse. Enjoy!”
Now I’m confused: how is other people being even worse off supposed to make me feel better?
Well, if we (the present humans) are indeed extraordinarily fortunate to live in a brief and exceptional non-Malthusian period—what Hanson calls “the Dreamtime”—then you should be happy to be so lucky that you get to enjoy it. Yes, you could have been even luckier to be born as some overlord who gets to be wealthy and comfortable even in a Malthusian world, but even as a commoner in a non-Malthusian era, you were dealt an exceptionally good hand.
No, I’m UN-lucky. I’d prefer a different, counterfactual universe where EVERYONE is happy at all times, and given any fixed universe, I see no reason why which entity in it is me should matter.
A couple of comments:
Yes, a Hansonian future looks appalling. Anything that gets us back into a Malthusian trap is a future that I would not want to experience.
I’m not sure that active measures to prevent oneself from being revived in such a future are necessary. If extreme population growth makes human life of little value in what are currently the developed nations, who would revive us? Cryonics has been likened to a four-dimensional ambulance ride to a future emergency room. If the emergency rooms of the 22nd century turn out to only accept the rich, cryonicists will never get revived in such a world anyway.
I find it bizarre that Robin Hanson himself both endorses cryonics and actively endorses population growth—both in the near term (conventional overpopulation of humans) and in the long term (explosive growth of competing uploads/ems).
@2: Most of it was humour, playing up excessive paranoia. Under that was basically a mix of humility (there might be reasons we would never think of to do it) and the implication that it’s not only bad, but so bad that every little trace of probability must be pushed as close as possible to 0.
Hey now, the poor also smile!
I’ve been wondering why no one has yet broached this issue on LW, that I recall.
Ugh field? People don’t like to talk about this. I will say something like “my job is a soul-sucking vortex” and people think I’m only joking. I am joking, but like many jokes it is also true.
My job doesn’t make me hate life; much of what I value in life is supported by my job which is why I keep it.
It’s not very hard to do when the original author is Mike Darwin. The man really needs an editor. Consider “Doing the Time Warp”, which looks for all the world like someone wrote two or three amazing, wonderful essays on completely different topics, and then decided to cut and paste random sections of them to form a single article, with random song lyrics thrown in for good measure.