Douglas,
It’s $1000 per life, not per net, because in most cases nets or treatment won’t avert a death.
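To make the arithmetic explicit (the per-net cost and the chance that a given net averts a death below are illustrative guesses, not figures from any charity evaluation):

```python
# Illustrative arithmetic only: the net cost and probability below are
# made-up placeholders, not figures from any charity evaluation.
net_cost = 10.0          # assumed dollars per insecticide-treated net
p_death_averted = 0.01   # assumed chance that one net prevents a death

cost_per_life = net_cost / p_death_averted
print(cost_per_life)     # 1000.0 -- cost per life saved, not per net
```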
g,
There’s plenty of room to work on vaccines and drugs for tropical diseases, improved strains of African crops like cassava, drip irrigation devices, charcoal technology, etc.
http://en.wikipedia.org/wiki/Amy_Smith http://web.mit.edu/newsoffice/2008/lemelson-sustainability-0423.html
kebko,
The best interventions today seem to cost $1000 per life saved. Much of the trillion dollars was Cold War payoffs, bribing African leaders not to go Communist, so the fact that it was stolen/wasted wasn’t that much of a concern.
I tend to prefer spending money on developing cheaper treatments and Africa-suitable technologies, then putting them in the public domain. That produces value but nothing to steal.
Regarding g’s point, I note that there’s a well-established market niche for this sort of thing: it’s like the popularity of Ward Connerly among conservatives as an opponent of affirmative action, or Ayaan Hirsi Ali (not to downplay the murderous persecution she has suffered, or necessarily to attack her views) among advocates of war against Muslim countries. She’ll probably sell a fair number of books, get support from conservative foundations, and some nice speaking engagements.
Steven,
Information value.
g,
This is based on the diavlog with Tyler Cowen, who did explicitly say that decision theory and other standard methodologies don’t apply well to Pascalian cases.
Pablo,
Vagueness might leave you unable to subjectively distinguish probabilities, but you would still expect that an idealized reasoner using Solomonoff induction with unbounded computing power and your sensory info would not view the probabilities as exactly balancing, which would give infinite information value to further study of the question.
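To make the information-value point concrete, here’s a toy calculation (the payoff structure and the near-balanced prior are hypothetical stand-ins, not anything from the original discussion):

```python
# Toy value-of-information calculation; the payoff matrix and the
# near-balanced prior are hypothetical stand-ins.
def value_of_perfect_info(p_h1, stake):
    """Expected gain from learning which hypothesis is true before acting.

    Two actions pay +stake when they match the true hypothesis and
    -stake when they don't (a deliberately simplified payoff structure).
    """
    best_without_info = max(
        p_h1 * stake - (1 - p_h1) * stake,   # act as if H1 is true
        (1 - p_h1) * stake - p_h1 * stake,   # act as if H2 is true
    )
    best_with_info = stake                   # knowing the answer, pick the right action
    return best_with_info - best_without_info

for stake in (1e3, 1e6, 1e9):
    print(stake, value_of_perfect_info(0.5001, stake))
# The value of resolving the question scales with the stakes, so
# unbounded stakes make further study arbitrarily valuable.
```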
The idea that further study wouldn’t unbalance estimates in humans is both empirically false in the cases of a number of smart people who have undertaken it, and looks like another rationalization.
The fallacious arguments against Pascal’s Wager are usually followed by motivated stopping.
“that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).” Utilitarian would rightly attack this, since the probabilities almost certainly won’t wind up exactly balancing. A better argument is that wasting time thinking about Christianity will distract you from more probable weird-physics and Simulation Hypothesis Wagers.
A more important criticism is that humans just physiologically don’t have any emotions that scale linearly. To the extent that we approximate utility functions, we approximate ones with bounded utility: utilitarians have only a bounded concern with acting (or aspiring to act, or believing that they aspire to act) as though their concern for good consequences were close to linear in the consequences, i.e. they have a bounded interest in ‘shutting up and multiplying.’
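One crude way to picture a bounded utility function (the functional form and parameters below are just illustrative, not a claim about actual human psychology):

```python
import math

# Hypothetical bounded utility: value saturates at u_max no matter how
# large the consequences get, unlike a linear "shut up and multiply" valuation.
def bounded_utility(lives_saved, u_max=1.0, scale=100.0):
    return u_max * (1.0 - math.exp(-lives_saved / scale))

for n in (1, 100, 10_000, 1_000_000):
    print(n, round(bounded_utility(n), 4))
# A linear utility would grow a millionfold over this range; the bounded
# one barely moves once the numbers pass a few hundred.
```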
Robin,
What standard do you use to identify “good tastes and values” to be open to?
This looks like a relatively clear case of an excessive narrative-to-signal ratio.
And again, babyeating norms need to invade in a similar fashion, and without norms other than babyeating, the communal feeding pen selects for zero provisioning effort.
“If most of the total cost of growing a child lies in feeding it past the rapid growth stage, rather than birthing 50 infants and feeding them up to that point,”
From their visibility in the transmitted images it seems the disproportion isn’t absurdly great. Also, if the scaling issues with their brains were so extreme, why didn’t they become dwarfs? One big tool-using crystal being versus 500 tool-using dwarfs of equal intelligence seems like bad news for the giant.
“You’re also postulating that a whole group gets this mutation in one shot—but even if you say “genetic drift”, it seems pretty disadvantageous to a single invader.”
Altruistic punishers don’t need to be common, one or two can coordinate a group (the altruistic punisher recruits with the credible threat of punishment, and then imposes the norm on the whole group), and an allele for increased provisioning wouldn’t directly conflict with babyeating instincts.
“I fear that you have not managed to convince me of this. If the general idiom of children in pens is stable, then the adults contributing lots and lots of children (as many as possible) is also evolutionarily stable.”
I have a tribe of Babyeaters that each put 90% of their effort into reproducing, and 10% into contributing to the common food supply of the pen. This winds up producing 5000 offspring, 30 of which are not eaten, and are just adequately fed by the 10% of total resources allocated to the food supply. Now consider an allele, X, that disposes carriers to engage in altruistic punishment, and punishment of non-punishers, in support of a norm that adults spend most of their effort on contributing to the food supply (redirecting energy previously wasted, with thermodynamic losses, on offspring destined to be devoured toward the production and maintenance of offspring that will grow into adults). Every individual in the tribe will tend to have more surviving offspring, and the group will tend to be victorious in intertribal extermination warfare. Group selection will thus favor the spread of X, probably quite a bit more strongly than it would favor the spread of an allele for support of the babyeating norm (X achieves the benefits of babyeating while reclaiming the metabolic waste spent on devoured babies). The more closely X aligns offspring production and food contribution, the more it will be spread by group selection and the more it will reduce babyeating.
In a world with many groups, all engaging in winnowing-level babyeating, allele X can enter, spread, and vastly reduce babyeating. What is unconvincing about that argument?
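To spell out the accounting, here is a toy version using the stated numbers and assuming, purely for illustration, that both offspring production and pen food scale linearly with effort:

```python
# Toy version of the accounting above, taking the stated numbers
# (90% reproductive effort -> 5000 offspring; 10% provisioning effort ->
# food for 30 surviving adults) and assuming both scale linearly with effort.
OFFSPRING_PER_EFFORT = 5000 / 0.9   # offspring per unit of reproductive effort
FED_PER_EFFORT = 30 / 0.1           # adults the pen's food supports per unit of provisioning effort

def surviving_adults(reproductive_effort):
    provisioning_effort = 1.0 - reproductive_effort
    offspring = OFFSPRING_PER_EFFORT * reproductive_effort
    fed = FED_PER_EFFORT * provisioning_effort
    return min(offspring, fed)      # survivors limited by both offspring and food

print(surviving_adults(0.9))   # ~30  : the baseline babyeating tribe
print(surviving_adults(0.5))   # ~150 : allele X shifting effort toward the pen
print(surviving_adults(0.06))  # ~282 : near the group-optimal allocation
```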
“Suppose that all Babyeaters make equal contributions to the food pen; their leftover (variance in) food resources could be used to grow their own bodies, bribe desirable mates (those of good genetic material as witnessed by their large food contributions), or create larger numbers of offspring.”
Different alleles might drive altruistic punishment (including of non-punishers) in support of many different levels of demand on tribe members. Group selection would support alleles supporting norms such that the mean contribution to the pen food supply was well-matched with the mean number of offspring contributed to the pen. Variance doesn’t invalidate that conclusion.
Michael,
I guess it depends on whether the fantastic element can adequately stand in for whatever it is supposed to represent. Magic starship physics can be used to create a Prisoner’s Dilemma without trouble, since PDs are well understood, and it’s fairly obvious that we will face them in the future. No-Singularity and FTL, so that we can have human characters, are also understandable as translation tools. If Babyeaters are a stand-in for ‘abhorrent alien evolved morality’ to an audience that already grasps the topic, then the details of their evolution don’t matter. If, however, they are supposed to make the possibility of a nasty evolved morality come alive to cosmopolitan optimistic science fiction fans or transhumanists, then they should be relatively probable.
Eliezer,
On the other hand, since you’ve already written the story, using one of your favorite examples of the nonanthropomorphic nature of evolution as inspiration for the Babyeaters, and have no authorial line of retreat available at this time, we can probably leave this horse for dead.
Eliezer, you’re right that the coordination mechanisms would be imperfect, so it’s an overstatement to say NO babyeating would occur, I meant that you wouldn’t have the ‘winnowing’ sort of babyeating with consistent orders-of-magnitude disproportions between pre- and post-babyeating offspring populations.
Nits. I’d say there are probably lots of at-least-Babyeater-level-abhorrent evolutionary paths (not that Babyeaters are that bad, I’d rather have a Babyeater world than paperclips) making up a big share of evolved civilizations (it looks like the great majority, but it’s very tough to be confident). Any lack of calm is irritation at the use of a dubious example of abhorrent evolved morality when you could have used one that was both more probable AND more abhorrent.
I wonder about the psychological mechanisms and intuitions at work in the Babyeaters. After all, human babies don’t look like Babyeater babies, they’re less intelligent, etc. Their intellectual extension of strong intuitions to exotic cases might well be much more flexible than their applications to situations from the EEA, e.g. satisfying them by drinking cocktails containing millions of blastocysts. Similarly, human intuitions start to go haywire in exotic sci-fi thought experiments and strange modern situations.
“I don’t understand why you think that provisioning your own offspring is a group advantage.” If parents could selectively provision their own offspring in the common pen, then the group would not be wracked by intense commons-problem selective pressures driving provisioning towards zero and reproduction towards the maximum (thus resulting in extermination by more numerous tribes).
Actually, babyeating in the common pen isn’t even internally stable. Let’s take the assumptions of the situation as given:
There is intertribal extermination warfare. Larger tribes tend to win and grow. Even division of food among excessive numbers of offspring results in fewer surviving adults, and thus slower tribal population growth and more likely extermination.
All offspring are placed in a common pen.
Food placed in the common pen is automatically equally divided among those in the pen and adults cannot selectively provision.
Group selection has resulted in collective enforced babyeating to reduce offspring numbers (without regard for parentage of the offspring) in the common pen to the level that will maximize the number of surviving adults given the availability of food resources.
Individuals vary genetically in ways that affect their relative investment in producing offspring and in agricultural production to place into the common pen.
Under these circumstances, there will be intense selective pressure for individuals that put all their energy (after survival) into producing more offspring (which directly increase their reproductive fitness) rather than agricultural production (which is divided between their offspring and the offspring of the rest of the tribe). As more and more offspring are produced (in metabolically wasteful fashion) and less and less food is available, the tribe is on the path to extinction.
Groups that survive will be those in which social intelligence is used to punish (by death, devouring of offspring before they are placed in the pen, etc.) those making low food contributions relative to offspring production. Remembering offspring production would be cognitively demanding, and only one side of the tradeoff needs to be measured, so we can guess that punishment of those making small food contributions would develop. This would force a homogeneous level of reproductive effort, and group selection would push this level to the optimal tradeoff between agriculture and offspring production for group population growth, with just enough offspring to make optimal use of the food supply. This group is internally stable, and has much higher population growth than one wracked by commons problems, but it will also have no babyeating in the common pen.
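A toy sketch of the commons problem and the punishment fix, following the structure of the assumptions above (the scaling constants are made up for illustration):

```python
# Toy commons-problem sketch under the assumptions listed above
# (shared pen, equal division of food, linear scaling; constants are made up).
K_OFFSPRING = 50.0   # assumed offspring per adult per unit of reproductive effort
K_FOOD = 3.0         # assumed adults fed per adult per unit of provisioning effort

def surviving_offspring(efforts, punished=()):
    """Each adult's surviving offspring, given reproductive efforts in [0, 1].

    Punished adults have their offspring devoured before reaching the pen.
    """
    offspring = [0.0 if i in punished else K_OFFSPRING * r
                 for i, r in enumerate(efforts)]
    capacity = K_FOOD * sum(1.0 - r for r in efforts)
    total = sum(offspring)
    survivors = min(total, capacity)
    return [survivors * o / total if total else 0.0 for o in offspring]

N = 100
r_opt = K_FOOD / (K_OFFSPRING + K_FOOD)          # group-optimal effort split
all_cooperate = [r_opt] * N
one_defector = [1.0] + [r_opt] * (N - 1)

print(surviving_offspring(all_cooperate)[0])                 # ~2.8 surviving offspring each
print(surviving_offspring(one_defector)[0])                  # ~42: pure reproduction pays...
print(surviving_offspring(one_defector, punished={0})[0])    # 0.0: ...unless punishers act
```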
I.e. I agree with your analysis that they (and artemisinin treatment) are great and worth doing if the local governments don’t tax or steal them (in various ways) too intensively.