Why do you link to a blog, rather than an introduction or a summary? Is this to test whether we find it so silly that we don’t look for their best arguments?
My impression is that antinatalists are highly verbal people who base their idea of morality on how people speak about morality, ignoring how people act. They get the idea that morality is about assigning blame and so feel compelled only to worry about bad acts, thus becoming strict negative utilitarians or rights-deontologists with very strict and uncommon rights. I am not moved by such moralities.
Maybe some make more factual claims, e.g., that most lives are net negative or that reflective life would regret itself. These seem obviously false, but I don’t see that they matter. These arguments should not have much impact on the actions of the utilitarians that they seem aimed at. They should build a superhuman intelligence to answer these questions and implement the best course of action. If human lives are not worth living, then other lives may be. If no lives are worth living, then a superintelligence can arrange for no lives to be led, while people evangelizing antinatalism aren’t going to make a difference.
Incidentally, Eliezer sometimes seems to be an anti-human-natalist.
Even if antinatalism is true at present (I have no major opinion on the issue yet) it need not be true in all possible future scenarios.
In fact, should the human race shrink significantly [due to antinatalism perhaps], without societal collapse, the average utility of a human life should increase. I find it highly unlikely that even the maximum average utility is still less than zero.
In fact, should the human race shrink significantly [due to antinatalism perhaps], without societal collapse, the average utility of a human life should increase.
Why shouldn’t having a higher population lead to greater specialization of labor, economies of scale, greater gains from trade, and thus greater average utility?
There is only a limited amount of any given resource available. Decreasing the number of people therefore increases the amount of resource available per person.
There is a point at which decreasing the population will begin decreasing average utility, but to me it seems nigh certain that that point is significantly below the current population. I could be wrong, and if I am wrong I would like to know.
Do you feel that the current population is optimum, below optimum, or above optimum?
Because of the law of diminishing returns (marginal utility). If you have a billion humans one more (less) results in a bigger increase (decrease) in utility than if you have a trillion.
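One way to picture that argument (a toy model of my own, not anything from the thread: the logarithm is just an arbitrary stand-in for any concave aggregate-utility function):

```python
import math

def total_utility(population):
    # Toy concave model: extra people add less and less total utility.
    # The logarithm is an illustrative choice, not an empirical claim.
    return math.log(population)

def marginal_utility(population):
    # Utility change from adding one more person.
    return total_utility(population + 1) - total_utility(population)

# With diminishing returns, the marginal person matters far more
# at a billion people than at a trillion.
assert marginal_utility(10**9) > marginal_utility(10**12)
```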
I have long wrestled with the idea of antinatalism, so I should have something to say here. Certainly there were periods in my life in which I thought that the creation of life is the supreme folly.
We all know that terrible things happen that should never happen to anyone. The simplest antinatalist argument of all is that any life you create will be at risk of such intolerably bad outcomes; and so, if you care, the very least you can do is not create new life. No new life, no possibility of awful outcomes in it, problem avoided! And it is very easy to elaborate this into a stinging critique of anyone who proposes that nonetheless one shouldn’t take this seriously or absolutely (because most people are happy, most people don’t commit suicide, etc.). You intend to gamble with this new life you propose to create, simply because you hope that it won’t turn out terribly? And this gamble you propose appears to be completely unnecessary—it’s not as if people have children for the greater good. Etc.
A crude utilitarian way to moderate the absoluteness of this conclusion would be to say, well, surely some lives are worth creating, and it would make a lot of people sad to never have children, so we reluctantly say to the ones who would be really upset to forego reproduction, OK, if you insist… but for people who can take it, we could say: There is always something better that you could do with your life. Have the courage not to hide from the facts of your own existence in the boisterous distraction of naive new lives.
It is probably true that philanthropic antinatalists, like the ones at the blog to which you link, are people who have personally experienced some profound awfulness, and that is why they take human suffering with such deadly seriousness. It’s not just an abstraction to them. For example, Jim Crawford (who runs that blog) was once almost killed in a sword attack, had his chest sliced open, and after they stitched him up, literally every breath was agonizing for a long time thereafter. An experience like that would sensitize you to the reality of things which luckier people would prefer not to think about.
You intend to gamble with this new life you propose to create, simply because you hope that it won’t turn out terribly?
Seems like loss aversion bias.
Sure, bad things happen, but so do good things. You need to do an expected utility calculation for the person you’re about to create: P(Bad)U(Bad) + P(Good)U(Good)
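Spelled out with made-up numbers (every probability and utility below is purely hypothetical, chosen only to show the shape of the calculation):

```python
# Hypothetical expected-utility calculation for creating a new life.
# All numbers are invented for illustration; nothing here is empirical.
p_bad, u_bad = 0.1, -50.0    # probability and utility of a bad life
p_good, u_good = 0.9, 20.0   # probability and utility of a good life

expected_utility = p_bad * u_bad + p_good * u_good
print(expected_utility)  # 13.0 -> positive, so creation looks worthwhile here
```

On these numbers the small chance of a very bad outcome is outweighed by the large chance of a moderately good one; an antinatalist conclusion requires either much worse numbers or extreme risk aversion.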
I think that for you, a student of the singularity concept, to arrive at a considered and consistent opinion regarding antinatalism, you need to make some judgments regarding the quality of human life as it is right now, “pre-singularity”.
Suppose there is no possibility of a singularity. Suppose the only option for humanity is life more or less as it is now—ageing, death, war, economic drudgery, etc, with the future the same as the past. Everyone who lives will die; most of them will drudge to stay alive. Do you still consider the creation of a human life justifiable?
Do you have any personal hopes attached to the singularity? Do you think, yes, it could be very bad, it could destroy us, that makes me anxious and affects what I do; but nonetheless, it could also be fantastic, and I derive meaning and hope from that fact?
If you are going to affirm the creation of human life under present conditions, but if you are also deriving hope from the anticipation of much better future conditions, then you may need to ask yourself how much of your toleration of the present derives from the background expectation of a better future.
It would be possible to have the attitude that life is already great and a good singularity would just make it better; or that the serious possibility of a bad singularity is enough for the idea to urgently command our attention; but it’s also clear that there are people who either use singularity hope to sustain them in the present, or who have simply grown up with the concept and haven’t yet run into difficulty.
I think the combination of transhumanism and antinatalism is actually a very natural one. Not at all an inevitable one; biotechnology, for example, is all about creating life. But if you think, for example, that the natural ageing process is intolerable, something no-one should have to experience, then probably you should be an antinatalist.
you may need to ask yourself how much of your toleration of the present derives from the background expectation of a better future.
I personally would still want to have been born even if a glorious posthuman future were not possible, but the margin of victory for life over death becomes maybe a factor of 100 thinner.
The antinatalist argument goes that humans suffer more than they have fun, therefore not living is better than living. Why don’t they convert their loved ones to the same view and commit suicide together, then? Or seek out small isolated communities and bomb them for moral good.
I believe the answer to antinatalism is that pleasure != utility. Your life (and the lives of your hypothetical kids) could create net positive utility despite containing more suffering than joy. The “utility functions” or whatever else determines our actions contain terms that don’t correspond to feelings of joy and sorrow, or are out of proportion with those feelings.
The suicide challenge is a non sequitur, because death is not equivalent to never having existed, unless you invent a method of timeless, all-Everett-branch suicide.
If the utility of the first ten or fifteen years of life is extremely negative, and the utility of the rest slightly positive, then it can be logical to believe that not being born is better than being born, but suicide (after a certain age) is worse than either.
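A toy numerical version of this (the numbers are invented solely to show the structure of the claim): score never existing and death both at 0, make childhood strongly negative, and make each adult year mildly positive.

```python
# Invented numbers illustrating the "bad childhood, mildly good adulthood" case.
early_utility = -100       # ages 0-14 in total, assumed extremely negative
yearly_adult_utility = 1   # each year from 15 to 79, assumed slightly positive
adult_years = 80 - 15

whole_life = early_utility + yearly_adult_utility * adult_years   # -35
remaining_at_20 = yearly_adult_utility * (80 - 20)                # 60

# Never being born scores 0, which beats the whole life (-35)...
assert whole_life < 0
# ...but past the bad years, the remaining life beats suicide (also scored 0).
assert remaining_at_20 > 0
```

So on this toy model it is consistent to prefer non-existence to birth while also preferring continued life to suicide at age twenty.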
If the utility of the first ten or fifteen years of life is extremely negative
I think that’s getting at a non-silly defense of antinatalism: what if the average experience of middle school and high school years is absolutely terrible, outweighing other large chunks of life experience, and adults have simply forgotten for the sake of their sanity?
I don’t buy this, but it’s not completely silly. (However, it suggests a better Third Alternative exists: applying the Geneva Convention to school social life.)
adults have simply forgotten for the sake of their sanity?
not completely silly.
Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don’t they? Suicide is, I think, a good indicator that someone is having a bad life.
(Also, I’ve seen mentions on LW of studies that people raising kids are unhappier than if they were childless, but once the kids are older, they retrospectively think they were much happier than they actually were.)
Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don’t they? Suicide is, I think, a good indicator that someone is having a bad life.
Interesting. From page 30, suicide rates increase monotonically across the five age groups up to and including 45-54 (peaking at 17.2 per 100,000), but then drop by about 3 to 14.5 (age 55-64), drop another 2 for the 65-74 bracket (12.6), and then rise again after 75 (15.9).
So, I was right that the rates increase again in old age, but wrong about when the first spike was.
So, I was right that the rates increase again in old age, but wrong about when the first spike was.
Unfortunately, the age brackets don’t really tell you if there’s a teenage spike, except that if there is one, it happens after age 14. That 9.9 could actually be a much higher level concentrated within a few years, if I understand correctly.
Suicide rates may be higher in adolescence than at certain other times, but absolutely speaking, they remain very low, showing that most people are having a good life, and therefore refuting antinatalism.
My counterpoint to the above would be that if suicide rates are such a good metric, then why can they go up with affluence? (I believe this applies not just to wealthy nations (e.g. Japan, Scandinavia), but to individuals as well, but I wouldn’t hang my hat on the latter.)
Yes yes, this is an argument for suicide rates never going to zero—but again, the basic theory that suicide is inversely correlated, even partially, with quality of life would seem to be disproved by this point.
I think the misconception is that what is generally considered “quality of life” is not correlated with things like affluence. People like to believe (pretend?) that it is, and by ever striving for more affluence feel that they are somehow improving their “quality of life”.
When someone is depressed, their “quality of life” is quite low. That “quality of life” can only be improved by resolving the depression, not by adding the bells and whistles of affluence.
How to resolve depression is not well understood. A large part of the problem is that people who have never experienced depression don’t understand what it is and believe that things like more affluence will resolve it.
I don’t buy this, but it’s not completely silly. (However, it suggests a better Third Alternative exists: applying the Geneva Convention to school social life.)
Whenever anyone mentions how much it sucks to be a kid, I plug this article. It does suck, of course, but the suckage is a function of what our society is like, and not of something inherent about being thirteen years old.
By the standard you propose, “never having existed” is also inadequate unless you invent a method of timeless, all-Everett-branch means of never having existed. Whatever kids an antinatalist can stop from existing in this branch may still exist in other branches.
The reason I ask is that antinatalism is a contrarian position we think is silly, but has some smart supporters.
Do people here really think that antinatalism is silly? I disagree with the position (very strongly) but it isn’t a view that I consider to be silly in the same way that I would consider say, most religious beliefs to be silly.
But keep in mind that having smart supporters is by no means a strong indication that a viewpoint is not silly. For example, Jonathan Sarfati is a prominent young earth creationist who before he became a YEC proponent was a productive chemist. He’s also a highly ranked chess master. He’s clearly a bright individual. Now, you might be able to argue that YECism has a higher proportion of people who aren’t smart (There’s some evidence to back this up. See for example this breakdown of GSS data and also this analysis. Note that the metric used in the first one, the GSS WORDSUM, is surprisingly robust under education levels by some measures so the first isn’t just measuring a proxy for education.) That might function as a better indicator of silliness. But simply having smart supporters seems insufficient to conclude that a position is not silly.
It does however seem that on LW there’s a common tendency to label beliefs silly when what is meant is “I assign a very low probability to this belief being correct” or “I don’t understand how someone’s mind could be so warped as to have this belief.” Both of these are problematic, the second more so than the first, because different humans have different value systems. In this particular example, value systems that weight harm to others more heavily are more likely to support a coherent antinatalist position. In that regard, note that people are able to discuss things like paperclippers but seem to have more difficulty discussing value systems which are in many ways closer to their own. This may be simply because paperclipping is a simple moral system. It may also be because paperclipping is so far removed from our own moral systems that it becomes easy to map out in a consistent fashion, whereas something like antinatalism is close enough to people’s own moral systems that they conflate some of their own moral/ethical/value conclusions with those of the antinatalist, and this occurs subtly enough for them not to notice.
Do people here really think that antinatalism is silly?
A data point: I don’t think antinatalism (as defined by Roko above - ‘it is a bad thing to create people’) is silly under every set of circumstances, but neither is it obviously true under all circumstances. If my standard of living is phenomenally awful, and I knew my child’s life would be equally bad, it’d be bad to have a child. But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?
Let me restate what I mean more formally. Conditional on high living standards, high-quality parenting, and desire to raise a child, one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child. In which case I wouldn’t think the antinatalism position has legs.
one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child.
I’m not sure about this. It’s most likely that anything your kid does in life will get done by someone else instead. There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).
But even if this is true, it’s still not enough for antinatalism. Increasing total utility is not enough justification to create a life. The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)
I’m not sure about this. It’s most likely that anything your kid does in life will get done by someone else instead.
True—we might call the expected utility strangers get a wash because of this substitution effect. If we say the expected value most people get from me having a child is nil, it doesn’t contribute to the net expected value, but nor does it make it less positive.
There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).
It sounds as though that data’s based on samples of all types of parents, so it may not have much bearing on the subset of parents who (a) have stable (thanks NL!) high living standards, (b) are good at being parents, and (c) wanted their children. (Of course this just means the evidence is weak, not completely irrelevant.)
But even if this is true, it’s still not enough for antinatalism. Increasing total utility is not enough justification to create a life.
That’s a good point, I know of nothing in utilitarianism that says whose utility I should care about.
The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)
Whether or not someone agrees with this is going to depend on how much they care about risk aversion in addition to expected utility. (Prediction: antinatalists are more risk averse.) I think my personal level of risk aversion is too low for me to agree that I shouldn’t make any entity that has a chance of suffering negative personal utility.
I still think that it’s silly, because the common justification given for the position is highly suspect and borderline sophistry, and is, I suspect, not the causal reason for the values it purports to justify.
Yet because of moral antirealism, the mistake is subtle. And I have yet to find a critique of antinatalism that actually gives the correct (in my view) rebuttal. Most people who try to rebut it seem to also offer arguments that are tantamount to sophistry, i.e. they are not the causal reason for the person disagreeing with the view.
And I worry: am I making a similarly subtle mistake? And, as a contrarian with few good critics, would anyone present me with the correct counterargument?
I still think that it’s silly, because the common justification given for the position is highly suspect and borderline sophistry, and is, I suspect, not the causal reason for the values it purports to justify.
I’m curious what you think the causal justification is. I’m not a fan of imputing motives to people I disagree with rather than dealing with their arguments but one can’t help but notice that Heinrich Heine was paralyzed, blind and in constant pain for the last decade of his life. Moreover, his religious beliefs prevented him from committing suicide. In that context, antinatalism just in regards to one’s own life seems to make some sense. Thus one might think of antinatalism as arising in part from Other Optimizing
but one can’t help but notice that Heinrich Heine was paralyzed, blind and in constant pain for the last decade of his life. Moreover, his religious beliefs prevented him from committing suicide.
I promise that I genuinely did not know that when I wrote “I suspect, not the causal reason for the values it purports to justify” and thought “these people were just born with low happiness set points and they’re rationalizing”.
I don’t think antinatalism is silly, although I have not really tried to find problems with it yet. My current, not-fully-reflected position is that I would prefer never to have existed (if that’s indeed possible) but, given that I in fact exist, I do not want to die. I don’t, right now, see screaming incoherency here, although I’m suspicious.
I would very much appreciate anyone who can point out faultlines for me to investigate. I may be missing something very obvious.
If our contrarian position was as wrong as we think antinatalism is, would we realize?
If there was an argument for antinatalism that was capable of moving us, would we have seen it? Maybe not. A LessWrong post summarizing all of the good arguments for antinatalism would be a good idea.
If our contrarian position was as wrong as we think antinatalism is, would we realize?
We have many contrarian positions, but antinatalism is one position. Personally, I think that some of the contrarian positions that some people advocate here are indeed silly.
I knew someone would ask. :-) Ok, I’ll list some of my silliness verdicts, but bear in mind that I’m not interested in arguing for my assessments of silliness, because I think they’re too silly for me to bother with, and metadiscussion escalates silliness levels. Life is short (however long it may extend), and there are plenty of non-silly matters to think about. I generally don’t post on matters I’ve consigned to the not-even-wrong category, or vote them down for it.
Non-silly: cryonics, advanced nano, AGI, FAI, Bayesian superintelligence. (“Non-silly” doesn’t mean I agree with all of these, just that I think there are serious arguments in favour, whether or not I’m persuaded of them.)
Silly: we’re living in a simulation, there are infinitely many identical copies of all of us, “status” as a number on an FRP character sheet, any Omega conundrum that depends on Omega being absolutely known to be absolutely reliable.
Does anyone else think that some of the recurrent ideas here are silly?
Silly: we’re living in a simulation, there are infinitely many identical copies of all of us, “status” as a number on an FRP character sheet, any Omega conundrum that depends on Omega being absolutely known to be absolutely reliable....Utilitarianism of all types.
There’s an odd inconsistency in how you labeled these. The last is identified by name and the first seems similarly neutral, but the third and fourth (and maybe the second—there are a lot of things that could be referring to) are phrased to make it clear what you think is silly about them. This seems tactically poor, if you want to avoid discussion of these issues. (Or maybe the first and last are the mistake, but tactical diversity seems weird to me.)
Moreover, it seems hard for me to imagine that you pay so little attention to these topics that you believe that many people here support them as you’ve phrased them. Not that I have anything to say about the difference in what one should do in the two situations of encountering people who (1) endorse your silly summary of their position; vs (2) seem to make a silly claim, but also claim to distinguish it from your silly summary. Of course, most of the time silly claims are far away and you never find out whether the people endorse your summary.
What probability would you assign then to a well respected, oft-televised, senior scientist and establishment figure arguing in favour of the simulation hypothesis? (And I don’t mean Nick Bostrom. I mean someone who heads government committees and has tea with the queen)
What probability would you assign to a well respected, oft-televised, senior scientist and establishment figure arguing in favor of an incompatibilist theory of free will?
I don’t think that incompatibilism is so silly it’s not worth talking about. In fact, it’s not actually wrong; it is simply a matter of how you define the term “free will”.
Close to 1 as makes no difference, since I don’t think you would ask this unless there was such a person. (Tea with the queen? Does that correlate positively or negatively with eccentricity, I wonder?)
Before anyone gets offended at my silliness verdicts (presuming you don’t find them too silly to get offended by), these are my judgements on the ideas, not on the people holding them.
Ok, but the point of the question is to try to arrive at true beliefs. So imagine forgetting that I’d asked the question. What does your model of the world, which says that simulation is silly, say for the probability that a major establishment scientist who is in no way a transhumanist, believes that we could be in a simulation? If it assigns too low a probability, maybe you should consider assigning some probability to alternative models?
I would not be at all surprised. No speculation is too silly to have been seriously propounded by some philosopher or other, and lofty state gives no immunity to silliness.
[ETA: And of course, I’m talking about ideas that I’ve judged silly despite their being seriously propounded by (some) folks here on LessWrong that I think are really smart, and after reading a whole lot of their stuff before arriving at that conclusion. So one more smart person, however prestigious, isn’t going to make a difference.]
But you changed it to “could be”. Sure, could be, but that’s like Descartes’ speculations about a trickster demon faking all our sensations. It’s unfalsifiable unless you deliberately put something into the speculation to let the denizens discover their true state, but at that point you’re just writing speculative fiction.
But if this person is arguing that we probably are in a simulation, then no, I just tune that out.
So the bottom line of your reasoning is quite safe from any evidential threats?
In one sense, yes, but in another sense....yes.
First sense: I have a high probability for speculations on whether we are living in a simulation (or any of the other ideas I dismiss) not being worth my while outside of entertaining fictions. As a result, evidence to the contrary is unlikely to reach my notice, and even if it does, it has a lot of convincing to do. In that sense, it is as safe as any confidently held belief is from evidential threats.
Second sense: Any evidential threats at all? Now we’re into unproductive navel-gazing. If, as a proper Bayesian, I make sure that my probabilities are never quite equal to 1, and therefore answer that my belief must be threatened by some sort of evidence, the next thing is you’ll ask what that evidence might be. But why should anyone have to be able to answer that question? If I choose to question some idea I have, then, yes, I must decide what possible observations I might make that would tell either way. This may be a non-trivial task. (Perhaps for reasons relating to the small world/large world controversy in Bayesian reasoning, but I haven’t worked that out.) But I have other things to do—I cannot be questioning everything all the time. The “silly” ideas are the ones I can’t be bothered spending any time on at all even if people are talking about them on my favorite blog, and if that means I miss getting in on the ground floor of the revelation of the age, well, that’s the risk I accept in hitting the Ignore button.
So in practice, yes, my bottom line on this matter (which was not written down in advance, but reached after having read a bunch of stuff of the sort I don’t read any more) is indeed quite safe. I don’t see anything wrong with that.
Besides that, I am always suspicious of this question, “what would convince you that you are wrong?” It’s the sort of thing that creationists arguing against evolution end up saying. After vigorously debating the evidence and making no headway, the creationist asks, “well, what would convince you?”, to which the answer is that to start with, all of the evidence that has just been gone over would have to go away. But in the creationist’s mind, the greater their failure to convince someone, the greater the proof that they’re right and the other wrong. “Consider it possible that you are mistaken” is the sound of a firing pin clicking on an empty chamber.
“what would convince you that you are wrong?” It’s the sort of thing that creationists arguing against evolution
But a proponent of evolution can easily answer this, for example if they went to the fossil record and found it showed that all and only existing creatures’ skeletons appeared 6000 years ago, and that radiocarbon dating showed that the earth was 6000 years old.
I’m baffled at the idea that the simulation hypothesis is silly. It can be rephrased “We are not at the top level of reality.” Given that we know of lower levels of reality (works of fiction, artificial life programs, dreams) it seems unlikely we’re at the top.
I’m baffled at the idea that the simulation hypothesis is silly. It can be rephrased “We are not at the top level of reality.” Given that we know of lower levels of reality (works of fiction, artificial life programs, dreams) it seems unlikely we’re at the top.
Do you have any evidence that any of those levels have anything remotely approximating observers? (I’ll add the tiny data point that I’ve had dreams where characters have explicitly claimed to be aware. In one dream I and everyone around was aware that it was a dream and that it was my dream. They wanted me to not go on a mission to defeat a villain since if I died I’d wake up and their world would cease to exist. I’m willing to put very high confidence on the hypothesis that no observers actually existed.)
I agree that the simulationist hypothesis is not silly, but this is primarily due to the apparently high probability that we will at some point be able to simulate intelligent beings with great accuracy.
Reality isn’t stratified. A simulated world constitutes a concept of its own, apart from being referenced by the enclosing worlds. Two worlds can simulate each other to an equal degree.
I mostly agree with your list of silly ideas, though I’m not entirely sure what an FRP character sheet is and I do think status explanations are quite important so probably disagree on that one. I’d add utilitarianism to the list of silly ideas as well.
FRP = fantasy role-playing, i.e. Dungeons & Dragons and the like. A character sheet is a list of the attributes of the character you’re playing, things like Strength=10, Wisdom=8, Charisma=16, etc. (each number obtained by rolling three dice and adding them together). There are rules about what these attributes mean (e.g. on attempting some task requiring especial Charisma, roll a 20-sided die and if the number is less than your Charisma you succeed). Then there are circumstances that will give you additional points for an attribute or take them away, e.g. wearing a certain enchanted ring might give you +2 to Charisma.
Discussions of “status” here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.
Discussions of “status” here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.
Sometimes, yes. However, in many situations, the mere recognition that status considerations play an important role—even if stated in the crudest possible character-sheet sort of way—can be a tremendous first step in dispelling widespread, deeply entrenched naive and misguided views of human behavior and institutions.
Unfortunately, since a precise technical terminology for discussing the details of human status dynamics doesn’t (yet?) exist, it’s often very difficult to do any better.
I’m not entirely opposed to the idea. 6 billion is enough for now. Make more when we expand and distance makes it infeasible to concentrate neg-entropy on the available individuals. This is quite different from the Robin Hanson ‘make as many humans as physically possible and have them living in squalor’ (exaggerated) position, but it is probably also in complete disagreement with the arguments used for antinatalism.
Either antinatalism is futile in the long run, or it is an existential threat.
If we assume that antinatalism is rational, then in the long run it will shrink the part of the human population that is capable of, or trained in, making rational decisions, thus making antinatalists’ efforts futile. And as we can see, the people who should be most susceptible to antinatalism don’t even consider the option (en masse, at least). Given their circumstances they have a clear reason for that: every extra child makes it less likely that they will starve in old age, since more children mean more chances for the family to control more resources. It is a big prisoner’s dilemma, in which the defectors win.
Edit: Post-humans are not considered. They will have other means to acquire resources.
Edit: My point: antinatalism can be rational for individuals, but it cannot be rational for humankind to accept (even if it is universally true as antinatalists claim).
Antinatalism is the argument that it is a bad thing to create people.
What arguments do people have against this position?
Why do you link to a blog, rather than an introduction or a summary? Is this to test whether we find it so silly that we don’t look for their best arguments?
My impression is that antinatalists are highly verbal people who base their idea of morality on how people speak about morality, ignoring how people act. They get the idea that morality is about assigning blame and so feel compelled only to worry about bad acts, thus becoming strict negative utilitarians or rights-deontologists with very strict and uncommon rights. I am not moved by such moralities.
Maybe some make more factual claims, e.g., that most lives are net negative or that reflective life would regret itself. These seem obviously false, but I don’t see that they matter. These arguments should not have much impact on the actions of the utilitarians that they seem aimed at. They should build a superhuman intelligence to answer these questions and implement the best course of action. If human lives are not worth living, then other lives may be. If no lives are worth living, then a superintelligence can arrange for no lives to be led, while people evangelizing antinatalism aren’t going to make a difference.
Incidentally, Eliezer sometimes seems to be an anti-human-natalist.
Even if antinatalism is true at present (I have no major opinion on the issue yet), it need not be true in all possible future scenarios.
In fact, should the human race shrink significantly [due to antinatalism perhaps], without societal collapse, the average utility of a human life should increase. I find it highly unlikely that even the maximum average utility is still less than zero.
Why shouldn’t having a higher population lead to greater specialization of labor, economies of scale, greater gains from trade, and thus greater average utility?
Resource limitations.
There is only a limited amount of any given resource available. Decreasing the number of people therefore increases the amount of resource available per person.
There is a point at which decreasing the population will begin decreasing average utility, but to me it seems nigh certain that that point is significantly below the current population.
I could be wrong, and if I am wrong I would like to know.
Do you feel that the current population is optimum, below optimum, or above optimum?
Because of the law of diminishing returns (marginal utility). If you have a billion humans, one more (or one fewer) results in a bigger increase (or decrease) in utility than if you have a trillion.
Whose utility? The extra human’s utility will be the same in both cases.
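The tradeoff being debated here — specialization and trade gains versus finite resources — can be put in a toy model. Everything below is an illustrative assumption, not anything claimed in the thread: suppose average utility gains log-like benefits from a larger population but pays a linear crowding cost.

```python
import math

# Toy model of average utility per person as a function of population n.
# Assumed functional form and constants (purely illustrative):
# gains from specialization/trade grow like log(n); finite resources
# impose a linear per-person crowding cost B*n.
A = 1.0    # strength of specialization/trade gains (assumed)
B = 0.05   # crowding cost from finite resources (assumed)

def avg_utility(n):
    return A * math.log(n) - B * n

populations = range(1, 101)
best = max(populations, key=avg_utility)
print(best)  # interior optimum at A/B = 20 in this toy setup
```

The point of the sketch is only that both effects can be real at once: below the optimum, adding people raises average utility; above it, resource limits dominate. Where the real optimum sits relative to the current population is exactly what the commenters disagree about.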
I have long wrestled with the idea of antinatalism, so I should have something to say here. Certainly there were periods in my life in which I thought that the creation of life is the supreme folly.
We all know that terrible things happen, that should never happen to anyone. The simplest antinatalist argument of all is, that any life you create will be at risk of such intolerably bad outcomes; and so, if you care, the very least you can do is not create new life. No new life, no possibility of awful outcomes in it, problem avoided! And it is very easy to elaborate this into a stinging critique of anyone who proposes that nonetheless one shouldn’t take this seriously or absolutely (because most people are happy, most people don’t commit suicide, etc). You intend to gamble with this new life you propose to create, simply because you hope that it won’t turn out terribly? And this gamble you propose appears to be completely unnecessary—it’s not as if people have children for the greater good. Etc.
A crude utilitarian way to moderate the absoluteness of this conclusion would be to say, well, surely some lives are worth creating, and it would make a lot of people sad to never have children, so we reluctantly say to the ones who would be really upset to forego reproduction, OK, if you insist… but for people who can take it, we could say: There is always something better that you could do with your life. Have the courage not to hide from the facts of your own existence in the boisterous distraction of naive new lives.
It is probably true that philanthropic antinatalists, like the ones at the blog to which you link, are people who have personally experienced some profound awfulness, and that is why they take human suffering with such deadly seriousness. It’s not just an abstraction to them. For example, Jim Crawford (who runs that blog) was once almost killed in a sword attack, had his chest sliced open, and after they stitched him up, literally every breath was agonizing for a long time thereafter. An experience like that would sensitize you to the reality of things which luckier people would prefer not to think about.
Seems like loss aversion bias.
Sure, bad things happen, but so do good things. You need to do an expected utility calculation for the person you’re about to create: P(Bad)U(Bad) + P(Good)U(Good)
P(Sword attack) seems to be pretty darn low.
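The expected-utility formula in the parent comment can be made concrete. The probabilities and utilities below are made-up placeholders, not claims about actual lives:

```python
# P(Bad)U(Bad) + P(Good)U(Good), as in the parent comment.
# All numbers are made-up placeholders for illustration only.
p_bad, u_bad = 0.05, -1000.0    # rare but severe outcomes (assumed)
p_good, u_good = 0.95, 100.0    # common, moderately good outcomes (assumed)

expected_utility = p_bad * u_bad + p_good * u_good
print(expected_utility)  # 45.0: positive despite a large-magnitude U(Bad)
```

With these placeholder numbers the expectation comes out positive even though the bad outcome is ten times worse than the good outcome is good — which is the anti-loss-aversion point being made; an antinatalist would dispute the inputs, not the arithmetic.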
I think that for you, a student of the singularity concept, to arrive at a considered and consistent opinion regarding antinatalism, you need to make some judgments regarding the quality of human life as it is right now, “pre-singularity”.
Suppose there is no possibility of a singularity. Suppose the only option for humanity is life more or less as it is now—ageing, death, war, economic drudgery, etc, with the future the same as the past. Everyone who lives will die; most of them will drudge to stay alive. Do you still consider the creation of a human life justifiable?
Do you have any personal hopes attached to the singularity? Do you think, yes, it could be very bad, it could destroy us, that makes me anxious and affects what I do; but nonetheless, it could also be fantastic, and I derive meaning and hope from that fact?
If you are going to affirm the creation of human life under present conditions, but if you are also deriving hope from the anticipation of much better future conditions, then you may need to ask yourself how much of your toleration of the present derives from the background expectation of a better future.
It would be possible to have the attitude that life is already great and a good singularity would just make it better; or that the serious possibility of a bad singularity is enough for the idea to urgently command our attention; but it’s also clear that there are people who either use singularity hope to sustain them in the present, or who have simply grown up with the concept and haven’t yet run into difficulty.
I think the combination of transhumanism and antinatalism is actually a very natural one. Not at all an inevitable one; biotechnology, for example, is all about creating life. But if you think, for example, that the natural ageing process is intolerable, something no-one should have to experience, then probably you should be an antinatalist.
I personally would still want to have been born even if a glorious posthuman future were not possible, but the margin of victory for life over death becomes maybe a factor of 100 thinner.
The antinatalist argument goes that humans suffer more than they have fun, therefore not living is better than living. Why don’t they convert their loved ones to the same view and commit suicide together, then? Or seek out small isolated communities and bomb them for moral good.
I believe the answer to antinatalism is that pleasure != utility. Your life (and the lives of your hypothetical kids) could create net positive utility despite containing more suffering than joy. The “utility functions” or whatever else determines our actions contain terms that don’t correspond to feelings of joy and sorrow, or are out of proportion with those feelings.
The suicide challenge is a non sequitur, because death is not equivalent to never having existed, unless you invent a method of timeless, all-Everett-branch suicide.
Precisely.
If the utility of the first ten or fifteen years of life is extremely negative, and the utility of the rest slightly positive, then it can be logical to believe that not being born is better than being born, but suicide (after a certain age) is worse than either.
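The parent comment’s ordering can be checked with illustrative numbers (all assumed): if the early years are very negative and the remainder mildly positive, never being born beats being born, yet suicide after the bad years is the worst of the three options, since the bad years are sunk and only positive utility remains.

```python
# Assumed, purely illustrative utilities for the scenario above:
u_early = -50.0   # ages ~0-15, extremely negative (assumed)
u_later = 10.0    # the rest of life, slightly positive (assumed)

never_born = 0.0
full_life = u_early + u_later    # -40.0
suicide_at_15 = u_early          # -50.0: the bad years happened anyway

# Ordering matches the comment: never being born > living a full life,
# but once past the early years, continuing beats suicide.
print(never_born > full_life > suicide_at_15)
```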
I think that’s getting at a non-silly defense of antinatalism: what if the average experience of middle school and high school years is absolutely terrible, outweighing other large chunks of life experience, and adults have simply forgotten for the sake of their sanity?
I don’t buy this, but it’s not completely silly. (However, it suggests a better Third Alternative exists: applying the Geneva Convention to school social life.)
Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don’t they? Suicide is, I think, a good indicator that someone is having a bad life.
(Also, I’ve seen mentions on LW of studies that people raising kids are unhappier than if they were childless, but once the kids are older, they retrospectively think they were much happier than they actually were.)
Suicide rates start at 0.5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors.
Interesting. From page 30, suicide rates increase monotonically in the 5 age groups up to and including 45-54 (peaking at 17.2 per 100,000), but then drops by 3 to 14.5 (age 55-64) and drops another 2 for the 65-74 age bracket (12.6), and then rises again after 75 (15.9).
So, I was right that the rates increase again in old age, but wrong about when the first spike was.
Unfortunately, the age brackets don’t really tell you if there’s a teenage spike, except that if there is one, it happens after age 14. That 9.9 could actually be a much higher level concentrated within a few years, if I understand correctly.
Suicide rates may be higher in adolescence than at certain other times, but absolutely speaking, they remain very low, showing that most people are having a good life, and therefore refuting antinatalism.
Suicide rates are not a good measure of how good life is except at a very rough level since humans have very strong instincts for self-preservation.
My counterpoint to the above would be that if suicide rates are such a good metric, then why can they go up with affluence? (I believe this applies not just to wealthy nations (e.g. Japan, Scandinavia), but to individuals as well, but I wouldn’t hang my hat on the latter.)
Suicide rates are a measure of depression, not of how good life is. Depression can hit people even when they otherwise have a very good life.
Yes yes, this is an argument for suicide rates never going to zero—but again, the basic theory that suicide is inversely correlated, even partially, with quality of life would seem to be disproved by this point.
I think the misconception is that what is generally considered “quality of life” is not correlated with things like affluence. People like to believe (pretend?) that it is, and by ever striving for more affluence feel that they are somehow improving their “quality of life”.
When someone is depressed, their “quality of life” is quite low. That “quality of life” can only be improved by resolving the depression, not by adding the bells and whistles of affluence.
How to resolve depression is not well understood. A large part of the problem is people who have never experienced depression, don’t understand what it is and believe that things like more affluence will resolve it.
I suspect the majority of adolescents would also deny wishing they had never been born.
I’m surprised the Paul Graham essay “Why Nerds are Unpopular” wasn’t linked there.
Whenever anyone mentions how much it sucks to be a kid, I plug this article. It does suck, of course, but the suckage is a function of what our society is like, and not of something inherent about being thirteen years old.
Why Nerds Hate Grade School
By the standard you propose, “never having existed” is also inadequate unless you invent a timeless, all-Everett-branch means of never having existed. Whatever kids an antinatalist can stop from existing in this branch may still exist in other branches.
Here’s one: I bet if you asked lots of people whether their birth was a good thing, most of them would say yes.
If it turns out that after sufficient reflection, people, on average, regard their birth as a bad thing, then this argument breaks down.
They have an answer to that.
The reason I ask is that antinatalism is a contrarian position we think is silly, but has some smart supporters.
If our contrarian position was as wrong as we think antinatalism is, would we realize?
Do people here really think that antinatalism is silly? I disagree with the position (very strongly) but it isn’t a view that I consider to be silly in the same way that I would consider say, most religious beliefs to be silly.
But keep in mind that having smart supporters is by no means a strong indication that a viewpoint is not silly. For example, Jonathan Sarfati is a prominent young earth creationist who before he became a YEC proponent was a productive chemist. He’s also a highly ranked chess master. He’s clearly a bright individual. Now, you might be able to argue that YECism has a higher proportion of people who aren’t smart (There’s some evidence to back this up. See for example this breakdown of GSS data and also this analysis. Note that the metric used in the first one, the GSS WORDSUM, is surprisingly robust under education levels by some measures so the first isn’t just measuring a proxy for education.) That might function as a better indicator of silliness. But simply having smart supporters seems insufficient to conclude that a position is not silly.
It does however seem that on LW there’s a common tendency to label beliefs silly when what is meant is “I assign a very low probability to this belief being correct” or “I don’t understand how someone’s mind could be so warped as to have this belief.” Both of these are problematic, the second more so than the first, because different humans have different value systems. In this particular example, value systems that weight harm to others more heavily are more likely to be able to make a coherent antinatalist position. In that regard, note that people are able to discuss things like paperclippers but seem to have more difficulty discussing value systems which are in many ways closer to their own. This may be simply because paperclipping is a simple moral system. It may also be because it is so far removed from their own moral systems that it becomes easier to map out in a consistent fashion, whereas something like antinatalism is close enough to their own moral system that people conflate some of their own moral/ethical/value conclusions with those of the antinatalist, and this occurs subtly enough for people not to notice.
A data point: I don’t think antinatalism (as defined by Roko above - ‘it is a bad thing to create people’) is silly under every set of circumstances, but neither is it obviously true under all circumstances. If my standard of living is phenomenally awful, and I knew my child’s life would be equally bad, it’d be bad to have a child. But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?
That your child might experience a great deal of pain which you could prevent by not having it.
That your child might regret being born and wish you had made the other decision.
That you can be a good parent, raise a kid, and improve someone’s life without having a kid (adopt).
That the world is already overpopulated and our natural resources are not infinite.
Points taken.
Let me restate what I mean more formally. Conditional on high living standards, high-quality parenting, and desire to raise a child, one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child. In which case I wouldn’t think the antinatalism position has legs.
I’d throw in considering how stable you think those high living standards are.
I’m not sure about this. It’s most likely that anything your kid does in life will get done by someone else instead. There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).
But even if this is true, it’s still not enough for antinatalism. Increasing total utility is not enough justification to create a life. The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)
True—we might call the expected utility strangers get a wash because of this substitution effect. If we say the expected value most people get from me having a child is nil, it doesn’t contribute to the net expected value, but nor does it make it less positive.
It sounds as though that data’s based on samples of all types of parents, so it may not have much bearing on the subset of parents who (a) have stable (thanks NL!) high living standards, (b) are good at being parents, and (c) wanted their children. (Of course this just means the evidence is weak, not completely irrelevant.)
That’s a good point, I know of nothing in utilitarianism that says whose utility I should care about.
Whether or not someone agrees with this is going to depend on how much they care about risk aversion in addition to expected utility. (Prediction: antinatalists are more risk averse.) I think my personal level of risk aversion is too low for me to agree that I shouldn’t make any entity that has a chance of suffering negative personal utility.
I still think that it’s silly, because the common justification given for the position is highly suspect and borderline sophistry, and is, I suspect, not the causal reason for the values it purports to justify.
Yet because of moral antirealism, the mistake is subtle. And I have yet to find a critique of antinatalism that actually gives the correct (in my view) rebuttal. Most people who try to rebut it seem to also offer arguments that are tantamount to sophistry, i.e. they are not the causal reason for the person disagreeing with the view.
And I worry: am I making a similarly subtle mistake? And as a contrarian with few good critics, would anyone present me with the correct counterargument?
I’m curious what you think the causal justification is. I’m not a fan of imputing motives to people I disagree with rather than dealing with their arguments, but one can’t help but notice that Heinrich Heine was paralyzed, blind and in constant pain for the last decade of his life. Moreover, his religious beliefs prevented him from committing suicide. In that context, antinatalism just in regards to one’s own life seems to make some sense. Thus one might think of antinatalism as arising in part from Other Optimizing.
I promise that I genuinely did not know that when I wrote “I suspect, not the causal reason for the values it purports to justify.” and thought “these people were just born with low happiness set points and they’re rationalizing”
I don’t think antinatalism is silly, although I have not really tried to find problems with it yet. My current, not-fully-reflected position is that I would have prefer to not have existed (if that’s indeed possible) but, given that I in fact exist, I do not want to die. I don’t, right now, see screaming incoherency here, although I’m suspicious.
I would very much appreciate anyone who can point out faultlines for me to investigate. I may be missing something very obvious.
If there was an argument for antinatalism that was capable of moving us, would we have seen it? Maybe not. A LessWrong post summarizing all of the good arguments for antinatalism would be a good idea.
We have many contrarian positions, but antinatalism is one position. Personally, I think that some of the contrarian positions that some people advocate here are indeed silly.
Such as?
I knew someone would ask. :-) Ok, I’ll list some of my silliness verdicts, but bear in mind that I’m not interested in arguing for my assessments of silliness, because I think they’re too silly for me to bother with, and metadiscussion escalates silliness levels. Life is short (however long it may extend), and there are plenty of non-silly matters to think about. I generally don’t post on matters I’ve consigned to the not-even-wrong category, or vote them down for it.
Non-silly: cryonics, advanced nano, AGI, FAI, Bayesian superintelligence. (“Non-silly” doesn’t mean I agree with all of these, just that I think there are serious arguments in favour, whether or not I’m persuaded of them.)
Silly: we’re living in a simulation, there are infinitely many identical copies of all of us, “status” as a number on an FRP character sheet, any Omega conundrum that depends on Omega being absolutely known to be absolutely reliable.
Does anyone else think that some of the recurrent ideas here are silly?
ETA: Non-silly: the mission of LessWrong. Silly: Utilitarianism of all types.
There’s an odd inconsistency in how you labeled these. The last is identified by name and the first seems similarly neutral, but the third and fourth (and maybe the second—there are a lot of things that could be referring to) are phrased to make it clear what you think is silly about them. This seems tactically poor, if you want to avoid discussion of these issues. (or maybe the first and last are the mistake, but tactical diversity seems weird to me)
Moreover, it seems hard for me to imagine that you pay so little attention to these topics that you believe that many people here support them as you’ve phrased them. Not that I have anything to say about the difference in what one should do in the two situations of encountering people who (1) endorse your silly summary of their position; vs (2) seem to make a silly claim, but also claim to distinguish it from your silly summary. Of course, most of the time silly claims are far away and you never find out whether the people endorse your summary.
What probability would you assign then to a well respected, oft-televised, senior scientist and establishment figure arguing in favour of the simulation hypothesis? (And I don’t mean Nick Bostrom. I mean someone who heads government committees and has tea with the queen)
What probability would you assign to a well respected, oft-televised, senior scientist and establishment figure arguing in favor of an incompatibilist theory of free will?
I don’t think that incompatibilism is so silly it’s not worth talking about. In fact it’s not actually wrong; it is simply a matter of how you define the term “free will”.
Definitions are not a simple matter—I would claim that libertarian free will* is at least as silly as the simulation hypothesis.
But I don’t filter my conversation to ban silliness.
* I change my phrasing to emphasize that I can respect hard incompatibilism—the position that “free will” doesn’t exist.
Close to 1 as makes no difference, since I don’t think you would ask this unless there was such a person. (Tea with the queen? Does that correlate positively or negatively with eccentricity, I wonder?)
Before anyone gets offended at my silliness verdicts (presuming you don’t find them too silly to get offended by), these are my judgements on the ideas, not on the people holding them.
Ok, but the point of the question is to try to arrive at true beliefs. So imagine forgetting that I’d asked the question. What does your model of the world, which says that simulation is silly, say for the probability that a major establishment scientist who is in no way a transhumanist, believes that we could be in a simulation? If it assigns too low a probability, maybe you should consider assigning some probability to alternative models?
I would not be at all surprised. No speculation is too silly to have been seriously propounded by some philosopher or other, and lofty state gives no immunity to silliness.
[ETA: And of course, I’m talking about ideas that I’ve judged silly despite their being seriously propounded by (some) folks here on LessWrong that I think are really smart, and after reading a whole lot of their stuff before arriving at that conclusion. So one more smart person, however prestigious, isn’t going to make a difference.]
But you changed it to “could be”. Sure, could be, but that’s like Descartes’ speculations about a trickster demon faking all our sensations. It’s unfalsifiable unless you deliberately put something into the speculation to let the denizens discover their true state, but at that point you’re just writing speculative fiction.
But if this person is arguing that we probably are in a simulation, then no, I just tune that out.
So the bottom line of your reasoning is quite safe from any evidential threats?
In one sense, yes, but in another sense....yes.
First sense: I have a high probability for speculations on whether we are living in a simulation (or any of the other ideas I dismiss) not being worth my while outside of entertaining fictions. As a result, evidence to the contrary is unlikely to reach my notice, and even if it does, it has a lot of convincing to do. In that sense, it is as safe as any confidently held belief is from evidential threats.
Second sense: Any evidential threats at all? Now we’re into unproductive navel-gazing. If, as a proper Bayesian, I make sure that my probabilities are never quite equal to 1, and therefore answer that my belief must be threatened by some sort of evidence, the next thing is you’ll ask what that evidence might be. But why should anyone have to be able to answer that question? If I choose to question some idea I have, then, yes, I must decide what possible observations I might make that would tell either way. This may be a non-trivial task. (Perhaps for reasons relating to the small world/large world controversy in Bayesian reasoning, but I haven’t worked that out.) But I have other things to do—I cannot be questioning everything all the time. The “silly” ideas are the ones I can’t be bothered spending any time on at all even if people are talking about them on my favorite blog, and if that means I miss getting in on the ground floor of the revelation of the age, well, that’s the risk I accept in hitting the Ignore button.
So in practice, yes, my bottom line on this matter (which was not written down in advance, but reached after having read a bunch of stuff of the sort I don’t read any more) is indeed quite safe. I don’t see anything wrong with that.
Besides that, I am always suspicious of this question, “what would convince you that you are wrong?” It’s the sort of thing that creationists arguing against evolution end up saying. After vigorously debating the evidence and making no headway, the creationist asks, “well, what would convince you?”, to which the answer is that to start with, all of the evidence that has just been gone over would have to go away. But in the creationist’s mind, the greater their failure to convince someone, the greater the proof that they’re right and the other wrong. “Consider it possible that you are mistaken” is the sound of a firing pin clicking on an empty chamber.
But a proponent of evolution can easily answer this, for example if they went to the fossil record and found it showed that all and only existing creatures’ skeletons appeared 6000 years ago, and that radiocarbon dating showed that the earth was 6000 years old.
The creationist generally puts his universal question after having unsuccessfully argued that the fossil record and radiocarbon dating support him.
I’m baffled at the idea that the simulation hypothesis is silly. It can be rephrased “We are not at the top level of reality.” Given that we know of lower levels of reality (works of fiction, artificial life programs, dreams) it seems unlikely we’re at the top.
Do you have any evidence that any of those levels have anything remotely approximating observers? (I’ll add the tiny data point that I’ve had dreams where characters have explicitly claimed to be aware. In one dream I and everyone around was aware that it was a dream and that it was my dream. They wanted me to not go on a mission to defeat a villain since if I died I’d wake up and their world would cease to exist. I’m willing to put very high confidence on the hypothesis that no observers actually existed.)
I agree that the simulationist hypothesis is not silly but this is primarily due to the apparently high probability that we will at some point be able to simulate intelligence beings with great accuracy.
Reality isn’t stratified. A simulated world constitutes a concept of its own, apart from being referenced by the enclosing worlds. Two worlds can simulate each other to an equal degree.
I mostly agree with your list of silly ideas, though I’m not entirely sure what an FRP character sheet is and I do think status explanations are quite important so probably disagree on that one. I’d add utilitarianism to the list of silly ideas as well.
Agreed about utilitarianism.
FRP = fantasy role-playing, i.e. Dungeons & Dragons and the like. A character sheet is a list of the attributes of the character you’re playing, things like Strength=10, Wisdom=8, Charisma=16, etc. (each number obtained by rolling three dice and adding them together). There are rules about what these attributes mean (e.g. on attempting some task requiring especial Charisma, roll a 20-sided die and if the number is less than your Charisma you succeed). Then there are circumstances that will give you additional points for an attribute or take them away, e.g. wearing a certain enchanted ring might give you +2 to Charisma.
Discussions of “status” here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.
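The character-sheet rules described above are easy enough to simulate. This is just a sketch of the mechanics the comment describes (3d6 for an attribute, roll-under on a d20, an item bonus); the specific numbers are the comment’s examples:

```python
import random

def roll_attribute(rng):
    # Three six-sided dice summed, as in the comment: range 3-18.
    return sum(rng.randint(1, 6) for _ in range(3))

def attempt_task(attribute, rng):
    # Roll a 20-sided die; succeed if the roll is under the attribute.
    return rng.randint(1, 20) < attribute

rng = random.Random(0)
charisma = roll_attribute(rng)
charisma += 2  # the enchanted ring's +2 Charisma, per the comment
print(charisma, attempt_task(charisma, rng))
```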
RichardKennaway:
Sometimes, yes. However, in many situations, the mere recognition that status considerations play an important role—even if stated in the crudest possible character-sheet sort of way—can be a tremendous first step in dispelling widespread, deeply entrenched naive and misguided views of human behavior and institutions.
Unfortunately, since a precise technical terminology for discussing the details of human status dynamics doesn’t (yet?) exist, it’s often very difficult to do any better.
Could you expand on how those discussions of status here and on OB are different from what you’d see as a more realistic discussion of status?
I never replied to this, but this is an example of what I think is a more realistic discussion.
I’m not entirely opposed to the idea. 6 billion is enough for now. Make more when we expand and distance makes it infeasible to concentrate neg-entropy on the available individuals. This is quite different from the Robin Hanson ‘make as many humans as physically possible and have them living in squalor’ (exaggerated) position, but probably also in complete disagreement with the arguments used for antinatalism.
Either antinatalism is futile in the long run, or it is an existential threat.
If we assume that antinatalism is rational, then in the long run it will lead to a reduction of the part of the human population that is capable of, or trained in, making rational decisions, thus making antinatalists’ efforts futile. As we can see, the people who should be most susceptible to antinatalism don’t even consider this option (en masse, at least). And given their circumstances they have a clear reason for that: every extra child makes it less likely for them to starve to death in old age, since more children mean more chances for the family to control more resources. It is a big prisoner’s dilemma, in which defectors win.
Edit: Post-humans are not considered here; they will have other means of acquiring resources.
Edit: My point: antinatalism can be rational for individuals, but it cannot be rational for humankind to accept (even if it is universally true as antinatalists claim).