Closet survey #1
What do you believe that most people on this site don’t?
I’m especially looking for things that you wouldn’t even mention if someone wasn’t explicitly asking for them. Stuff you’re not even comfortable writing under your own name. Making a one-shot account here is very easy; go ahead and do that if you don’t want to tarnish your image.
I think a big problem with a “community” dedicated to being less wrong is that it will make people more concerned about APPEARING less wrong. The biggest part of my intellectual journey so far has been the acquisition of new and startling knowledge, and that knowledge doesn’t seem likely to turn up here in the conditions that currently exist.
So please, tell me the crazy things you’re otherwise afraid to say. I want to know them, because they might be true.
I don’t know if I actually believe this, but I’ve heard reports that cause me to assign a non-negligible probability to the chance that sexual relations between children and adults sometimes don’t cause harm. For instance, see the Rind et al. report:
It’s hopefully obvious that this is not a justification for having sex with children, even if there is one controversial study suggesting it might sometimes be okay.
That’s probably the case. In western societies, it’s an orthodoxy, a moral fashion, to say that sex between children/adolescents and adults is bad. This can be clearly seen because people who argue against the orthodoxy are not criticised for being wrong, but condemned for being bad.
I am also “in the closet” on this. Sex is generally pleasurable; postulating a magic age or stage of development before which sex must be traumatic seems implausible on its face, without some other evidence. Coercion and intimidation are well-known to be damaging, but I don’t understand how merely convincing a 10-year-old to let you stick something up her vagina (and then doing it) is going to do any more harm than, say, spanking her. Furthermore, looking at the historical record, the ancient Greek custom of pederasty (sexual/romantic relationships between adolescent boys and adult men) doesn’t seem to have resulted in widespread trauma.
There are very few places in which it would be safe to propose this hypothesis, though.
Not “generally” over the domain in question. The pleasurability of sex is supported by brain-specific hardware that has no particular evolutionary reason to be active before adolescence.
Without taking a stance on the question of child sexuality—what you say is true, but is there any particular selection pressure for it to be off, either? Evolution goes for the simplest solution, and “always on” seems to me simpler than “off until a specific age, then on”.
Of course, that’s an oversimplification. The required machinery may simply not be developed yet, in the same way that you need to first grow to be four feet tall before you can grow to be five feet tall. But then, when you reach the size of four feet, you already have four fifths of your five-feet-tallness in place, so it stands to reason that at least part of what makes sex pleasurable will be in place before adolescence. Whether it’s active is obviously a separate question, but I don’t think “has no particular evolutionary reason to be active” tells us much by itself.
Anecdotal: I don’t remember having the slightest concept of sexual interest in anything before puberty.
Anyone got trustworthy better data, go ahead (but we have reason to suspect political interference, which is why I go so far as to cite my own anecdotal memory).
I personally know one girl who, when she was 8, actively went into sex chat rooms and flirted with older men (anywhere from 16 to 40). I don’t think she actually had physical sexual experiences with anyone, though.
I personally know two girls who have had sexual intercourse with adults, one when she was aged 5, the other 8. It was rape in the sense that it was explicitly nonconsensual (they explicitly said they didn’t want to do it), but it didn’t traumatize them. One theory might be that “doing stuff you don’t want to do, but adults tell you to do, so you do it anyway” is pretty common at that age (e.g. being forced to clean your room).
I suspect the sex act itself isn’t “pleasurable” for them, but having “sexual relationships” with adults may be pleasurable (since the first-mentioned 8 year old sought it out). It may be seen by many of them as a neutral act (like the 5 and second-mentioned 8 year old) and a form of curious exploration.
This is assuming, for lack of a better term, “gentle loving pedophilia”. The way pedophilia is often portrayed by mainstream media is violent rape, with screaming, kicking and blood. While I don’t personally know of any girl who actually experienced “violent rape pedophilia”, I think it’s safe to assume that they don’t find this pleasurable at all.
Personally, there’s a certain fetish that I have, and I remember it causing me erections even before puberty. However, as far as I can recall, the experience didn’t feel like anything that I’d call sexual these days. It was something that was pleasant to think about, and it caused physical reactions, but the actual sexual tension wasn’t there.
I also recall a friend mentioning a pre-pubescent boy who’d had a habit of masturbating when there was snow outside, because he thought the snow was beautiful. (I’m not sure if she’d known the boy herself or if she’d heard it from someone else, so this may be an unreliable fifth-hand account.) If it was true, then it sounds (like my experience) as though part of the hardware was in place, but not the parts that would make it sexual in the adult sense of the word.
Googling for “child sexuality” gives me a report from Linköping University which states on page 17:
It does, however, also remark that child sexual abuse often causes sexualized behavior in children, and that very little is known about what is actually normal child sexuality. Interestingly, as it relates to the original topic, it also mentions a study that found one third of abuse victims to show no symptoms at all.
I wonder what kind of controls they had (ha, ha) that let them say that it caused the sexualized behavior, rather than just letting the children know about sex. I mean I was entirely ignorant of sex until I was 12. I knew it existed by reading and hearing references to it, and I had seen Playboys and the like, but I didn’t have any idea of what sex was.
Mostly the same here. I didn’t have any arousal-like physical reactions, though. It was mostly like the tension of roller coasters and scary stories, not sexual tension. Then, a couple years after puberty, my sex drive kicked in (in the space of days), the fetish was found impossible to handle and promptly repressed until a few years later when it could merge normally with my general libido.
From personal experience (which I am unfortunately too nervous about to go into detail about), pre-pubescent sexuality is primarily based on exposure and knowledge of sexuality. Puberty simply forces one to become aware of sex, rather than being a prerequisite for it. Similarly, sexual reactions (erections, orgasm, etc.) are definitely possible pre-pubescence, simply different. This may be an anomaly in my case, I do not have any non-personal data to share.
Although I do know that Alfred Kinsey compiled an extensive body of research on child sexuality obtained from interviews with pedophiles, in particular one pedophile who was highly active and documented his explorations extensively. I have never read this body of research myself, but I thought its existence might be worth pointing out.
Maybe no interest in anything in particular, but what of the sexual gratification itself? Children do masturbate; it’s a known fact. Though maybe it’s not universal. But the brain-specific hardware seems to be in place already at any rate.
http://www.med.umich.edu/1libr/pa/pa_bmasturb_hhg.htm
Also anecdotal: I clearly remember watching the same movie (Star Wars) before and after adolescence—the sexual tension passed me by completely as a child but was obvious a few years later.
However, I don’t have evidence that I’d not have enjoyed sex. The desire instinct was offline, that’s all I could swear to.
Also anecdotal: I have liked girls continuously since the age of 4. I do not recommend this....
This is also my experience.
I could be confusing Freudian stuff with real experimental results, but I seem to remember that children go through a stage up until about 6 where they’re somewhat sexual, and then between that age and puberty the sex drive switches off or even into full reverse. This is the reason that young boys tend to think girls have cooties and are gross, and vice versa. It’s evolution’s way of saying “Not yet”.
I can’t find the article now, but an evolutionary psychologist noticed that the “cooties” concept seems to exist across all cultures (though obviously not always under the name “cooties”), and furthermore noticed that children often don’t consider their siblings to have cooties. I.e. boys will feel that most girls have cooties, but not their sisters.
The psychologist offered this explanation: we evolved to find the people we grew up with sexually unattractive, as a mechanism to avoid incest (which can result in genetic problems). However, if you live in a society, you don’t want to find the people who grew up with you but don’t share your genes sexually unattractive (or else you might find no one within your whole society attractive). Thus evolution gave us the “cooties” sensation, so that we avoid people of the opposite sex during this critical period and may still, as adults, be sexually attracted to them later.
That “explanation” sounds awfully just-so-story to me.
Why would evolution want to say this? What harm is there in sexual relations before puberty, when pregnancy can’t result?
Anecdotal: Approximately 30% of the material on Quizilla et al. Whether they’re writing/reading about it solely because they think it’s adult and edgy is a different matter, but there are clearly many children thinking about this kind of thing at the very least.
Considering that, as has been noted elsewhere on this thread, prepubescent children (including infants) self-stimulate their genitals, this seems … ill-founded. Of course, I suppose it depends how much of the pleasure involves romance, which does seem to be restricted to adults; but I somehow doubt you can claim most of the pleasure from sex is due to romance.
I trust my memory of certain things as far back as a few vague impressions before age 2, and I’ve read other people’s reports, and I conclude that, while children do self-stimulate, it’s typically (but not always) less pleasurable than it is after puberty.
I haven’t read any neurological studies addressing that hypothesis in particular, but of course they could exist and I could be unaware of them.
Hmm, good point—I don’t actually know anything about the topic. Sounds like actual orgasm is impossible without puberty (although note it’s possible way before adulthood.) Still, pleasure is pleasure. Kids wouldn’t enjoy it as much as adults, but some of the adaptations are clearly present—enough for sex to be pleasurable, if not as pleasurable.
Mind you, I personally wouldn’t want to change that particular norm without a great deal of thought and investigation by actual experts. But this particular claim seems to be flawed.
I’ve encountered anecdotes claiming that a form of prepubescent orgasm is possible, if difficult to achieve (especially since most wouldn’t know to aim for it). I’m less convinced of that, but I remember someone actually providing a citation for “utero orgasms in both sexes” (which I assumed to mean while still in the womb).
An aside: I catch myself committing the mind projection fallacy most often when I come across comments that make it very clear people have purged large chunks of childhood from their memory/identity. It takes me a second or so to remember that this makes sense for most people. This has had a weird effect regarding the subject at hand: I’m surprised when I run into adult males talking like they don’t believe boys can get erections, then I’m skeptical when someone else reports that prepubescent males can have orgasms. Noticing the pattern there has me updating in favor of prepubescent orgasm being possible, if difficult.
I find it ironic that ‘notmyrealnick’ got 34 points for this comment. But I suppose there are repercussions other than bad karma for posting unpopular views...
Even if the children themselves don’t, after the fact, consider the sexual abuse harmful, it may be considered wrong by humanity as a whole. The babyeaters prefer eating their children, but humans would like them to stop doing that. A drug addict continues to take drugs even if they lead to the decay of his personality and health, but other people consider it a wrong thing to do. Even if it turns out that with (consensually) abused children the moral line is closer to acceptance, I still expect it to be well below the acceptance level.
The babyeater question would be substantially changed if the children didn’t mind being eaten and didn’t take harm by it—more or less from a moral crusade into parochial squeamishness. Eliezer went a long way out of his way to avoid that in the story, but here we can’t dodge it with a rhetorical flourish.
If as it turns out, kids enjoy consensual sex and take no harm by it, on what basis can society consider it wrong? There has to be a reason. Societies can’t just create moral crimes by their say-so.
Edit in Feb 2013: I’ve come to the conclusion that the problem with the above is that children are in an extremely steep power relationship—an artefact of this society, and it’s avoidable, but it can’t be wished away without a huge job of dismantling. Meaning, that right now children can’t even express a preference. “Yes” is meaningless with the ability of an adult to apply pressure that would count as felony kidnapping and torture if done to another adult, with complete impunity and even acclaim. “No” is meaningless when adults have imposed their schemas of asexual innocence willy-nilly over children’s experience, and when they have such huge control of that experience itself, up to and including maintaining “big lies” via censorship.
As such, an age of consent is a damn dirty hack that acknowledges the completely untenable position of children in making a decision that’s true to their intent, while refusing to rescue them from it. It is marginally better than nothing. If it does go, it can’t go first. A lot of rescuing needs to come first.
(Edit) During this entire thread I was misusing the word “coerce.” I meant something more like “entice.” Thanks Alicorn.
I always assumed that part of the problem is that it is easier to coerce children. If I kidnap a child and do nothing but feed them ice-cream and take them on a tour of the zoo it is still wrong, even if they liked it and no harm was done.
If I seduce a child and do nothing but feed them ice-cream and have sex with them… is it still wrong? Even if they liked it and no harm was done? There are certainly risks involved and assuming things will be okay is naive. But is assuming things will be bad/evil/gross just as naive?
Suppressing the moral gag reflex is hard to do. I do not know if I can answer the question objectively. I know if I had kids I do not want anyone coercing them into having sex.
Well yes, because kidnapping involves taking a child from their parents unannounced, possibly against the child’s will too, possibly also asking for ransom, etc. Those are separate harms that happen even if the child enjoyed the ice-cream and the trip to the zoo.
But what are the separate harms of sex? There are health risks, but they don’t hugely exceed the risks in other common childhood activities such as tree climbing.
No ransom and not against the child’s will. If the reason kidnapping is wrong deals with parental consent, does the same thing apply to sex?
This is actually irrelevant for the point I was trying to make. Kidnapping, with no harm done, is still very much illegal. Should it be?
Removing a child from a parent is a harm (as witness the panicked parent). It’s not so much a matter of consent, as of making people worry and separating them from their family. The parents have a protective interest in the child, which is harmed by their non-consent to the zoo trip. This is the very thing that makes it “kidnapping” and not “visiting with friends”. It is a separate harm, which is why the distinction I drew is relevant.
BTW, this line of argument doesn’t get you to “no sex”, it gets you to “no sex without parental consent”. Fair enough, now what if they say “yes”?
If the child is returned before the parent knows they are missing? I am not understanding why the correlation is so hard to see. It is an analogy, not a mirrored situation. Kidnapping is not seducing. There are differences. The original point was that seduction involves coercing children. Kidnapping can do the same thing. So can brainwashing. All three of these (kidnapping, brainwashing, seducing) can produce harm but may not and arguing about exactly when “harm” happens is not really useful. The relevant question is exactly this:
I am not arguing for any particular stance. I just saw an interesting correlation between seduction and kidnapping that involved coercion. If I remember correctly, the laws in some states get remarkably relaxed when minors have their parents’ consent. I could not tell you specifics, however. If you find this sort of thing interesting I am sure it is relatively easy to find information about sex with parental consent.
The bottom line: A child will do an awful lot to please someone. Is it okay to coerce them into doing something? Does it matter if they enjoy it? Does it matter if there is harm? Does it matter if they want to do it?
All of this also assumes “seduction” instead of a real, true romance. I would assume that a real, true romance has less coercion. (Or, at the very least, thinks it has less coercion.)
Perhaps we’re being confused by your use of the verb “seduce”, since to me that doesn’t include non-consensual means—it usually implies cunning trickery at worst and goal-directed charm at best. Can you restate without using it?
You can replace the word “seduce” with “get them to have consensual sex with you.” “Get” in the context I am using basically implies “coerce.” The point does rely on the possibility of convincing someone they want the same thing you want. The catch is that such a sexual encounter satisfies the term “consensual sex.” They completely, and of their own volition, consented to having sex.
The original point asks if there is validity in condemning sex with children because they are easy to coerce. In other words, is the criterion of “consensual” too easy to manipulate?
I don’t think the word “coerce” has the right implications here. It sounds like what you’re going for is more along the lines of “entice”. Coercion arguably invalidates consent even with adults.
Ooh, yes, you are very right. Apologies.
OK, so, we’ll go with entice.
Enticing would usually mean suggesting the activity is intrinsically desirable, offering a trade, asking pretty please, making a dare, or etc. We’ll assume the child’s mind is changed by the enticement.
Why would that change not simply be valid?
Is it valid when considering kidnapping?
Didn’t we already beat that one to death? The child’s volition isn’t all that’s involved with kidnapping. It isn’t directly comparable.
I keep coming back to kidnapping because I think the example fits. I have been trying to avoid getting into super-picky details because I consider the details to be obvious. I apologize for being obtuse.
If I stop by the local pool and convince a kid to take a trip with me, feed them ice-cream, take them to the zoo, and then return them to the pool before anyone else notices, was the kidnapping wrong? Would you even call it kidnapping?
If someone found out after the fact and charged me with kidnapping, could I use the defense, “But the kid liked it! It was fun and no harm was done!”?
This is from an above comment you made:
You say that the reason kidnapping is wrong is because the parents will worry. Parents worry about all sorts of things and most of them were not made illegal. Many parents would worry if their child was having sex with an adult.
If you really don’t like the example we can just skip to the abstract view. If I consciously manipulate someone into wanting a particular something, can I use their desire as a justification for my actions? Or, if I brainwash them into having sex with me, is it considered consent?
What are the current laws about consent under the influence of alcohol? That also seems relevant. What about people with mental handicaps? The basic point is that “consent” is not a cut-and-dried excuse. Consent can be manipulated, and it is much easier to manipulate consent out of a child than out of an adult.
This is not an argument one way or the other, but merely asking if consent from children should mean the same thing as consent from adults.
The American Psychiatric Association explicitly states that children cannot give consent. The problem is that children are completely dependent upon adults, and they see any friendly adult as a caretaker, especially if the parent gives permission to be with that adult or there is any physical affection. Individual kids vary in their sophistication, and it depends on the age of the child, but most kids cannot tell the difference between “do this please so I will be happy” and “do this please so I will take care of you / love you / keep you safe”. It just activates the same “I-need-to-listen-for-survival” pathway either way. It is a relevant observation that when a child feels less safe with an adult, they will usually be more agreeable. A first sign of abuse noticed in school is often a lack of agreeability, or hostility, in response to requests.
Is there a special reason the American Psychiatric Association should be considered an authority on ethics? They can inform us of the empirical facts, of which “children who feel unsafe are agreeable” is one, but “children cannot give morally relevant consent to sexual activity” does not follow instantly and obviously from that statement.
I was citing them as an authority on child psychology.
But knowledge about the psychology of a creature does not instantly and obviously lead to knowledge about the ethical boundaries around treatment of the creature. I could have encyclopedic knowledge of the empirically observable facts about, say, pigs, without being able to derive from that whether it’s okay to kill them for food. Similarly, the APA is undoubtedly an authority on child psychology. It is not at all clear that they are an authority on the implications that child psychology has for ethics, so while most of your comment was quite interesting, the first sentence was noise.
My entire comment was about whether children can consent or not. I didn’t say anything about ethical implications.
However, this paper makes the connection:
http://www.itp-arcados.net/wissen/Finkelhor1979_EN.pdf
While simply giving the appearance of consent is a plain empirical fact which might or might not have ethical features, it’s obvious that children can utter consent-like words, so I assumed you were talking about consent in an ethically relevant sense. Should I not have assumed that? If you’re not talking about consent as a thing that changes what it is ethically okay to do to somebody, then I don’t know what you’re talking about at all.
Whether children can consent or not to sex is a psychological fact. Just as whether a pig can consent or not to being eaten is a biological fact.
Facts may have ethical implications (and thus ethical relevance, which is why your question above is confused). The ability to give consent is not obviously and immediately connected with a specific ethical conclusion, because you can argue that it is ethical to eat a pig even though it cannot give consent. To argue that sex with children is wrong because they cannot give consent, you need to add the ethical argument that sex without consent is unethical.
I’m really surprised you’d claim that. Even if you could propose an experiment that you think would settle this question of fact, it’s far from clear that everyone would agree that your experiment settled it. To me it’s obvious that whether or not we consider that a given act from a given person counts as consent to something is in large part a question of values, not of fact.
Yes, we do seem to disagree. I think that “ability to do X” is factual. However, I suspect there is ambiguity in what “consent” means, and there is room for inserting values there. But I hold my position, because I think that if you define consent in a meaningful way, kids cannot do that. (For example, if you say consent means to just articulate a set of words, I will gladly abandon the word “consent” for what I do mean.)
I would define consent as (a) understanding what you are agreeing to and (b) freely agreeing.
Psychology is a soft science, surely. Which is why I felt more comfortable quoting an authority in psychology than asserting my own beliefs: I hardly know what counts as evidence or good epistemology in psychology. However, I could think of some experiments to demonstrate that children don’t understand and are not freely agreeing. For example, for the latter experiment, first ascertain what the children’s real preferences are, say, for a specific type of cookie. Then demonstrate that if an adult indicates which cookie choice will make them happy, the kid will choose the adult’s choice at a rate proportional to the perceived power imbalance and inversely proportional to their perceived environmental safety.
To be clear, I think that adult-child sex is extremely unethical.
I am motivated to contribute to this discussion, because I hope I may be able to encourage rational people to adopt a similar view on adult-child sex. However, I am not sure it is emotionally safe or that it would be effective to participate. Certain attitudes and comments on this thread make me wonder if any argument for a position that is not counter to conventional wisdom will be summarily dismissed. In other words, there seems to be evidence that “you guys” are not unbiased about this.
I don’t agree. I think empathy is to ethics as tastiness is to nutritional content—it’s a reaction that makes us feel good under circumstances conducive to a valuable end and feel aversion to circumstances conducive to deplorable ends, but it’s easily fooled (just as our tastebuds can be fooled by cinnamon buns). We need intuitions and empathy to have a starting point when we talk ethics, but a purely intuitionist morality is inevitably going to be inconsistent and have poor motivations in extreme cases.
It’s obvious that you feel very strongly that adults having sex with children is unethical; you’ve made that abundantly clear. It doesn’t have to follow from that that you are correct, and it definitely doesn’t follow that we can’t consider the question, and I’m sorry to say that you seem to be under the impression that you can’t civilly discuss it with people who don’t share your opinion.
I don’t think anyone is going to read this thread and then find that, because a few people gave some thought to the issue, their qualms about raping children have evaporated. Deep-seated ethical misgivings, legal repercussions, practical concerns, and the simple fact that most people aren’t pedophiles would see to that; anyone who’d be convinced by this thread in favor of actually having sex with children was just looking for an excuse and would have found NAMBLA’s website eventually.
If you cannot stick to solid argumentation in favor of your view (which I suspect is the dominant one—it’s just fashionable in this thread to signal open-mindedness by being cryptic and oblique about the matter) and instead resort to what amounts to shrill, repetitive whining about how unethical we all are, you aren’t “contributing to the discussion” and you certainly are unlikely to make any progress in convincing this particular audience.
All of that having been said, the experiment you describe wouldn’t prove that the children aren’t “freely” agreeing to take the cookie that the adult wants them to take. You can prove that people are likely to incorrectly judge the length of lines when others state incorrect judgments aloud; that doesn’t mean they’re being coerced or that they aren’t free, it just means that humans are social animals. The opinions and wishes of the people around us are important factors in our choices, and it is deeply murky territory when those opinions and wishes turn into coercive power dynamics.
My personal pet peeve in this discussion is that nobody is defining precisely what “adult” and “child” mean.
Teenagers these days are getting thrown in jail (and given lifetime “sex offender” labels) for having consensual sex on the wrong side of arbitrary age lines that vary from jurisdiction to jurisdiction.
So, my empathy on this subject is much more solidly with them, and that’s the ethics I’m personally concerned with in this discussion. We may not be able to prevent all the harm that takes place from manipulation and abuse, but I’d like to see some improvement for the innocents who get caught in the crossfire.
Agreed. A consequentialist, however, would not necessarily buy this: weighing the harm to innocents on the border against the harm to children nowhere near the border might well favor keeping things as they are. Not that I buy that justification.
I don’t think this type of quibbling over semantics (for example, a perfectly good meaning of “coerce” is “to compel”) is useful to the discourse. When words have variable meanings, you need to use the context to determine the meaning, and request clarification if it isn’t clear.
This is the crux of every modern dissent from age-old prejudices: if it harms no one, it’s not a moral wrong.
Elsewhere, there is a discussion regarding using karma to measure the value of individual comments or commentators themselves. I think this entire thread needs adjustment. It is confused and immoral.
Addressing The Confusion
There’s plenty of evidence that sexual relationships between adults and children are harmful. I think the best evidence is first-person: painful, emotional, sincere. There is also plenty of scientific/objective/peer-reviewed evidence (see Wikipedia for references).
Reading through the comments in this thread, there appears to be significant confusion about why sexual abuse is harmful. Whether adult sex with children is harmful (and wrong) has nothing to do with whether children are interested in sex, or whether the behavior is consensual. It has to do with facts specific to children: they are completely dependent upon adults; they have incompletely formed ideas about character differences among adults (they can naively give an abusive adult the same continued trust as a loving one); they have incompletely formed ideas about sexuality (they may not care what is done now, but they will develop an opinion later); and sexual autonomy becomes an important identity issue in their late teens and early twenties, so it is painful if that has been usurped.
It doesn’t matter if it is the experience of some abused children that the sexual abuse was not harmful. As victims, they themselves are allowed to feel however they like about it. Also, using the examples of the ancient Greeks is a common but untenable argument for moral relativism on this issue.
Addressing Immorality
This entire thread is immoral, and some of you are using karma to bond over being assholes.
To defend the full strength of this moral position, it was not immoral to consider the original question. In fact, such questions and nearby questions do need to be considered. Why is the incidence of child sexual abuse so high, given society’s unanimous position on it? As someone who finds it quite possible to sympathize with a pedophile – they have strong biological, psychological and social reasons for their moral confusion – I actually do have constructive things to say in addition to just saying “adult sexual relations with a child is evil”.
The immorality of this thread results from considering a question, considering the evidence, and not arriving at and defending the truthful answer. This thread is immoral specifically because it almost exclusively considered evidence in favor of the immoral position, even though such evidence is much more rare and difficult to come by. Any evidence for the moral position was weak or irrelevant. (In particular, all comments about whether children are actually sexual or not.) There is a moral obligation to defend morality wherever you see evil (or evil ideas that would result in evil).
For example: you must consider the very real possibility of confused persons reading this post and perceiving permission from the community. Permission from this community would have some value. (The idea being, for example, that someone would interpret the thread as saying that rational people agree – or at least they don’t disagree – that adult sexual relationships with children might not be harmful to them.)
Not only did you, as “rationalists,” fail to use the full amount of available evidence to arrive at the correct moral position on the original question; the comments also deviated into deeper and deeper immorality without correction.
This statement would have been easy to correct with some real-world common sense about the meaning of sex in most people’s lives. Instead, + 3 karma points granted for this bit of inhumanity. And no outrage.
My theory is that, somehow, “most people” didn’t notice this thread. Or maybe they thought a “closet” survey is not the place to expect or uphold morality. (Yet whenever human beings have looked the other way it is because, somehow, they perceived morality to be “out of context”.) Perhaps this is something they didn’t want to think about. Me neither. I would have rather ignored it. But to the extent that LW is a community I occasionally or frequently post on, I can’t.
You believe that the idea of adult sexual relationships with a child being bad might be a cached thought?
Except for the fact that many, many kids grow up and report that it’s harmful. These accounts are painful, emotional, sincere. So if the victims say that it is harmful, why don’t you believe them?
Here’s a theory as to why: the experience may indeed be painful in the psychosocial context of our present society, but perhaps only in that context, or more specifically, because of that context.
That is, we have ideas of shame—that certain things are, or are not, shameful—that are culturally based, and when we do things that offend our (learned) sense of shame, we feel, and remember, the associated negative emotions, without necessarily remembering their cause. We associate the negative emotions with the circumstance, instead of the long-gone prior that caused us to feel ashamed in such circumstances. In some religions, you can feel ashamed working on the Sabbath; in our society, you feel ashamed having sex when society says you aren’t “ready” to. (I admit that’s a bit of a stretched analogy.)
The more common reply to your argument, though, is that the children are reassigning a negative emotional weight to their memory of the experiences, after the fact, because the therapist/parent/whomever is expecting the experience to be negative. They don’t have to prompt for this verbally; they may be using completely neutral language, or simply asking “what happened?” Either way, their body language will show their emotional reaction to every word (and if a horse can do math based on our observed body language, we’re obviously not very good at concealing it.)
To demonstrate my meaning: If one of my friends punched me in the arm, I’d interpret that as playful at the time. If a stranger did it, I’d interpret it as hurtful. I literally feel more pain in the latter case, because of this expectation. Now, if, some time later that day, my friend insulted my race, or some other category to which I belong that implied that he just wasn’t my friend any more, I’d re-think that punch. I’d remember it hurting more.
Accounts of child abuse are extreme versions of this. If you demonize the adult in the child’s mind, everything they do is going to take on a negative connotation. The child will start looking for the negative angle: a hug was really a rough squeeze; a tousle of the hair was really a hair-pulling, and so on. In this light, of course sex was a bad experience—it’s extremely physical, with all sorts of pleasurable/painful connotations which can be switched around or played with to no end (for example, BDSM is simply a shared agreement on a set of altered connotations).
Let’s see… My original question was, “if the children said they are harmed then why don’t you believe them?” Your answer sounds very much like it isn’t that you don’t believe them, but that the harm is discounted because it’s society’s fault.
Yet the original question posed was whether children are harmed or not, not whose fault it was.
Suppose that all the harm (all the “psychosocial” bad feelings) is an artifact of society, rather than society’s way of preventing the bad feelings that are a natural result of sexual abuse. What then? Is it more important for a child to experience sex with an adult than being well-integrated into society? In fact, one of the most painful aspects of sexual abuse is the child’s realization that the adult was deliberately creating a relationship outside societal norms that would alienate them from society.
Secondly, saying that the harm is caused by society and not by sexual abuse is not relevant if your intention is to keep a child from harm. (It sounds more like a rationalization of someone trying to get away with doing harm: I didn’t hurt his feelings! Society did!) In this absurd hypothetical scenario where harm is just an artifact of society, you might have three options if you want to actually prevent the child from coming to harm: prevent sexual abuse, remove the child from society (only a monster would do this), or significantly change society. Good luck with the last one.
And yet, other things that cause children as much or more harm (such as emotional abuse) are not similarly outlawed. This strongly suggests that the prohibition has more to do with parents’ interests than children’s interests.
Evolutionarily, parents have a strong incentive to exert influence over their children’s choice of sexual partners. Actually, they have strong incentives to exert influence over their children’s choices, period, but this is especially true for children’s sexual choices… which is why teenage girls are now getting slapped with “sex offender” and “child pornographer” labels for sending naked cellphone pictures to their boyfriends.
Is it more important for them to be a rational thinker than to be well-integrated into society, whatever that means? Are we abusing children by teaching them to be atheists?
I don’t have any answers to these questions; I’m just pointing out that your reasoning here is suspect. If we were to determine the legality or morality of relationships on the basis of possible emotional harm or social disapprobation, nobody would be in a relationship at all. Yet people often choose relationships with others whom their family, friends, or entire society are against.
(If we substitute e.g. “Is it more important for a person to have sex outside their race/gender/religion than to be well-integrated into society”, the fallacy is even clearer.)
An argument against large age gaps in sexual relationships due to consent issues, however, is a different kettle of fish. If we say that children below some age can’t reasonably consent to a particular activity due to lack of self-control or adequate contextualization ability, that’s a bit more reasonable, although you then get into a lot of line-drawing arguments about how young is too young. (Some people, OTOH, will likely never be mature enough to have a decent relationship, but at some point you’ve got to let it be their responsibility.)
But this was obviously a response to that question. derefr suggested that when someone asks the child about the abuse, it’s asked in such a way that the child remembers it as abusive. This isn’t a statement about society, but about why the child’s memory is not necessarily reliable.
Here’s something else I can’t normally say in public:
Infants are not people because they do not have significant mental capacities. They should be given the same moral status as, say, dogs. It’s acceptable to euthanize one’s pet dog for many reasons, so it should be okay to kill a newborn for similar reasons.
In other words, the right to an abortion shouldn’t end after the baby is born. Infants probably become more like people than like dogs some time around two years of age, so it should be acceptable to euthanize any infant less than two years old under any circumstances in which it would be acceptable to euthanize a dog.
In America, infants have a special privileged moral status, as evidenced by the “Baby On Board” signs people put on their autos. “Oh, there’s a baby in that car! I’ll plow into this car full of old people instead.”
Do you really deny that there are probably benefits, given limits to average human condition, to at least some hard legal lines corresponding to continuous realities?
/me shrugs… I suppose it is useful to have a line, and once you decide to have a line, you do have to draw it somewhere, but I don’t see why viability is a particularly meaningful place to draw it.
Similar arguments are often used to argue in favor of animal rights; some humans don’t have brains that work better than animals’ brains, so if humans with defective or otherwise underdeveloped brains (the profoundly mentally retarded, infants, etc.) have moral status, then so do animals such as chimpanzees and dogs.
See also: http://en.wikipedia.org/wiki/Argument_from_marginal_cases
I would put the cutoff at ~1 week after birth rather than 2 years, simply for a comfortable margin of safety, but yes.
However, as I’ve written about before elsewhere, this kind of thinking does lead to the amusing conclusion that cutting off a baby’s limb is more wrong than killing it (because in the former case there’s a full human who’s directly harmed, which is not true in the latter case).
This suggests the following argument: if it’s wrong to cut off a baby’s limb, surely (the possibility of negative quality of life aside) it’s wrong to give the baby a permanent affliction that prevents it from ever thinking, having fun, etc? That’s exactly the kind of affliction that death is.
I think many philosophical questions would be clearer, or at least more interesting, if we reconceptualized death as “Persistent Mineral Syndrome”.
No, because the baby (by assumption) has no moral weight. The entity with moral weight is the adult which that baby will become. Preventing that adult from existing at all is not immoral (if it were, we’d essentially have to accept the repugnant conclusion), whereas causing harm to that adult, by harming the baby nonfatally, is.
Well, on this view the baby does grow into an adult, it’s just that the adult is a death patient (and, apparently, discriminated against for this reason).
Too pseudo-clever?
It ain’t discrimination until an actual member of the supposedly-disadvantaged group complains.
You don’t know it is discrimination until an actual member of the supposedly-disadvantaged group complains (barring other forms of evidence). But that does not mean it isn’t discrimination. The map is not the territory.
… Which is why whenever I want to bully disadvantaged groups I make sure they cannot speak.
(That is to say, “Don’t be daft!”)
Thanks, Wedrifid. The other downvote is mine; there’s so much wrong with the original statement that I was having trouble even figuring out where to start in replying to it.
That’s censorship, not discrimination. Different problem needs a different solution. Once the information’s made available, and complaints are possible, only then can antidiscrimination measures be implemented with any chance of success.
Have you ever known anyone to go out of their way to deny a dead person the opportunity to speak? If someone sat up near the conclusion of their own funeral and disputed the previous speaker’s main points, I doubt they’d be shouted down.
If I censor adequately then, by your definition, it is not possible for me to discriminate. I think that is a silly definition of ‘discriminate’.
Possibly so. Silly, however, is not the same as wrong.
I am arguing that in such an environment of overwhelming censorship, it makes no sense to attempt to deal with the discrimination until the censorship itself has been cut away to the point that specific claims of discrimination—that is, complaints—are available. Censorship suppresses social problems in the same sense that morphine suppresses pain.
Arguing that some group should receive a material benefit which no member of that group has actually requested, and citing discrimination as the cause, is just some political game.
Please consider ‘wrong, stupid, absurd, unhelpful and generally BAD’ to be substituted for the word ‘silly’ in the grandparent.
You gave an unqualified heuristic that applies to lots of situations, and it is inaccurate—and very dangerous—in most of them. That’s poor form at a bare minimum.
Not arguing with the downvotes, just trying to clarify what I meant.
This isn’t an argument for death being the worst of the possible outcomes. For example, you may be turned into a serial killer zombie, which is arguably worse than being dead.
There should be an option to downvote your own comments.
To achieve the same effect with current technology, upvote everyone else.
Do you mean that you no longer believe that being a serial killer zombie is arguably worse than being dead? I believe that.
Who do I get to kill as said zombie?
Being turned into a serial killer zombie actually sounds pretty awesome, assuming an appropriate soundtrack.
I didn’t present it as one. I agree death isn’t the worst of the possible outcomes.
You say that like it’s an unexpected conclusion. Which is more wrong: cutting off one of a dog’s legs, or euthanizing it? Most people, I suspect, would say the former.
What happens is that we apply different standards to thinking, feeling life forms of limited intelligence based on whether or not the organism happens to be human.
Personally, I would say that neither of those is wrong (per se, anyway), and I don’t think the situations are very analogous. But I certainly agree with your last sentence (both that we apply different standards, and that we shouldn’t).
Here’s why this is distasteful:
That infant has either experienced enough to affect their development, or has shown individuality of some kind that will be developed further as they mature. An infant is always in the stage of ‘becoming,’ and as such their future selves are to some degree already in evidence. Lose the infant, lose the future—and that is the loss that most people find tragic.
My daughter was showing personality and preferences in the womb. Kicking in time with music she liked (which she continued to like after birth), kicking out of time with music she didn’t like (which she continued to dislike after birth).
I was amazed. I’d had this vague notion that babies were sort of uninteresting blobs and didn’t manifest a personality until maybe a year old. I have no idea why I thought that, but I was utterly wrong.
Of course, I am strongly predisposed to think highly of my offspring in all regards, and I do try to allow for this. But from birth on, she was manifesting sufficient personality for us to regard her as an individual human with her own preferences. Waiting until age two years to accept such a thing is simply incorrect.
“Responds to musical stimuli”, assuming it’s true, is hardly an argument about being a person. A parrot could have similar ability to discriminate between types of music, for all I know.
Edit in response to downvoting: Seriously. There could be correct arguments for your statement, but this is clearly not one of them. This is a point of simple fact: ability to discriminate types of music is not strong (let alone decisive) evidence for the property of being a person. Non-person things can easily have that ability. That this fact argues for a conclusion that offends someone’s sensibilities (or even a conclusion that is clearly wrong, for other reasons!) is not a point against the fact.
It was in response to the assertion that babies could reasonably not be regarded as individual humans until age two. That assertion is ridiculous for all sorts of reasons. It was also noting that until I had actual experience of a baby, my assumptions had also been ridiculous, and that really doesn’t need me putting “and by the way, it’s possible that you’re just saying something simply incorrect due to lack of experience” on the front. I am finding your response difficult to distinguish from choosing to miss the point.
Reading these comment chains somehow strongly reminds me of listening to Louis CK.
This still entirely misses the point: “responds to musical stimuli in the same way” is an argument about continuity of identity. If someone at 3 years old is a person, and they’re the same just smaller (both physically and mentally) at 1 year old and at −6 months old, then arguments about their personhood at 3 years old apply (though in a limited sense) at 1 year old or −6 months old.
I can’t think of a situation where I would be willing to accept the death/murder of a fetus or infant where I wouldn’t be willing to accept the death/murder of an adult. How low does your discount rate have to be where you would be willing to kill a one year old but not willing to kill a three year old?
Counterpoint that it does in fact address the point: write half a dozen different programs that can analyse recordings of music and output a beat that is in time. Run these programs on half a dozen different computers and try to claim that responding the same way is decisive evidence of continuity of identity across all computers and programs.
You are opposed to abortion? It seems to me the majority of abortion cases do not constitute moral grounds for the death of an adult. Not a judgement of your possible views, just interested to see if the reasoning is consistent.
Emphasis mine. Illustrative examples are generally not decisive evidence. I have yet to come across someone with significant experience around infants who believes they don’t have personalities until ~2 years old (or whenever infanticide proponents think they develop them), and so until I come across someone with that opinion I feel justified in attributing that opinion to ignorance rather than insight.
I am (and should be) skeptical of someone who says “that doesn’t convince me” instead of “my experience is different.” The first response, which is generally accompanied by hypotheticals instead of examples, does not require any knowledge to create. Generally, experience cannot be conveyed by a few illustrative examples; one should not expect to be convinced by evidence when that evidence is hard to transfer. How, exactly, should one compress memories of interactions with another person over the course of years to transmit to others?
I also find it interesting you have moved the issue from “demonstrates persistent preferences for particular kinds of music” to “detects a beat”- was that intentional? Because if you wrote a program that could classify music into types it didn’t like and types it did, and the classification was predictable/sensible, I wouldn’t have a problem saying that your program preferred one kind of music to another, and that the program is the same even if you run it on a succession of computers with improving hardware.
I consider abortions of both the spontaneous and intentional varieties to be tragic. “Accept” was probably a poor word to use because I am not currently in favor of criminalizing abortion and I feel the best response to a great many tragedies is coping. When asked for advice, I advise against abortion but do not rule it out and do not seek to coerce others into avoiding it. My feelings (and advice) on suicide are broadly similar, and so perhaps it would be most illuminating to say I compare it to suicide rather than to homicide.
Yes. David_Gerard said:
Kicking out of time doesn’t suggest she doesn’t like it as much as it suggests she is failing to kick in time. Which is weak evidence that all she is doing is finding a beat in time with the music.
And all the people I have met who have had significant experience around animals believe they have personalities from birth—I am inclined not to trust experience in this matter because of the almost-certain anthropomorphizing that is going on.
Why shouldn’t animals have distinct personalities from each other? It doesn’t take that much brainpower before you can start introducing differences in behavior between specimens without causing their methods of interaction to collapse.
See my response to Vaniver, but in a nutshell: animals do have distinct personalities, but not in the same sense of the word we have when we talk about embryos and babies having the right to live because they have personality.
Not the first time on this site that someone has been accused of anthropomorphizing humans.
ETA: remarking upon the absurdity of the phrase, not the absurdity of the notion.
Sure, it looks odd. But as I think you discerned, I think babies don’t have much complex agenthood—on the order of domestic animals—and people saying they’ve experienced babies having complex agenthood are not to be trusted because people also say that the weather has complex agenthood.
Have you heard my new band, Complex Agenthood?
You’re opening Saturday night for Emergent Intelligence at the Rationalist’s Rationally Rational Rationality of Rationalness, right?
We don’t actually tell people when or where we’re playing; we just provide enough evidence for a perfect Bayesian reasoner to figure it out.
I don’t think they were.
I think the analogy only holds if “anthropomorphizing” is the problem in both cases.
I understand what you are getting at, but am not convinced.
I think a more charitable reading would be something along the lines of:
Similar in what way? Presented with your “more charitable” reading, I would still think the writer was suggesting anthropomorphism is still the problem in this instance.
Also, it might be relevant to my reading that I often caution against anthropomorphizing humans.
There are perhaps a few things going on here.
There rings a certain absurdity to the phrase “anthropomorphizing humans”: of course it’s not a problem, they’re already anthropomorphic.
My understanding, at this point, is that you are well aware of this, and are enjoying it, but do not consider it an actual argument in the context of the broader discussion. That is, you are remarking on the absurdity of the phrase, not the absurdity of the notion. Is that correct?
I suppose I worry that people will see the absurdity, but misattribute it. When the question is whether a model of a complex thinking, feeling, goal oriented agent is appropriate to some entities we label human in other respects, and someone says “I have interacted with such entities, and the complex model seems to fit”, it is not at all absurd to point out that we’re overeager to apply the model in cases it clearly doesn’t actually fit.
Correct.
Do you have a problem with the idea that animals have continuous “characters” since birth? Because that gets rid of the troublesome word “personality.”
The issue of anthropomorphisation is a tricky one. Even when dealing with other humans, there’s a massive amount of projection that goes on- but it seems to me we can characterize relationships by how much of the other thing’s character you have to generate mentally. For a person you know well, it’s probably low, for an animal you know well, it’s probably moderate, for a machine you know well, it’s probably high. But even your impression of the machine’s character isn’t 100% your mental invention- if a copier jams when placed in a certain situation due to the placing of mechanical parts inside it, it’s practical to describe it as the copier “not liking” that situation despite the copier not being sophisticated enough to “like” or “dislike” things on a level more than “not jamming” or “jamming.”
Under such a model, what would matter is not that you’ve invented 95% of your perception of your relationship with the copier, but whether or not the other 5% that’s actually due to the copier is consistent over time.
The word ‘personality’ is troublesome when applied to animals. I feel like a lot of the opposition to abortion and early infanticide can be sourced from the phrase “unique personality”. If you say a baby has personality, you are pre-supposing they are a person, which triggers the ingrained right-to-live reflex. Not questioning the right-to-live reflex at all; I think it’s a marvelous thing.
Whatever people mean when they say an animal has personality, it is something other than personality in the full sense. I will use your term “character”; it seems to capture the essence of the non-anthropomorphic ideas people have about animals and photocopiers. The unique character of a pet animal isn’t a strong argument for its right to live, because pets with ailments regularly get put down when the cost of treatment gets into four digits. Also, an animal’s character is not a good argument against eating it, because >95% of the world is not vegetarian or vegan.
So I feel like there is some meaning-smuggling going on. The assertion is we shouldn’t kill babies because they have personality like us, and the argument holding it up is that they have personality like animals do.
I agree with you that character isn’t what gives an entity a right to life. But I don’t think that’s my argument.
To turn a dog into a person you have to do a lot of work. Turning a copier into a person is similarly difficult. But to turn a baby into a person, you just have to wait a few years. It’s automatic, so long as you provide it with sufficient fuel.
If we say “We care deeply about protecting butterflies because they are beautiful, but don’t care at all about protecting caterpillars because they are ugly” then others have a strong reason to question how much we actually care about protecting butterflies (or know about the world), because there are no butterflies that weren’t caterpillars.
And so even though the caterpillar has none of the outward qualities that make us care about butterflies, our feelings about butterflies should extend to them, because they are butterflies, just not yet. But note that we don’t extend those feelings to nectar and leaves and air, even though butterflies are composed of the things that they eat and breathe and cannot exist without them, because nectar and leaves and air are fungible and caterpillars are not.
Your primary argument is “caterpillars are ugly,” and I agree with that. My claim is that argument is insufficient to reach the conclusion that we should not protect caterpillars: you have to show that caterpillars are not butterflies, and that must be done in such a way that is consistent with the statement “I care about protecting butterflies.”
Similarly, we care about persons, and because we do that we should care about babies that turn into persons, even if they aren’t persons yet, because those babies are not fungible. When I ask the question “when did I awaken as a person with a mind?” I might point to my earliest memories or when I began thinking independently or some other milestone- but when I ask the question “when did I begin as a continuous being?” there seems to be one obvious answer, and it’s when my DNA was assembled for the first time.
If your standard is that something has to be sapient right now in order for it to have any protection, that opens the door to a number of horrors. Can someone kill a sleeping human without moral culpability because while asleep a human only has character, not personality? What about if that human has suffered irreparable necrosis of most brain tissue? If your answers for those differed, it’s probably because those represent very different expectations about the future- the sleeping human will probably awaken shortly and resume being a person, but a human with a necrotic brain probably doesn’t have any personhood left in them. And so to treat a baby like a human with a necrotic brain is to ignore the important thing that makes us value sleeping humans- the future.
It takes a hell of a lot more than sufficient food to get a person out of a baby. If you do only that, at best you end up with a feral child—human, certainly, but only questionably a person. More likely you end up with a dead baby from any of a number of untreated diseases. We are social animals. Without company, even those of us that are fully formed often go mad.
I am willing to call interaction with people ‘fuel’; I chose that delightfully stretchy word on purpose.
Your argument suggests that the existence of an ‘uplift box’ that turns dogs into people would give people-rights to dogs, as the process would have been automated. To the extent that turning a baby into a person is automated, it doesn’t mean that any less work is done—it just means that the work has been done by natural selection rather than human ingenuity. So I think the ‘work needed’ measure of how beings of potential value inherit value is somewhat flawed, the flaw coming from thinking about one particular dog and the work needed to raise it to human status, while neglecting the next billion dogs.
As to the caterpillar/butterfly analogy: if we agree that we value butterflies for their beauty, it’s not at all obvious that we shouldn’t breed them for their beauty. Analogously, with limited parental resources, why should humans not produce an excess of babies (or heterogeneous fetuses, for that matter) and select based on the predicted characteristics of the adult? Note that in this case we raise our expected utility, whereas in the case of killing a sleeping human we most definitely lower it.
EDIT: I should make my own position clear on this. I vigorously oppose infanticide based in large part on the great psychological and social harm it inflicts. I have basically no problem with zygote selection.
I’m not terribly concerned about that case, and I think my framework handles it pretty gracefully. If dogs have unique characters and can become people in a non-fungible way just like babies have unique characters and can become people in a non-fungible way, then dogs deserve baby rights.
But there’s an underlying issue that highlights: whether our ethics are focused on conservation or, for lack of a better word, quality. A conservation-centered ethic sees people as irreplaceable and expensive; a quality-centered ethic sees people as replaceable. If you can make a million unique sapient simulated people at the push of a button, then the conservationist ethic simply doesn’t seem appropriate—they’re eminently replaceable, and so they’ve become fungible in the way I suggest sperm are, even though they’re cognizant enough to be people. Likewise, by the time the ability exists to turn a dog into a person, it’s not clear that personhood will be sufficient to grant the rights that it does now.
Note that while butterflies are valuable because of their beauty, people have rights because of their uniqueness/irreplaceability. I don’t see anything wrong with designer babies or human genetic engineering; I just have a moderate preference for gamete selection over zygote selection, and think that if we have reached a point where we are willing to kill undesirable babies we will probably also have reached a point where we are willing to kill undesirable adults, as eroding one protection appears like it will erode the other.
Is your answer any different for identical twins, who of course only separate after fertilization? Conjoined twins that don’t fully separate? How about chimeras? (Yes, there have been documented human examples.)
Yes; if I had a twin, my obvious answer would be when I separated from my brother. Were I a chimera, I suspect I would have researched the issue more extensively than I have now, but at my present level of understanding it still seems like there’s a discontinuous event- when the cells fuse together to form one organism.
It seems to me that you can find a discontinuous event for most person precursors, and the discontinuity is important for that question (because the components were continuous beforehand, and the composite is continuous afterwards). The main counterexample I can think of is clones- if I create a thousand copies of my DNA and implant them in embryos scrubbed of DNA, then they seem fungible in a way that a thousand unique fertilized embryos are not. And then, because they are fungible, I would ascribe to the group of them the specialness of a single fertilized embryo, and would only have qualms about destroying the last one (or perhaps last few). Note that as soon as they begin to develop, they begin to lose their fungibility (and we could even quantify that level of fungibility/uniqueness), and could eventually become unique people (that share the same genes).
Likewise, the position “every sperm is sacred” seems mistaken because sperm are by nature fungible (and beyond that, we can complain about the word sacred).
In what way are sperm fungible? There is usually a wide variety of difference between two random ones from the same person. After all, half the genetic variability of two siblings is due to the difference in sperm.
It’s true that we can’t easily tell the difference between any two sperm (of the same sex and chromosome number) -- but the same is true of a just fertilized zygote or just divided embryo, which you appear to count as non-fungible when you say that “I can’t think of a situation where I would be willing to accept the death/murder of a fetus or infant where I wouldn’t be willing to accept the death/murder of an adult.”
It seems that “fungibility” needs to be treated as a continuum. I think that just about all reasonable criteria for deciding this turn out on closer inspection to be fairly continuous.
Agreed.
It mostly seems that way because they’re massively overproduced, but you are right to question that.
I think I’m going to turn to my claim about future development as important in identifying sperm as more fungible and fertilized eggs and beyond as less fungible, but I agree that claim is weaker than I thought it was when I made it.
I have a friend who’s a chimera. I used her as an example for this sort of question when I TA’ed intro ethics and my students found her fascinating.
Awesome. Having “near” examples can be quite handy in helping people take hypotheticals seriously.
Excellent point. I can even see where I went wrong; I had an opaque concept in mind that “human lives are valuable” and was treating the baby as fungible in the sense that it doesn’t appear to be a human now, so it isn’t intrinsically valuable and can be replaced with another baby, later, at no loss to the potential futures.
Even accepting the premise that this is an indication of having a distinct personality, I don’t think that’s an adequate basis to afford infants personhood. Cats have distinct personalities as well, although this fact suggests that we could really use a better word than “personality.” In fact, while there might be counterexamples that are not coming to mind, I’m inclined to suspect that every properly functioning vertebrate organism, as well as many invertebrates, has a distinct personality, albeit not necessarily one recognizable to humans.
Which is a really good argument for granting other vertebrates personhood.
Babies can do that? Is it (or something related) something that has been studied? There seem to be possible confounding factors in this kind of observation, but the ability to respond overtly to a stimulus like that has implications.
IIRC, Pinker in The Blank Slate discusses how babies come out of the womb predisposed for their language’s particular set of sounds based on what they could hear of speech in the womb. That’s learning based on sounds in the womb, so if they can develop preferences about verbal sounds, not too implausible they could develop preferences about other sounds too.
There have been several studies indicating that the neocortex is the part of the brain responsible for self-awareness. People with a lesion in V1 (the primary visual cortex) are “blind”, but if you toss a ball at them they’ll catch it. And if you have them walk through an obstacle-laden hallway, they’ll avoid all obstacles, but be completely unaware of having done so. They can see, but are unaware of their own sight. So I would say the point at which a baby cannot be euthanized is dependent on the state of their neocortex. Further study needs to be done to determine that point, but I would say by two years old the neocortex is highly developed.
I guess what would also matter is the relative level of development of the human neocortex at that age as compared to chimpanzees or dogs.
Meh, this is why I tend to endorse speciesism. I mean I can pretend that I actually value humans over X in a situation because of silly reasons like “intelligence” or ability to suffer or “having a soul” or just mine one excuse after the other, but at the end of the day I’m human so other stuff that I recognize as human gets an instant boost in its moral relevance.
That said, I can further observe that I seem to differentially value various nonhuman species.
Simple speciesism is a step in the direction of capturing that, but it ends up with a list of (species, value) ordered pairs, which is a very clunky way of capturing the information and not very useful for predictive purposes.
OTOH, if I analyze that list for attributes that correlate with high value, I may end up with a list of attributes that I seem to value in isolation (then again, I might not). For example, it might turn out that I value fluffy animals, and social ones, and ones with hands, and ones with faces, and various other things.
If I do this analysis well enough, I might be able to predict how much I would value a novel species based on nothing but an evaluation of this species on those terms (“oh, scale-backed lemoriffs are spiny, asocial, lack hands and faces? I probably won’t value a scale-backed lemoriff very much.”). Then again, I might discover that there were parameters I hadn’t taken into consideration in my analysis, and that when faced with the actual species my value judgment might be completely different because of that. (“wait, you didn’t tell me that scale-backed lemoriffs are also about as smart as humans and that 10% of Internet users I enjoy interacting with were in fact scale-backed lemoriffs… crap. Now I wish we hadn’t eradicated them. I’ll add ‘intelligent’ to the list next time.”)
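The attribute analysis described above can be sketched as a toy regression. To be clear, everything here is invented for illustration: the attribute list, the (species, value) judgments, and the choice of a simple linear model are all assumptions, not anything anyone has actually measured.

```python
# Toy sketch: start from hand-assembled value judgments over a few
# species, each described by binary attributes, fit a linear model,
# and extrapolate a value for a novel species. All data is made up.

ATTRS = ["fluffy", "social", "has_hands", "has_face", "intelligent"]

# (name, attribute vector in ATTRS order, subjective value on 0-10)
DATA = [
    ("dog",     [1, 1, 0, 1, 0], 7.0),
    ("chimp",   [1, 1, 1, 1, 1], 9.0),
    ("beetle",  [0, 0, 0, 0, 0], 1.0),
    ("dolphin", [0, 1, 0, 1, 1], 8.0),
    ("cat",     [1, 0, 0, 1, 0], 6.0),
]

def fit(data, lr=0.05, steps=50_000):
    """Least-squares fit of attribute weights plus a bias, by plain gradient descent."""
    n = len(data[0][1])
    w = [0.0] * (n + 1)              # last entry is the bias term
    for _ in range(steps):
        grad = [0.0] * (n + 1)
        for _, x, y in data:
            xs = x + [1.0]
            err = sum(wi * xi for wi, xi in zip(w, xs)) - y
            for i, xi in enumerate(xs):
                grad[i] += 2.0 * err * xi / len(data)
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x + [1.0]))

w = fit(DATA)
# A hypothetical "scale-backed lemoriff": spiny, asocial, no hands or
# face -- but intelligent. The model extrapolates a value judgment.
print(round(predict(w, [0, 0, 0, 0, 1]), 2))
```

The punchline matches the comment: the prediction is only as good as the attribute list, and a species that surprises you on an unlisted attribute (or, here, one listed but never varied in your sample) will be badly mis-valued.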
Given that substantial variance may exist between individuals, isn’t birth (or within a day of birth) a rather efficient bright line? I fail to see the gain to permitting more widespread infanticide, even taking your argument as generally correct.
Substantial variance exists between individuals, but it’s not such that month-old babies are different enough from fetuses to merit legal protection.
Medical research, perhaps?
I don’t like libertarianism. It makes some really good points, and clearly there are lots of things government should stay out of, but the whole narrative of government as the evil villain that can never do anything right strikes me as more of a heroic myth than a useful way to shape policy. This only applies to libertarians who go overboard, though. I like Will Wilkinson, but I hate Lew Rockwell.
I think the better class of mystics probably know some things about the mind the rest of us don’t. I tend to trust yogis who say they’ve achieved perfect bliss after years of meditation, although I think there’s a neurological explanation (and would like to know what it is). I think Crowley’s project to systematize and scientifically explain mysticism had some good results even though he did go utterly off the deep end.
I am not sure I will sign up for cryonics, although I am still seriously considering it. The probability of ending up immortal and stuck in a dystopia where I couldn’t commit suicide scares me too much.
I have a very hard time going under 2-3% belief in anything that lots of other people believe. This includes religion, UFOs, and ESP. Not astrology though, oddly enough; I’ll happily go so low on that one it’d take exponential notation to describe properly.
I like religion. I don’t believe it, I just like it. Greek mythology is my favorite, but I think the Abrahamic religions are pretty neat too.
I am a very hard-core utilitarian, and happily accept John Maxwell’s altruist argument. I sorta accept Torture vs. Dust Specks on a rational but not an emotional level.
I am still not entirely convinced that irrationality can’t be fun. I sympathize with some of those Wiccans who worship their gods not because they believe in them but just because they like them. Of course, I separate this from belief in belief, which really is an evil.
Personally I’d prefer an eternity of being tortured by an unFriendly AI to simple death. Is that controversial?
I’m curious about your personal experiences with physical pain. What is the most painful thing you’ve experienced and what was the duration?
I’m sympathetic to your preference in the abstract, I just think you might be surprised at how little pain you’re actually willing to endure once it’s happening (not a slight against you, I think people in general overestimate what degree of physical pain they can handle as a function of the stakes involved, based largely on anecdotal and second hand experience from my time in the military).
At the risk of being overly morbid, I have high confidence (>95%) that I could have you begging for death inside of an hour if that were my goal (don’t worry, it’s certainly not). An unfriendly AI capable of keeping you alive for eternity just to torture you would be capable of making you experience worse pain than anyone ever has in the history of our species so far. I believe you that you might sign a piece of paper to pre-commit to an eternity of torture vice simple death. I just think you’d be very very upset about that decision. Probably less than 5 minutes into it.
I agree with everything you said, but I think it’s worth noting:
IIRC, there’s an Australian jellyfish with venom so painful that one of the symptoms is begging for death. After it wears off, though, preferences regarding death revert to normal. I would argue torture is equivalent to wireheading with regards to preferences, only inverted. So “tortured!me would accept death if offered” need not contradict “current!me should not accept death over torture”.
The jellyfish I had in mind is Carukia barnesi, which causes irukandji syndrome. Wikipedia seems to imply the “begging for death” aspect may actually be a separate biochemical phenomenon, but the source provided doesn’t actually claim this—just that sufferers feel “anxious” and a “sense of impending doom”.
I would definitely pre-commit to immortality.
As soon as you stop torturing him though—and it’s clear that the torture will not resume—I have high confidence (>95%) that he would go back to wanting to live.
The relevant question, I think, is not whether an individual would cease wanting to die after the torture had ended. If then offered a choice between death and more torture (for a very long time, and with no afterward to look forward to), would dclayh (or some other person in the same situation) change their mind?
Apparently it is.
I agree with you, and when I brought the subject up elsewhere on this site I was met with incredulity and hypotheticals which seemed calculated to prove I didn’t actually feel that way.
I’m not sure I’d call it controversial, but I have the opposite preference myself. Come to think of it, from my point of view, the fairly commonly-pushed myth of control-freak gods (insert &hellfire_preacher) looks rather similar to being tortured by an uFAI, and makes simple nonexistence look like an attractive alternative.
Are you claiming you would rather die than be bossed around? Or are you comparing hell to torture by an uFAI?
If I have surgery, I want anesthesia; if I have a pain flare at 6 or above, I take sleeping pills and try to sleep. So I prefer losing a few hours of conscious life to experiencing moderate to severe pain for a few hours. I would not want to be anesthetized for six months I’d otherwise spend at a 6, but I would if it was a 7.
I think the criterion is “Yeah, screaming in pain, but can I watch Sherlock?”. If I can do moderately interesting things then I can just get used to the pain, but if the pain is severe enough to take over my whole mind then no dice. Transhuman torture is definitely the latter.
I’m not sure it’s fair to compare “anesthetized for six months” to “dead, permanently”.
Well I don’t have much experience with death and eternal life. What goes wrong in extrapolating from hours or months to eternity?
Well, you wake up after the six months. Unless you expect to wake up from death (in which case it’s a perfectly logical argument, I think) then there does seem to be a difference. As I said, I’m not sure if this difference is relevant, but it seems like it might be.
Plenty of libertarians agree with you on #1.
I sometimes suspect that mass institutionalized schooling is net harmful because it kills off personal curiosity and fosters the mindset that education necessarily consists of being enrolled in a school and obeying commands issued by an authority (as opposed to learners directly seeking out knowledge and insight from self-chosen books and activities). I say sometimes suspect rather than believe because my intense emotional involvement with this issue causes me to doubt my rationality: therefore I heavily discount my personal impressions on majoritarian grounds.
I don’t actually believe it as such, but I think J. Michael Bailey et al. are onto something.
OK, you’re the second person in this thread I’ve seen advocating this view, so maybe my pro-school view is the minority one here.
The idea of curiosity is very compelling, but how often does productive curiosity actually occur in people who don’t go to school? Modern society has lots of things to be curious about: television, video games, fan fiction, skateboarding, model rockets, etc. The level of interesting-ness doesn’t correlate with the level of importance (examples of fields with potential large improvements for humanity: theoretical physics, chemistry, computer science, artificial intelligence, biology, etc.) If you believe model rockets are a sure lead-in to theoretical physics or chemistry, I think you’re being overly optimistic.
The most important effect of school is providing an external force that gets people to study these (relatively) boring but important fields. Also, you get benefits like learning to speak in public, being able to use expensive school facilities, having lots of other people to converse with on the topics you’re learning, etc. To do boring things on your own, you need self-discipline, which is hard to come by. School does a great job of augmenting self-discipline.
By the way, I thought about school much the same way you did until I left high school (two years early) and went to community college. I can’t explain why, but for some reason it’s a million times better.
Well, in community college, you’re now the “customer”, and determine what you want to study, and how to study. It still provides a framework, but you’re much freer in that framework. The question is to what extent can we get similar benefits in earlier schooling. AFAICT, the best way to do so would be to make more of it optional. (Another pet project of mine would be to separate grading/certification and teaching. They’re very different things, and having the same entity do both of them seems like a recipe for altering one to make the other look good.)
“...separate grading/certification and teaching...”
John Stuart Mill advocates that in the last chapter of On Liberty. He wanted the state to be in charge of testing and certification, but get out of the teaching business altogether (except for providing funding for educating the poor). I like the idea.
I should really get around to reading On Liberty one of these days.
I really think this is the domino that could trigger reform throughout the entire system. The problem is that there are only a few professions that require a specific, critical skill-set which can be easily tested and which completion of a degree does not guarantee.
“I sometimes suspect that mass institutionalized schooling is net harmful because it kills off personal curiosity and fosters the mindset that education necessarily consists of being enrolled in a school and obeying commands issued by an authority”
Yes, I agree. Look at the success EY has had as an autodidact. His scientific career is ~10 years ahead of mine (and the gap would be more like 50 years if I hadn’t found OB + his other writings). I spent soooooooooo much time studying theoretical physics… because that is what is socially acceptable for a mathematically talented young scientist to study in the top universities. [Edit: most autodidacts probably end up not doing as well, selection bias, etc., but it is a tantalizing piece of evidence. See the Wikipedia article: http://en.wikipedia.org/wiki/Autodidact]
So you found OB and his other writings 40 years ago?
Also, kudos for spending a lot of time studying theoretical physics.
That isn’t implied. It merely suggests that OB and his other writings facilitated learning which would have taken 40 years without these resources.
I agree in principle. The problem is kids in the workplace. When you’re gardening and making necklaces, the children can float around among the adults, learn by observation, and from one another. When both parents are sitting in front of a computer all day...
And in the US there’s the whole North-Korea-style pledging of allegiance to a piece of coloured cloth. So no shock then that USAans seem to run to “heavily indoctrinated” (and hence woo-girls, laugh-tracks, zinger comedy, etc) - and also no shock that in a Pew Poll of US adults in 2007, 68% of respondents said that they believed that angels and demons intervene in their everyday lives. (Presumably a lot of those people attended school at some stage, and yet managed to get to adulthood believing in the equivalent of the easter bunny).
Outside of your borders, all of that freaks us civilised folks out.
Please do not derail threads to promote your political opinions.
In addition, you appear to be suffering from the halo effect here—pledging allegiance is Bad (because it’s similar to North Korea) and superstition, “woo-girls, laugh-tracks, zinger comedy, etc” and being “heavily indoctrinated” all magically follow. Bad Things somehow generating other Bad Things is pretty damn magical thinking, but it’s a common pattern to fall into (if you’re lazy you get fat, if you ban prayer in schools you get school shootings.)
If, on the other hand, you have some theory as to how “North-Korea style pledging of allegiance to a piece of coloured cloth” is somehow the cause of all these things, and it is relevant to, y’know, rationality, then I advise you to write a top-level post on the topic.
And for the record, I’m not American, and while the low sanity waterline is a problem—and not just in America—I am not especially freaked out by people “pledging allegiance” to their country, or for that matter by laugh tracks.
Do you seriously think that the Pledge of Allegiance (and other similar things) are not designed to indoctrinate? Let’s go to the writings of the guy who penned the Pledge of Allegiance:
“...the training of citizens in the common knowledge and the common duties of citizenship belongs irrevocably to the State.” (emphasis mine)
The foundational aim of indoctrination is to get people when their minds are sufficiently plastic as to have few critical filters (i.e., in childhood) and to ‘re-wire’ the plastic brain/mind with the indoctrinator’s desired trope at the front. This is done by rote (church liturgies, pledges and so forth).
As elsewhere, you commit a logical fallacy: assuming that because you are unaware of the work that has been done showing that propaganda works, it doesn’t.
Also, bad things do cause other bad things if the other bad things stem from a reduction in a defence mechanism, where the reduction was caused by the initial bad thing. Bombing water treatment (bad thing) and sewage plants causes increases in water borne disease (other bad things).
There’s no requirement for magic (and therefore no requirement for attempts at deploying hackneyed middle-school debating tropes).
There is a very sound basis for believing that attempts to indoctrinate lead to a tendency for the population to be indoctrinated: the best basis that I can think of is that governments invest heavily in indoctrination using the same methodology as developed by Bernays and later Goebbels. If the methodology was not leading to the desired result, .gov would change it (I’m no admirer of .gov’s ability to get things right, but the indoctrination of the public is the sine qua non of the tax-parasite’s life).
Indoctrinated individuals have a greater tendency to lower levels of critical thinking (ever had an argument with a born again Christian? cheap shot, but I can give you a bunch of cites from the psych lit, too). Thus any device that increases the net level of indoctrination will cause—not by ‘magic’ - an increase in other things associated with reduced critical faculties.
What’s with the formatting? Please adhere to standard conventions, there’s a reason for them (did I do that right?).
Also, you’re overusing applause lights in your comments, it’s frankly annoying. We’re at least trying not to march our little soldier arguments against each other, but to shift our opinions as we encounter flaws in our arguments and strength in the other commenters’. Goebbels and born-against-Christian (hah, I’m gonna leave that typo in) examples just kill rational discourse.
While I agree with your comment, I must say said formatting is simply being (over)used for emphasis, and it seems like rather a cheap shot to attack it.
The remark on the formatting was not meant as a cheap shot or to denigrate the content in any way. At least for me bold/colored text interspersed in a paragraph makes it significantly harder to read and to focus on the flow of the sentences and their nuances. You always have some bold attention! screaming word in your immediate peripheral vision. If it’s a short comment making a single point that is acceptable, but in a multiparagraphed comment it gets tedious.
So I’m happy to lead our arguments where no one has gone before, just not to lead them there boldly …
It’s been some time since I checked the standard style manuals: is there really a stated style for emphasis in comments on the internet? It would not surprise me too much—there are a lot of people with too much time on their hands, who like telling others what to do (and the less important the sphere of endeavour, the more urgent the need to be boss of it). [Oh, and apologies in advance for not using em-dashes...]
As to whether you “d[id] that right”, it depends. Reading it back to myself, it would appear not. Try all-caps on the bold bits and see if it makes sense when you read it out loud… then do the same for the material to which you took stylistic objection and see if that makes sense.
As to Goebbels: that specific example really needed to be in there, since he made clear that he admired Bernays (and the American eugenics movement). Born-against-Christians are the handiest example of indoctrination.
I don’t know what “applause lights” are: doubtless some egregious thing that is so important that it merited a new jargonistic term for the [meta]cognoscenti to use to beat us ‘mundanes’. (What does the style manual say about italicising Italian words on the internet?)
LBNL: if you don’t think that there is a clique here who is, quite specifically, “march[ing] [their] little soldier arguments”, I think you have not been paying attention. It’s as bad as coming across a coven of Randians, and almost as correct-line as the Freepers (the trolling of Freepers is one of life’s little joys).
The sort of people who say “Your entire theory of life and morals is incomplete and would be useless for programming an AI” in response to a 21-word phrase at the end of a comment which did not purport to be exhaustive or complete, and was never put forward as a candidate for coding an AI.
Also the sort of people who say “What you said doesn’t make sense to me, so you must be wrong and not know what you’re talking about” while revealing gaping holes in their understanding of early-undergraduate material that is absolutely central to the issue at hand.
I’ve taught people like that—usually at first year level: people who throw about words like “utilitarian” and “consequentialist”, while steadfastly ignoring the long-term consequences of the system they are advocating (or implicitly supporting) and attacking anybody they view as ideologically impure. It’s hilarious.
No. There are, however, community norms in this particular corner of the internet.
Still, it is generally a good idea to avoid politicized examples, especially here.
Was that really necessary? Applause Lights.
Firstly, Arguments As Soldiers.
Secondly, please provide specific examples if you have some criticism, don’t just sort people into a reference class containing idiots.
I am not claiming that it is not indoctrination, by that definition. Nor am I claiming that it is. I am asserting that the term “indoctrination” is counterproductive, as the connotations, particularly the political ones, are likely to interfere with discussion and clear thinking. I also note that this comment section is probably not the place for such discussion.
Of course not.
I stand by this statement.
This is usually referred to here as the need to “raise the sanity waterline”—I’m not sure where the term originates—and as I said, I am aware that it’s a problem, but I don’t see why Americans pledging allegiance is an especially vital part of that.
Incidentally, while this does not alter the substance of your post, I note that your writing style seems needlessly rhetorical, which is likely to attract hostility from people pattern-matching to various ideologues. This is a website dedicated to rationality, not politics.
I think people should be allowed to sell their organs if they want to. We don’t consider it immoral to pay a surgeon to transplant a kidney, or to pay the nurse who helps him, so I don’t see why it’s immoral to pay the person who provides that kidney. I also think we should pay people in medical experiments. Pharmaceutical companies could hire private rating agencies to judge proposed human experiments much as Standard & Poor’s rates bonds; that way people would know what they’re getting into. The pain/danger index would range from slightly uncomfortable/probably harmless to agony/probably fatal, and payment would be tied to that index. A market would develop, open to anybody who was interested. It would be in the financial interest of the drug companies to make the tests as safe and comfortable as possible. All parties would benefit, medical research would get a huge boost, and everybody would have a new way to make money if they chose to do it.
I also think that if you believe in capital punishment it is foolish to kill the condemned before performing some medical experiments on him first.
I think we do pay people in medical experiments.
Maybe I’m just projecting, but I doubt the first thing is a controversial position here.
Killing people, and locking them in prison for 20 years, are both worse than torturing them.
Killing enemy soldiers is not much better than killing enemy civilians.
It is immoral not to put a dollar value on life.
The rate of technological change has been slowing since 1970.
It can’t be true that both universal higher education and immigration are social goods, since it is cheaper to just not educate some percentage of your own people.
Increasing the population density makes the cost of land rise; and this is a major factor in the cost and quality of life.
Men and women think differently.
Ditto that modern Western women hold very wrong beliefs about what will make them happy.
War is not good for your economy (unless you aren’t fighting in it).
This comment perplexed me until I realized you were assuming that the average education level of immigrants is lower than that of “natives” (that is, the pre-existing population of the country). But that need not be the case. To borrow from personal experience — many immigrants from the former Soviet Union are quite a bit more educated than the national average in the U.S. Surely immigrants who bring an above-average education with them are good for the society (assuming that they intend to become productive members of society)? Doesn’t it follow that both of the things you mention can, in fact, be true, conditional on certain contingent properties of immigration?*
*And of higher education, presumably. I mean, we could say “higher education can’t be a social good if we do it wrong in ways X, Y, Z”, to which the obvious response is “we shouldn’t do it like that, then.”
“War is not good for your economy (unless you aren’t fighting in it).”
That’s pretty well accepted in some economics circles. See the broken window fallacy by Frédéric Bastiat.
With notable, perhaps exceptional counter-points (see: the U.S. and WW2).
Perhaps this is nitpicking, but it’s possible for both to be social goods, but one is more of a good than the other.
I think that most people, including rationalists, have significant psychological problems that interfere with their happiness in life and impair their rationality and their pursuit of rationality. What we think of as normal is very dysfunctional, and it is dysfunctional in many more ways than just being irrational and subject to cognitive biases.
I think furthermore that before devoting yourself to rationality at the near exclusion of other types of self-improvement, you should devote some serious effort to overcoming the more mundane psychological problems such as being overly attached to material trinkets and measuring your self-worth in material terms, being unaware of your emotions and unable to express your emotions clearly and honestly, having persistent family and relationship problems, having chronic psychosomatic ailments, etc. Without attending to these sorts of issues first (or at the same time), trying to become a rationalist jedi is like trying to get a bodybuilder physique before you’ve fixed your diet and lost the 200 extra pounds you have.
I fear this may be wishful thinking; you can get much further than I would have thought a priori in a sub-art of rationality without developing a strong kick as well as a strong punch.
It would be interesting to try to diagram the “forced skill development”—for example, how far can you get in cognitive science before your ability to believe in the supernatural collapses—and of course the diagram would be very different for skills you studied from others versus skills you were able to invent yourself.
I’m not sure how much you mean by the analogy of doing without a kick. If you mean, for example, that a rationalist should overcome something like social anxiety that impedes his research career by developing techniques from scratch rather than engaging in something like cognitive behavioral therapy, then I disagree. Ditto for the other sorts of psychological problems I mentioned.
The reason is not that I think you couldn’t address anything from first principles, building up techniques as you go, but that this would be hugely inefficient, like developing calculus from first principles rather than studying a textbook.
Would you consider a top-level post about this?
(FWIW, I, at least, see emotional self-awareness as a core rationality skill.)
If you’re interested in this, we should be talking about CBT and related techniques, which are essentially a form of rationalism training directed at those biases which feed, e.g., depression and anxiety disorders. If rationalism training were brought into schools, some CBT techniques should be part of that.
Yes, CBT and related techniques are exactly the sort of thing I had in mind.
I don’t think most rationalists are aware of them though, and it’s not because rationalists suffer from none of the problems for which they are especially effective or because they have already addressed the problems via other means.
I might do one, actually.
I would hold myself to much higher standards for a top-level post than for a comment, and I’m extremely busy at the moment, so I won’t be able to do a top-level post for at least the next couple of weeks.
If anybody else has thought about this issue as well and wants to write a top-level post, feel free to do so. If I don’t see such a post, then I’ll write one up when I have time in a couple of weeks.
I’m inclined to believe that rationality is more an instrument than a goal, as you try to describe it. Being attached to material trinkets (or not) will be a rational choice for the one who has developed his rationality and is able to think his choice through, while irrationally dismissing the utility of mundane gadgetry, or wholeheartedly embracing it, most likely as a result of an induced bias, exposes the undertaker to unconsidered, not-yet-evaluated risks—hence the label “irrationally”.
There is some seed of truth in what you’re saying—the balance between the effort of developing a rational art and the likely impact of that development on one’s goal has to receive the necessary attention.
To go with the example provided (the body-builder [the rationalist jedi]) - going straight towards his final goal (obtaining an Adonis physique [being a rational jedi]) will help him develop more muscle mass [more powerful rational skills], which would mean more fat-burning cells in his body [more chances to make the right decisions when various day-to-day challenges arise] to deal with the extra 200 pounds [whatever skewed perception or behavioural pattern one has], which, in my opinion, is closer to optimal than a simple diet [blunt choice of “what is right” based on commonly-accepted opinion].
I think of rationality in instrumental terms too. The point is achieving your ends most reliably and most efficiently, and rationality broadly construed is the way to accomplish those ends.
I gave the example of being overly attached to material trinkets, not just being attached. Being overly attached by definition could never be a rational choice.
With regard to the bodybuilder analogy, I think the optimal solution will include some study of diet and nutrition and modification of your diet (it’s likely to be extremely unhealthy if you’re morbidly obese). Working out will be much more efficient given a strong foundation of diet and other aspects of health. Likewise with rationality, progress will be quicker if it builds upon a strong foundation of psychological health. If there isn’t such a foundation already, it deserves serious attention as a high-priority sub-art.
I think the notion that ‘most people suffer from significant problem X’ is very often plainly misunderstood. If everybody ‘suffers’ X, X is the norm, not an affliction (with exceptions such as, say, lower back pain). You’re projecting your normative values onto factual matters.
Also, the notion that we have deficient moral/mental capacities seems to me unsupported and basically quasi-religious. “What we think of as normal is very dysfunctional...” Red pill or blue pill. Please.
Our attachment to material trinkets, material self-worth, emotional expression abilities, family problems etc. all stem from our evolutionary background and the conflicting selection pressures our species was subjected to. Why would one even think that a conflict-free perfect Bayesian could, would or should result from evolution?
Yes, it sucks loving your spouse and wanting to cheat at the same time. I just don’t see how this translates into “significant psychological problems.” Especially not some that need be overcome before moving on towards rationality Nirvana. I suggest bullet-biting as the cure for this ailment.
It is possible for X to be the norm and simultaneously cause suffering, contra your first paragraph. How common the characteristic is and how much suffering it causes are only loosely related. I’m not talking about normative values at all.
OF COURSE attachment to material trinkets, etc., comes from our evolutionary background. Where else would it come from? That has no bearing at all on whether we would benefit from overcoming some of our evolved tendencies. I have no idea how you could possibly have misinterpreted me to be arguing that a “conflict-free perfect Bayesian could, would or should result from evolution”. Please enlighten me as to how anything I said implies that.
You’re arguing against a position that nobody here has put forward. Notice how I said “overly attached” (overly implying that some amount is healthy but that there is commonly too much, where too much means “contributes to losing, not winning”) and you misrepresented me as saying “attached”, how I said “having persistent family and relationship problems” (indicating losing not winning over an extended period of time) and you misrepresented that as “loving your spouse and wanting to cheat” (which most of us probably agree is extremely common and not necessarily a problem at all).
Please try to read more carefully and not immediately pigeonhole me into “the most likely cliche”.
Lower back pain is exactly the model you should have in mind
That’s exactly what normative values are for
The notion that we have deficient mental capabilities is borne out in countless experimental studies.
Of course we haven’t evolved to be perfect Bayesians—that’s the whole point.
Pick a better example—many relationship problems demand a more thoughtful take than “suck it up”.
EDIT: Re-reading, this seems unnecessarily hostile. Don’t have time to reword properly, please accept my apologies...
That both women and men are far happier living with traditional gender roles. That modern Western women often hold very wrong beliefs about what will make them happy, and have been taught to cling to these false beliefs even in the face of overwhelming personal evidence that they are false.
How traditional? 1600s Japan? Hopi? Dravidian? Surely it would be quite a coincidence if precisely the norms prevalent in the youth and culture of the poster or his or her parents were optimal for human flourishing.
If anything, I have the convert’s bias in this regard, Michael, not the true-born believer’s. I’m fairly young and was raised in quite a progressive household. I’d suspect myself more of overstating my case because it has come to me as such a revelatory shock. But that’s neither here nor there, as I’m not advocating for any specific “tradition.”
I’ll posit that gender roles and dynamics since the feminist movement began in earnest in the 60s and 70s have proven to be a sizable and essentially unprecedented break from the previous continuum in Western societies going back at least a couple thousand years. I don’t know enough about 1600s Japan or Hopi or Dravidian societies to speculate as to whether they fit into that pattern too. I understand there are arguments that feminist regimes are actually more original to the human species and that patriarchy only appears with the advent of agriculture and monarchy/despotism. My understanding is that this is an open question, and again beyond my expertise. So I should readily concede that “traditional” is a highly suspect term.
So I’ll be even more blunt, since this is our comment thread to not worry about whether or not these views are currently acceptable, right?
My rather vague comment is based in a more specific belief: that women like to be dominated by men; that these feelings are natural and not pathological (whether or not that makes them “right” is of course another question); that they are unhappy when their man is incapable of domination and are left feeling deeply sexually unfulfilled by the careerism which empowers them elsewhere in their lives; that the current social education of both women and men (at least in the circles of the US in which I move) teaches everyone that it’s abhorrent and wrong for a man to assert power over a woman, that men who enjoy it are twisted assholes and that women who enjoy it are suffering from deep psychological damage; and that it is practically inexcusable for a woman to admit that her limbic system gives her pleasure signals when a man arouses her this way.
Naturally, I am basing my perception of this relatively new regime, at least in its current extreme form, on my interpretation of what came immediately before in the society in which I was raised (I don’t know firsthand, as I was born well into the current regime), so your point stands, I suppose. But I don’t really think using this as a starting point merits any twinkling snark.
The second sentence of my original post, however, contains the more important point. Regardless of whatever “norm” anyone has in mind, be it Basque, Dravidian, or Branch Davidian, the real problem is that the current norm actively teaches unhappiness-increasing lies. If the last regime was imperfect too, I’d counter that two wrongs don’t make a right.
Though as Z M Davis notes, not all beings value happiness highest. I readily concede that too.
What I personally have observed is that there are plenty of men and women who have a need or desire to be dominated. And that a minority of these people can’t deal with the idea that it’s “just” a sexual fetish or personal quirk, but must convince themselves instead that the entire world would be happier or much better off if only our entire society were male supremacist or female supremacist, accordingly.
I’ve also observed that there are plenty of people who have a leadership or followership preference in a relationship… but the desire to be the follower is both more widespread and more gender-balanced than the desire to be the leader.
So I guess what I’m saying is, the fact that there’s a large unsatisfied market of females wishing to be dominated (sexually or otherwise) should NOT be mistaken for an indicator that this is somehow “the way the world should be”.
That market is unsatisfied for the same reason its male counterpart is: there simply aren’t enough people of either gender with the inclination, experience, self-awareness, etc. to meet the demand.
It’s my impersonal understanding that the ratio of male submissives to female dominants is way worse than the ratio of female submissives to male dominants—both kinds of submissives will have trouble finding a dominant counterpart, but the heterosexual males have it way worse.
That’s why I said the desire to be a follower is more gender-balanced than the desire to be a leader. I also used “leader” and “follower” because “dominant” and “submissive” carry more sexual overtone than is actually relevant to my point… but also because it’s way easier for men to find socially “leading” partners than sexually leading ones.
Also, to make things more complex… there are plenty of people who like to go both ways… and there are people who want to be sexually dominant but socially submissive or vice versa… if you’ve actually met and spoken with enough real people (without the self-selection bias that occurs when people with identical kinks get together), it quickly cures you of any idea that you can just say, “This Is The One True Way Relationships Should Be.”
(My wife owns a lingerie and adult toy/video store, and we’ve socialized with a lot of kinky and swinger folk, including gay, transgendered, etc. -- for a fairly broad definition of “etc.”)
This is very much my impression also—as a switch, I’m topping a lot more than would be my natural inclination because that’s where the demand is.
This makes a lot of sense. I’m thinking of the dilemma my husband and I had when I wanted him to learn to swing dance, but neither of us wanted to learn to lead. Or my 6′4″ male friend who told me sadly that sometimes, he just wants someone who’s bigger than him, whose shoulder he can lean on.
Totally agreed. The thread starter has made a rash and morally suspect assertion—morally suspect because it treats people’s happiness as a simple thing to be manipulated through cultural dogma, and takes the only grade on which a life can be rated as pleasant to be whether a dogma brings the sensation of pleasure to certain individuals in certain circumstances. That goes against seeing people as ends in themselves, and it’s just icky.
You might be Generalizing From One Example—just because you like that doesn’t mean all women do, and in fact I strongly believe that some women do and some don’t, where by “some” I mean “more than 5% and less than 95%”.
I’m curious—is your personal evidence anecdotal, qualitative, quantitative...?
Michael Vassar also makes a good point—the values and implications of “traditional roles” vary a great deal across time, and especially across socioeconomic status. There are certainly career women in the West who perceive taking time off to care for children as a relief from the rat race and a chance to contribute to society in another positive way. They might feel differently had they been, say, a 12-year old Zimbabwean girl who never attended school, was married to an older man to help her family’s finances, developed an obstetric fistula in childbirth, and never left her husband’s compound again. That isn’t just traditional, it’s an active reality for millions of poor women around the world. There are also many happy, healthy, educated African career women and stay-at-home-moms, of course. The context of “tradition” is very important.
I agree. But even though feminists (and other women exposed to the rhetoric) may say they want gender “equality” to increase their happiness, it is not necessarily the real reason. Once it becomes possible for women to enter the workplace (for any reason), competition will force other women to follow suit. Elizabeth Warren’s research shows, for instance, that positional goods (housing, education) have experienced tremendous inflation since the ’70s. The quality of these goods hasn’t improved commensurately.
I believe that many if not most people value some things more than happiness.
“Man does not seek happiness, only the Englishman.” -Nietzsche, on Utilitarians.
I think most people would agree with that statement, if you ask them to think about it a little more. Happiness, or “expected happiness”, is just one term in the utility function. There is also “expected unhappiness”, which might encompass things like suffering, pain, and negative emotions. The concept of utility tries to generalize enough to add these things together, but at an everyday conceptual level these seem to be different things (never mind how emotions manifest physically). For instance, we can be happy about one thing and yet sad about another, e.g. “my infant daughter is beautiful, but I’m sad that my parents did not live long enough to share this joy with us.” People seem to understand this: in English we have the word “bittersweet”, and the juxtaposition of joy and melancholy seems to be present in many other languages and cultures.
Back to the question of value: are people more eager to avoid loss than to pursue potential gains (of the same order of magnitude?) Experience points to most people putting more effort into keeping what they have, even if they are relatively unhappy with their situation. Part of this is probably evolved defaults of the brain influencing even what you might call conscious decision making.
And don’t forget about morality. Although we might try to reconcile the two, there is often some tension between doing what is “right” and doing what we expect may make us happier.
I know for a fact that I value truth over happiness. I tend to do things that other people often point out to me would have “gone better” if I did it some other way or if I did something else entirely.
I find it interesting that this comment is (currently) the highest-scoring, with 7 more points than the second highest.
(Oh, wrong, it’s second among top-level comments. Still interesting.)
Vague promotion of “traditional” values or ways combined with equally vague bashing of egalitarian movements that apparently are a threat to the relevant traditions is one of the most reliable applause lights that there are.
You’re looking at “Popular” instead of “Top”.
I think it’s important not to downvote contributors to this survey if they sound honest, but voice silly-sounding or offending opinions. It’s better to reward honesty, even if what you hear hurts or irritates (but not endless repetition of misguided opinions, which cumulatively will bore other readers too much). Upvoting interesting comments should be fine.
P.S. This advice is not one of these “crazy things” the poll is about. ;-)
In this particular post, I’m upvoting all the comments which make me think. So if I agree with someone’s post, but it’s pretty much a cached-thought for me, I won’t bother upvoting it. And if I strongly disagree with someone, but they’ve forced me to think about why I disagree with them, I upvote it.
(This isn’t the metric I normally use for deciding when to upvote in other LW posts.)
I agree. But I do think it’s worth replying pointing out perceived holes in those beliefs, and seeing if the believer is able to defend them.
I do not believe in utilitarianism of any sort, as an account of how people should behave, how they do behave, or how artificial people might be designed to behave. People do not have utility functions and cannot use utility functions, and they will never prove useful in AGI.
Bayesian reasoning is no more a method for discovering truth than predicate calculus is. In particular, it will never be the basis for constructing an AGI.
Almost all writings on how to build an AGI are nothing more than word salad.
In common with most people here, I expect AGI to be possible. However, I may be unlike most people here in that I have no idea how to build one.
The bar to take seriously any proposed way of building an AGI is at least this high: a real demo that scares Eliezer with what could be done with it right now, never mind if and when it might foom.
All discussion of gender relations on LessWrong, OvercomingBias, or any similar forum, will converge on GenderFail. (Google “RaceFail” to see what I’m comparing this to. The current GenderFail isn’t as bad as LiveJournal’s great RaceFail 2009, but it’s the same process in miniature.)
Some things are right, some things are wrong, and it is possible to tell the difference.
In your opinion, what might be some methods for discovering truth?
Observing, thinking, having ideas, and communicating with other people doing these things. Nothing surprising there. No-one has yet come up with a general algorithm for discovering new and interesting truths; if they did it would be an AGI.
Taking a wider view of this, it has been observed that every time some advance is made in the mathematics or technology of information processing, the new development is seized on as a model for how minds work, and since the invention of computers, a model for how minds might be made. The ancient Greeks compared the mind to hydraulic machinery. The Victorians compared it to a steam engine, and later to a telephone exchange. Freud and his contemporaries drew on physics for their metaphors of psychic energies and forces. When computers were invented, it was a computer. Then holograms were invented and it was a hologram. Perceptrons fizzled because they couldn’t even compute an XOR, neural networks achieved Turing-completeness but no-one ever made a brain out of them, and logic programming is now just another programming style.
Bayesian inference is just the latest in that long line. It may be the one true way to reason about uncertainty, as predicate calculus is the one true way to reason about truth and falsity, but that does not make of it a universal algorithm for thinking.
I didn’t get the impression that Bayesian inference itself was going to produce intelligence; the impression I have is that Bayesian inference is the best possible interface with reality. Attach a hypothesis-generating module to one end and a sensor module to the other and that thing will develop the correctest-possible hypotheses. We just don’t have any feasible hypothesis-generators.
I do get that impression from people who blithely talk of “Bayesian superintelligences”. Example. What work is the word “Bayesian” doing there?
In this example, a Bayesian superintelligence is conceived as having a prior distribution over all possible hypotheses (for example, a complexity-based prior) and using its observations to optimally converge on the right one. You can even make a theoretically optimal learning algorithm that provably converges on the best hypothesis. (I forget the reference for this.) Where this falls down is the exponential explosion of hypothesis space with complexity. There’s no use in a perfect optimiser that takes longer than the age of the universe to do anything useful.
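The convergence described above can be sketched in a toy example (the finite hypothesis space, the coin biases, and all numbers here are my own illustrative assumptions, not anything from the thread): a uniform prior over a handful of candidate coin biases, updated by Bayes’ rule on each observed flip, concentrates on the hypothesis closest to the data-generating process. The intractability only appears when the hypothesis space becomes something like “all programs up to a given complexity”.

```python
import random

# Toy sketch: Bayesian updating over a small, finite hypothesis space.
# Each hypothesis is a candidate probability of heads for a biased coin.
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

random.seed(0)
true_bias = 0.7  # the actual data-generating process (unknown to the learner)

for _ in range(200):
    flip_heads = random.random() < true_bias
    # Multiply each hypothesis's posterior by the likelihood of the observation...
    for h in hypotheses:
        posterior[h] *= h if flip_heads else (1.0 - h)
    # ...and renormalize so the posterior stays a probability distribution.
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

best = max(posterior, key=posterior.get)
print(best)  # with this much data, the posterior concentrates on 0.7
```

With five hypotheses this inner loop is trivial; with a complexity-based prior over all programs, the same loop ranges over an exponentially exploding space, which is exactly where the “perfect optimiser slower than the age of the universe” objection bites.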
It would be a significant part of an AGI. Even the hardest part. But not enough to be considered an AGI itself.
Thank you, that was very enlightening. I see now where you were coming from.
I still think that some breakthroughs are more -equal- fundamental and some methods are more correct, that is, more efficient in seeking the truth. Perhaps attempts to first point out some specific interesting features of human consciousness (or intelligence, or brain) and only then try to analyse and replicate them would meet with more success. In that sense logic and neural networks are successful, while Bayesian inference is not.
I wonder if you are familiar with TRIZ? It strikes me as positively loony, but it is a not-outright-unsuccessful attempt at a general algorithm for discovering new, uh, counterintuitive implications of known natural laws. Not truths per se, but pretty close.
double tildas mean strike-through
I’ve read a book on it, as it happens. It seemed quite a useful set of schemas for generating new ideas in industrial design, but of course not a complete algorithm.
I’ve peeked at your profile and the linked page. See, I’m currently enrolled into linguistics program, and I was considering dedicating some time to The Art of Prolog, so I’ve researched what Prolog software there is and wasn’t especially impressed. Could I maybe ask you for advice as to what kind of side project Prolog is suited for? I’m familiar with Lisp and C and I’ve dabbled with Haskell and Coq, and I would really really like to write something at least marginally useful.
I think Prolog, like Lisp, is mainly useful for being a different way of thinking about computation. The only practical industrial uses of Prolog I’ve ever heard of are some niche expert systems, a tool for exploring Unix systems for security vulnerabilities, and an implementation of part of the Universal Plug and Play protocol.
I’ve read some responses touching on the same issue, but my point is different enough that I thought I’d do my own.
I believe that possession of child pornography, or any other kind of pornography, should be legal. I don’t have enough information to decide whether the actual making of child pornography is harmful in the long term to the children, but I believe that having easy access to it would allow would-be child molesters to limit themselves to viewing things that have already happened and can’t be undone.
I would say that the prominence of hentai and lolicon in Japan is a smaller step in the same direction, and seems to have worked well there.
In context it’s interesting that Japanese children’s manga routinely has bawdy jokes, sexualized slapstick and “fan service”. This may be an outsider’s mistaken view but there doesn’t seem to be any serious attempt to fence children into a contrived asexual sandpit.
I agree that’s interesting, but remember these manga are not actually written by children, nor bought or read exclusively by children.
I don’t know how many people here would agree with the following, but my position on it is extreme relative to the mainstream, so I think it deserves a mention:
As a matter of individual rights as well as for a well working society, all information should be absolutely free; there should be no laws on the collection, distribution or use of information.
Copyright, Patent and Trademark law are forms of censorship and should be completely abolished. The same applies to laws on libel, slander and exchange of child pornography.
Information privacy is massively overrated; the right to remember, use and distribute valuable information available to a specific entity should always override the right of other entities not to be embarrassed or disadvantaged by these acts.
People and companies exposing buggy software to untrusted parties deserve to have it exploited to their disadvantage. Maliciously attacking software systems by submitting data crafted to trigger security-critical bugs should not be illegal in any way.
Limits: The last paragraph assumes that there are no langford basilisks; if such things do in fact exist, preventing basilisk deaths may justify censorship—based on the purely practical observation that fixing the human mind would likely not be possible shortly after discovery.
All of the stated policy opinions apply to societies composed of roughly human-intelligent people only; they break down in the presence of sufficiently intelligent entities.
In addition, if it were possible to significantly ameliorate existential risks by censoring certain information, that would justify doing so—but I can’t come up with a likely case for that happening in practice.
Agreed.
Also, if you pile on technological improvements but still try to keep patents etc, you end up in the crazy situation where government intrusiveness has to grow without bounds and make hegemonic war on the universe to stop anyone, anywhere from popping a Rolex out of their Drexlerian assembler.
I very strongly agree, except for the matter of trademarks. Trademarks make brand recognition easier and reduce transaction costs. Also enforcing trademarks is more along the lines of preventing fraud, since trademarks are limited only in identifying items in specific classes of items (rather clumsily worded, but I’m trying to be concise and legalities don’t exactly lend themselves to concision.)
Isn’t yelling “fire!” in a crowded theater a kind of langford basilisk?
Normally, when people say they believe “all information should be free”, I suspect they don’t really mean this, but since you claim your position is very “extreme”, perhaps you really do mean it?
I think information, such as what is the PIN to my bank account, or the password to my LessWrong.com account, should not be freely accessible.
You don’t believe there is value in anonymity? E.g. being able to criticize an oppressive government, without fear of retribution from said government?
You make a good point; I didn’t phrase my original statement as well as I should have. What I meant was that there shouldn’t be any laws (within the limits mentioned in my original post) preventing people or companies from using, storing and passing on information. I didn’t mean to imply keeping secrets should be illegal. If a person or company wants to keep something secret, and can manage to do so in practice, that should be perfectly legal as well.
As a special case, using encryption and keeping the keys to yourself should be a fundamental right, and doing so shouldn’t lead to e.g. a presumption of guilt in a legal case.
I believe there can be value in anonymity, but the way to achieve it is by effectively keeping a secret either through technological means or by communicating through trusted associates. If doing so is infeasible without laws on use of information, I don’t think laws would help, either.
I think governments that would like to be oppressive have significantly more to fear from free information use than their citizens do.
When you use the PIN to your bank account you expect both the bank and ATM technicians and programmers to respect your secret. There are laws that either force them not to remember the PIN or impose punishment for misusing their position of trust. I don’t see how such situations or cases of blackmail would be resolved without assuming one person’s right to have their secrets not made public by others.
I’m not just nitpicking. I would love to see a watertight argument against communication perversions. Have you written anything on the topic?
Agreed.
I don’t agree with it. You can’t believe everything you read in Wired. The “information should be free” movement is just modern techno-geek Marxism, and it’s only sillier the second time around.
All software is buggy. All parties are untrusted.
That may be so now, but that doesn’t mean it’s impossible to change it. That the current default state for software is “likely insecure” reflects the fact that the market price for software security is lower than the cost of providing it.
Laws against software attacks raise the cost of performing such attacks, and therefore lower the incentives for people to ensure the software they use is secure. I think it would be worth a try to take that illegality away, and see if the market responds by coming up with ways to make software secure.
You can’t get really good physical security without expending huge amounts of resources: physical security doesn’t scale well. Software security is different in principle: If you get it right, it doesn’t matter how many resources an attacker can get to try and subvert your system over a data channel—they won’t succeed.
Cryonics membership is a rational choice.
My chances of surviving death through resuscitation are good (as such things as chances to beat death go), but would be better if I convinced more people that cryonics is a rational choice.
In my day to day I am more concerned with my job than convincing others on the subject of cryonics, even though the latter is probably more valuable to my long term happiness. Am I not aware of what I value? Why do I not structure my behavior to match what I believe I value? If I believed that cryonics would buy me an additional 1000 years of life wouldn’t 10 years of total dedication to its cause be worthwhile? Does this mean that I do not actually believe in cryonics, but only profess to believe in cryonics?
Americans no longer significantly value liberty and this will be to the detriment of our society.
A large number of Americans accept the torture of religious enemies as necessary and just.
Male circumcision is more harmful than we realize and one cause (among many) of sexual dysfunction among couples.
Most humans would be happier if polyamory was socially acceptable and encouraged.
I think school, as conventionally operated, is a scandalous waste of brain plasticity and really amounts mostly to a combination of “signaling” and a corral.
I’m not sure what should replace it. There are things kids need to know—math, general knowledge, epistemology, reasoning, literacy as communication, and the skills of unsupervised study and research. (School doesn’t overtly teach most of the above—it puts you under impossible pressure and assumes that like a tomato pip you will be squeezed into moving in the right direction.)
There are also a ton of things they might like to learn, out of interest.
I am not sure those two categories of learning ought to be bundled up. Especially, while I can understand forcing a study of the first category, it seems obviously counterproductive to force the second.
I tried hard to think of something that I haven’t already talked about, so here goes:
I have a suspicion that the best economic plans developed by economists will have no effect or negative effect, because the ability of macroeconomics to describe what happens when we push on the economy is simply not good enough to let the government deliberately manipulate the economy in any positive way.
Update: You could call this half right in retrospect. Fiscal policy is ineffective except when monetary policy is ineffective, and the Federal Reserve didn’t print nearly enough money but the money they did print did prevent another Great Depression. We would not have been better off if the Federal Reserve had done nothing, thinking all their plans ineffective. There might be some kind of lesson here about EAs who fret about “What if we can’t model anything?” whose despair seems kind of similar to Eliezer_2009’s.
To clarify, “the money they did print did print another Great Depression” should (probably) read “the money they did print did prevent another Great Depression”, right? The version with the typo sounds unfortunately like “The Federal Reserve caused the Great Depression”.
Right. (Also the Federal Reserve totally did cause the original Great Depression, but this is a mainstream stance.)
What’s the minimum amount of information you could send Eliezer_2009, that he would agree with you?
Economists’ plans relating to monetary policy do influence how the Federal Reserve Board acts (since it is run by economists) and this does influence the economy.
I was including the Federal Reserve Board in “economists”. Forgive me if that was a mistake.
Let me be more concrete: I suspect that the Obama stimulus plan won’t accomplish anything positive, not because of any particular flaw I could name, but because the models they are using to organize their understanding of macroeconomics are just wrong—somehow or other.
The amount of chaos here seems so great—so many things going differently than predicted, so many plans failing to have their intended constructive effect—that I suspect a chaotic inversion: it’s not chaos, we’re just stupid.
I believe this about climate change as well.
You might try reading Thomas Woods’s new book “Meltdown”. It’s an easy read, it took me about 4 hours. It would have been less but I had to keep stopping and thinking “How come I didn’t realize that before?” It struck me as mostly accurate, which makes me wonder about mainstream economists’ attacks on Austrian economics. I am definitely going to be reading more Austrian economics. Woods is an historian rather than an economist, but the core of the book is that gov’t meddling in the money supply causes the business cycle—that the Federal Reserve caused the current crash by inflating the bubbles with cheap (below market) credit.
This position is not uncommon, and it is very different from my understanding of your first comment.
For what it’s worth, I think lots of people are confused about macroeconomics, including many/most economists. However, there is a particular macroeconomic/monetary-economic theory which does give substantial insight: monetary equilibrium theory (it goes by a few other names). Unfortunately, I can’t give a good resource for learning this theory. I’m slowly working on an introductory series.
I think I agree with your premises here, but my conclusion is that our predictions will be weakly correlated with reality and our best plans will have a 55% or 51% success rate, not that they will have no effect or negative effect.
There is no reason to assume that messing with a complex system you don’t understand means that you will have, on average, a 50% success rate.
There are many cases where trying to push a complex system into a local optimum might have bad consequences. The system might get more robust but lose resiliency.
What do you mean when you say ‘plans’? Do you mean all plans or just most plans?
Edit: oh, I didn’t read closely, you mean macroeconomic plans.
Civilians should be considered legitimate targets in warfare, with the decision whether or not to attack them based entirely on expediency. If a cause isn’t worth killing civilians over, it’s not worth killing soldiers over, either.
I might agree with that if human civilization as a whole were much more rational than it is now (especially the institutions that deal with political and military power—this includes organized religion to a certain degree in most places).
If I believed that warfare would only be used to attain noble goals that nothing else can reach (a “cause worth killing for”, as you say), then yeah, if it’s worth killing soldiers, it might be worth killing civilians too.
But right now, it seems that war is mostly about small politics, personal status (both for dictators and democratically elected leaders), xenophobia, and money.
I feel that if civilians had been a legitimate target in most recent wars, the outcomes would only have been worse, not better, and so I can’t support it.
Fortunately, my training as a philosopher left little room for embarrassment about my beliefs (my mentor was a Popperian—of the ‘say it loud’ sort). So there really isn’t anything I could say here that hasn’t come out elsewhere. But a lot of it is somewhat unpopular:
Ethics: eudaimonist egoism—objectivist in the sense that there are facts about ethics, but relativist in the sense that there’s no reason to assume all humans are the same ethically. Consequently, I think it’s fine that I care more about my cats’ welfare than most humans’, as long as it doesn’t lead to a lack of virtue on my part (which, of course, is an empirical question).
Economics: Markets really are the most efficient way of getting the relevant information, due to methodological individualism and local, distributed knowledge. And my spending really does indicate my preferences, which are some of the best data about ethics.
Politics: Classical liberal (preferring Locke over Mill); freedom is paramount—other people should fight for my freedom, so that I might have room to become more awesome. I acknowledge the tension between this and Nietzsche’s contention that democracy is bad because it does not provide an environment where one can learn to overcome. But I’m not a big fan of democracy anyway, and I see the political history of the US primarily in terms of a struggle between ‘freedom’ and ‘equality’.
Furthermore, governments are inherently bad—it is part of their telos. One of the great things about the US government is that it’s huge and bloated with checks and balances to make it difficult for anything to get done, which makes it a bad government. A trim, efficient government just does a good job of oppressing its people.
Life is a lot more nuanced than a lot of young rationalists or ideologues would think. There is room in the world for all sorts of people, and the diversity of even mistaken opinions leads to interesting and wonderful things. Example: while ‘Christian rock’ tends to suck, most religious music is genuinely inspiring like little else. Ditto for architecture. When trying to trim falsehoods from the world, don’t accidentally lose some awesome.
On the same subject, history does matter. He who doesn’t remember history is doomed to something something… Just calling yourself an ‘atheist’ doesn’t mean you’ve pruned religion out of your language and culture—and if you do manage that, don’t be so confident that it will all still stand without it.
Sorry, was this the ‘soapbox’ thread? I’ll stop now.
Corporations literally get away with murder. The corporation is a recent innovation, not something that has always been with us. This recent social contract that governs corporations is deeply flawed, in that it holds no one accountable for consequences that would be regarded as criminal if resulting from the actions of a person. A recent case in point is the wave of suicides in the French national telecom giant.
Responding to the question “What do you believe that most people on this site don’t?”:
I believe that people who try and sound all “edgy” and “serious” by intoning what they believe to be “blunt truths” about race/gender differences are incredibly annoying for the most part. I just want to roll my eyes when I see that kind of thing, and not because I’m a “slave to political correctness”, but because I see so many poorly defined terms being bandied about and a lot of really bad science besides.
(And I am not going to get into a big explanation right here, right now, of why I think what I think in this regard—I’m confident enough in this area here to take whatever status hit my largely-unqualified statement above brings. If I write an explanation at some point it will be on my own terms and I frankly don’t care who does or doesn’t think I’m smart in the meantime.)
Racial differences and gender differences are very different topics. Especially if we are interested in discussing whether, or the extent to which, they are rooted in biology.
I agree (and I see sex/gender as far more valid of a biological concept than “race”, for the record), but I’ve noticed a correlation between people who would describe themselves in terms like “race realist” and people who think there’s good evidence for women being “less suited” to math and science than men, cognitively speaking. (And again, getting deeply into this right now is not something I’m going to do, it would be wandering off-topic for one thing.)
I think there is a huge amount of wisdom in the core ideas of Buddhism: the self is a convenient fiction and a source of much confusion and suffering; subtle forms of attachment are frequent sources of suffering; meditation can improve attention/concentration and meta-cognitive awareness, and some Buddhist techniques are effective in this regard; our experience in life is much more determined by our mind than we believe; compassion can and should be cultivated.
Is this really controversial among rationalists?
The question wasn’t whether it’s controversial, but whether most people on the site believe it.
If we just mean that most rationalists would agree that there is (considerable?) wisdom in Buddhism, I’m sure we’d find at least half. If we mean the much stronger assertion that Buddhism is worthy of serious attention, much more than reading a book or two and browsing Wikipedia, then I don’t think most people believe it.
(Ideas below are still works in progress, listed in descending order of potential disagreement:)
Bearing children is immoral. Eliezer has stated that he is not adult enough to have children, but I wonder if we will ever be adult enough, including in a post-singularity environment.
The second idea probably isn’t as controversial: early suicide (outside of any moral dilemma, battlefield, euthanasia situation, etc.) is in some cases rational and moral. Combined with cryonics, it is the only sensible option for, e.g., senile dementia patients. But this group can be expanded, even without cryonics.
Some have mentioned modern school systems to be broken, but I’ll go even further and say that mandatory education is a huge waste of time and money, for all involved. Many, perhaps most, need to know only basic literacy and arithmetic. The rest should be taught on a want-to-know basis or similar. As a corollary, I don’t think many or even most people can be brought into the fold of science or rationality.
(Curiously, the original poster wondered if our crazy beliefs might be true, but many responses, including my own, are value, not fact, judgments.)
I don’t have much of a vested interest in being or remaining human. I’ve often shocked friends and acquaintances by saying that if there were a large number of intelligent life forms in the universe and I had my choice, I doubt I’d choose to be human.
I’m going to be an elven wizard.
Are there (many) people on here who don’t agree with you?
Depending on how we define “human,” I might… I’m not sure. But I’m fairly confident that if I did, my definition of “human” would come out so broad that it would shock swestrup’s friends and acquaintances even more.
Whenever I hear an unsupported vote against conventional wisdom on a web forum, e.g. “adult-preteen intercourse isn’t very harmful”, I don’t update my view much. Absent a well-argued case for the unconventional position, I assume that such beliefs reflect some strong self-interested bias (sufficient to overcome strong societal pressure) and not fearless rational investigation—to say nothing of trolls.
I also strongly discount unreasoned votes in favor of the consensus, especially on issues subject to strong conformity pressure.
It seems that this survey is not intended to solicit arguments for particular controversial anthropological or political beliefs. Does the site accept them at all? I’d expect not, except as case studies for some general claim, due to the risk of attracting cranks.
I agree. See my comment on this post. My position is controversial, but pretty coherent. At least, no one came up with a counterargument; I was just downvoted a lot. So, my opinion is a pretty good example of what the poster is looking for, yet such opinions inherently will not do well. Really, this forum is antithetical to this post.
That within human races there are probably genetically-determined differences in intelligence and temperament, and that these differences partially explain differences in wealth between nations. (Caveat: “race” is at least as much a socially-constructed term as a scientifically valid category; however, there are differences in allele frequency that reliably correlate with having ancestors from particular parts of the world.)
That these differences may have been partially caused by the fact that peoples from different parts of the world have had literate societies for different lengths of time.
Thought of a few more:
Circumcision may be harmful, and may cause more harm than benefit.
It’s generally not worth your time to ask a doctor questions about treatments; the responses you’ll get will be soothing but non-informative.
Doctors probably cause more harm than good, considered over all interventions.
Aren’t all of these kind of obvious?
Gotta ditto BrandonReinhart’s point.
(Many/Most) doctors won’t give me useful information even if I complain about their unhelpfulness.
Most people not only believe that doctors do far more good than harm, but act offended if any other position is suggested even hypothetically.
And that goes double for circumcision. Most people won’t even consider the possibility that it’s not well-justified, much less that it’s harmful.
(edit) Since I don’t think I expressed myself well:
There is at least one person who posts on these boards that I once tried to discuss these issues with. Not only did he insist that they weren’t (non-negligibly) possible, but without hearing any of my reasons why I was unsure about them or offering any points of his own, he insisted that I was stupid for even considering them.
I would say that generally, he’s far more rational than most people, but on certain issues, he became totally irrational. (Not necessarily wrong, just irrational.)
And my experience suggests that happens very, very commonly.
If they are, then why do they persist as sources of harm?
There are a lot of persistent sources of harm in the world. Some of it is down to game-theoretic limitations (Arrow’s paradox, prisoner’s dilemma, etc.). Most of it is down to stupidity.
People’s attitudes can be changed by changing their behavior. Get someone to do something, and they’ll rationalize why they did so if they can’t think of a good reason. Get someone to do something that distresses them, and they’ll rationalize very strongly, especially if their self-image isn’t compatible with a negative assessment of the action.
Think of really harmful hazing. If no such tradition existed, people wouldn’t react well to someone trying to start it. Once people go along with minor hazing, there’s less of a psychological barrier against it and more of a barrier against viewing it as bad. It then becomes easier to progress to more serious hazing. Finally people try to force others to do really stupid, risky, or even certainly-harmful things, while never really considering the costs or consequences.
People are consequentialists. If a consequence of believing X (that an action is harmful) is to conclude that they’ve done harm, people will tend to deny the possibility of X.
Annoyance, do you intend the last sentence to broadly mean “treatment is less effective than prevention,” or “Western medicine is a crock,” or “Doctors specifically are not as effective as other aspects of modern medicine,” or something else?
Western medicine isn’t a crock because it has a lot of valuable content; in contrast, (for the most part) ‘alternative’ medicine does not. A lot of that value comes from coping with sudden crises that would normally result in quick death.
But if you added up all of the benefits that come directly from all medical interventions, and compared them to the harms that come directly from all medical interventions, I very strongly believe the ratio would be far smaller than most people would expect, and I weakly believe the ratio would be less than 1:1.
I believe that double-think is possible and sensible. It generally takes the form of making a deliberate attempt not to learn more about something, and not bothering to assign an expected value to the information you are missing out on.
People avoid watching horror movies if they want to stay composed. They try to avoid internet shock sites if they don’t want to be disgusted. In a similar way, avoiding information that contradicts a belief that
1.) would be painful to discard
2.) exists in an area where accuracy isn’t terribly important
makes sense. For example, if I am a fan of some football team, it makes sense for me to avoid reading articles critical of that football team.
A corollary is that god-belief of the right sort makes sense for people who aren’t scientists, politicians, or philosophers.
Another situation where double-think makes sense is when you’re trying to avoid seeing information which will make you regret a decision, or might influence you to change your decision, but with the potential for only marginal improvement. For example, if I am working in such-and-such a profession, it makes sense for me to avoid reading about how a different job is much cooler.
I believe that the solution to the Fermi paradox is possibly (I don’t place any considerable strength in this belief; besides, it’s a quite useless thing to think about) that physics has unlimited local depth. That is, each sufficiently intelligent AI, with most of the likely goal systems arising from its development, finds it more desirable to spend time configuring the tiny details of its local physical region (or the details of reality that have almost no impact on the non-local physical region) than going to the other regions of the universe and doing something with the rest of the matter. That also requires a way to protect itself without the necessity of implementing preventive offensive measures, so there should also be no way to seriously hurt a computation once it has dug itself sufficiently deep into the physics.
Any reason AIs with goal systems referring to the larger universe would be unlikely?
Something akin to the functionalist position: if you accept living within a simulated world, you may also accept living within a simulated world hosted on computation running in the depths of local physics, if it’s a more efficient option than going outside; extend that to a general goal system. Of course, some things may really care about the world on the surface, but they may be overwhelmingly unlikely to result from the processes that lead to the construction of AIs converging on a stable goal structure. It’s a weak argument, as I said the whole point is weak, but it nonetheless looks like a possibility.
P.S. I realize we are going strongly against the ban on AGI and Singularity, but I hope this being a “crazy thread” somewhat amends the problem.
In Stross’s novel “Accelerando”, even without the locally deeper physics, the AIs formed Matrioshka Brains and more or less ignored the rest of the universe because of communication difficulties—mainly reduced bandwidth but also time lags.
I love saying crazy things that I can support, and I thrive on the attention given to the iconoclast, so I find it impossible to answer this.
The only beliefs that I wouldn’t feel comfortable saying here are beliefs that I want to be true, want to argue for, but I know would get shredded. This is one reason I try to hang out with smart, argumentative people—so that my concern about being shredded in an argument forces me to more carefully evaluate my beliefs. (With less intelligent people, I could say false things and still win arguments).
This is great. I haven’t waded my way through the whole EvoPsy debate (yet), and then there’s global warming, several flavors of natural selection argument, and who knows what else. Mind if I vent some arguments with you?
Suffering is not evil per se, and we are free to make drastic distinctions in the moral value of suffering depending on the sufferer. In other words, if an AI spawned billions of copies of conscious beings that want to make huge cheesecakes, it may be right to just kill them all off. (I’m not sure about trillions.) On a more relevant note, that means second-degree murder of Stephen Hawking is a far worse action than first-degree murder of Joe the Plumber.
As a more inflammatory phrasing, I view the world largely in terms of intelligence, and feel that the smart are (typically) “worth more” than the average and below.
I also believe it is naive and wishful to believe that races, which developed (propensities towards) many distinct genetic traits (not just skin color, also hair color, facial shape, disease resistances, etc) do not have differences in intelligence distribution. Affirmative action is therefore racist, and accusations (against employers, scholarship committees, etc) of racist selection merely based on previous selectees (current employees, past scholarship winners, etc) are unfounded.
Hmmm....remove the inflammatory phrasing, and those sound like things I’d get a decent amount of agreement on.
(This also makes me wonder what makes certain phrasings inflammatory—because the opposition to societal positions which require defense is explicitly acknowledged?)
Lastly though, I have a qualified belief in eugenics. I greatly fear the Idiocracy scenario, and thus shudder every time I hear about some genius having few or no children, or women on food stamps having octuplets.
The qualification is that I am a libertarian, and would fear any government eugenics programs as well. Combining the two yields an awkward desire to have lots of children for the sake of having lots of children and a desire for a free-market form of eugenics, such as a private institution which pays the unintelligent to undergo voluntary sterilization.
On a similar note, while it may be justified to characterize a given black person as below-average intelligence (a stereotype) before meeting that person, that characterization still has sizable error bars, and making active judgements based on race is wrong.
I’d donate to that.
Incidentally, the recent mother of octuplets was a nurse who was injured on the job and is receiving disability payments; she doesn’t seem like a particularly good case for eugenic sterilization.
However, measured intelligence can also change over time within a single race, depending on the external environment. I can’t find it now, but recently saw an article pointing out that the average IQ of students from one of the Scandinavian countries (Denmark?) had increased measurably over the last 50 years. Like everything else, intelligence has both biological and societal components. I certainly don’t know enough about intelligence to comment with confidence on its biological bases and how immutable (or mutable) they are, but as long as there is a societal component, then I see no inherent moral problem in trying to provide disadvantaged racial groups with the same favorable milieu that other groups have already profited from.
(And, for that matter, I think the actual harm suffered to white people by affirmative action on behalf of other groups is probably fairly small. There might be a zero-sum calculation about a specific job or specific slot at a college, but whites aren’t being systematically shut out of every opportunity they might have. There’s also a difference between promoting people who are blatantly unqualified for the positions they’re given because of their race, and favoring people who are perhaps at the margin of qualified but could easily improve. The spirit of American affirmative action appears to be the latter, although it’s surely implemented with greater and lesser degrees of faithfulness to that ideal.)
Isn’t that just the Flynn effect? It’s true of far more countries than just Denmark.
That the most important application of improving rationality is not projects like friendly AI or futarchy, but ordinary politics; it’s not discussed here because politics is the mind-killer, but it is also indispensable.
On a more specific political note, that there are plenty of things government can do better than the market, and where government fails the people the correct approach is often not dispensing with government but attempting to improve government by improving democracy.
I know that both of these, especially this last, go against what many here believe, and I don’t intend to get into a detailed defence of it here—it’s not exactly a fresh topic of debate, and it’s not in line with the mission for this site.
True, because good policies can have vastly different outcomes than bad policies.
And plenty of things the market can do better than the government.
I would have chosen the original ending of Three Worlds Collide over the “true” ending, and would be, if not entirely pleased, at least optimistic with respect to the outcome of Failed Utopia #4-2.
Judging from the comments on Failed Utopia #4-2, you are far from alone on that one. Even EY, for all that he asserted that people were just claiming to be OK with it to be contrary, eventually conceded that he would choose that world over the current state of affairs. As would I.
That was because they didn’t have the same impending doom of existential risk hanging directly over their heads and people weren’t dying all the time, it wasn’t a function of “yay more people are HAPPY”.
Yes.
I didn’t mean to suggest that you viewed it as a perfect win condition, nor that you believed peoples’ HAPPY level was the most important factor; sorry if it came across that way.
I believe that there are very significant correlations between intelligence and race.
I believe that the reason that the United States is more prosperous than Mexico is that the English killed/drove out the natives when they came to the Americas, while the Spanish bred with them, diluting down the Spanish influence, and that there are other similar examples of this.
I believe that the reasons white people enslaved black people, and not the other way around is due to average intelligence differences.
I believe (though only with weak evidence) that Hispanic gangs are taking control of LA drug traffic from black gangs and succeeding because of a difference in average intelligence. I also believe that if the Russian mafia wanted a part in this game, they would dominate for the same reason.
There is a very strong pressure to be “Politically Correct”, and it seems that most beliefs that would be tagged with “Politically Correct” are tagged with that because they cannot be tagged with “Correct”.
I believe that to be offended, you have to believe in your own inferiority to some extent.
As a disclaimer, (and I think this much will be agreed with) this doesn’t imply that possessing superior intelligence makes it morally acceptable to abuse it any more than owning a sword makes it OK to hurt someone- just easier.
In school they taught that the climate in Mexico led to large sugar plantations while the climate of the US led to smaller farms especially in the north. Then this led to a more egalitarian distribution of wealth in the northern US which created the middle class demand that allowed manufacturing to take off. In Mexico the poor were too poor to buy a lot of these manufactured goods while the rich plantation owners could afford superior goods.
I’m not sure how an intelligence based explanation would explain this better.
The US has had, in its history, a large-scale immigration from just about every region of the world, and most of them have interbred. The result is a population with lots of outbreeding depression and heterosis, leading to a much wider variation in intelligence and other abilities than anywhere in the world. The ultimate outcome of that is a lower average intelligence in the US than in other countries, due to outbreeding depression, counterbalanced by a small number of exceptional people, who got lucky and benefited from heterosis.
Interesting idea. This suddenly makes me take my gf’s remark about how “mismatched” and “weird” average American faces tend to look a bit more seriously.
West Africans were brought to, say, Brazil because they were mostly from peoples adapted to tropical agriculture, while the enslaved Native Americans in the region were mostly hunter-gatherers. Not only did forager Native Americans find slavery/serfdom more psychologically troubling than farmer folk, they were not resistant to the diseases that Europeans brought with them either. Their numbers dropped rapidly.
Africa was just the nearest big market where you could buy lots of Old World farmer slaves. I mean, sure, you could buy some from the Arabs, but they got most of theirs from Africa as well—why go through a middleman when you can sail directly there and deal with local merchants?
Also, once you brought lots of Africans, you brought with them African tropical diseases, which again hit the few remaining Native Americans very hard and made the bad idea of, say, using imported Slavic or Irish slave labour in a tropical climate even worse.
Basically, once you get Africans to a place like Cuba or Haiti, they will tend to eventually displace Europeans and Native Americans and almost anyone else too, because they are better adapted. I find it telling that in the Caribbean nations that aren’t majority Mulatto or Black you often find a large population of Indians (another people that has experienced thousands of years of selection for agricultural work in a tropical climate, with lots of pathogens making life miserable).
I do think the well-known measured achievement gap probably is partially (but perhaps insignificantly so) genetic and probably was already around at the time, but I’m not sure it was as large as it is today. Ashkenazi Jews apparently needed less than a millennium to gain a one-standard-deviation IQ advantage over other Europeans, so telling how clever each people was in ancient times is tricky. Also, one shouldn’t forget the evidence that urban civilization seems to often be dysgenic.
I’m sorry I don’t have more time to respond in detail. Just let me recommend the vitally important, if imperfect work by Jared Diamond, James Flynn and William Dickens, and Gregory Clark.
In support of “notmyrealnick” I have to say that most people wrongly believe that the sexual life of humans only starts when they reach adolescence. Bronislaw Malinowski, in his studies (book: The Sexual Life of Savages in North-Western Melanesia), showed that the starting age can be as young as 5 years old. But we in our modern society repress the children.
http://en.wikipedia.org/wiki/Bronisław_Malinowski
Edit: related to this is the (IMHO wrong) thought that underage humans cannot possibly give informed consent to sexual acts.
Edit2: Btw, when I speak about underage sex I’m thinking about sex where all the involved are more or less of the same age.
Here’s one on a very different topic:
England’s offenses against the American colonies did not justify the American Revolution.
Well, if we’re going into history… I believe (despite being a northern democrat) that the Civil War was fundamentally unjust. It makes a mockery of the principles of the Declaration of Independence if secessionary states will be outright invaded.
(If slavery was an issue, then the North should’ve just bought out the South—likely would’ve been much cheaper than the actual war.)
I believe that neither side had the foggiest idea of how costly in lives and money the Civil War would turn out to be.
The North (well, congress) tried to buy out the South (well, slaveowners). The South rejected it. There were actually multiple attempts at this, some before the war, some during the war.
The thing is, the War between the States really truly was about slavery, nothing else. The dodge that it was about states’ rights comes down to exactly one right—the right to keep slaves. Compare with such travesties as the fugitive slave acts, which they pushed through congress, which actually did greatly infringe the rights of the northern states. The southern states, despite some of their propaganda, did not generally support the right of secession. Their Constitution explicitly forbade it. Every single article of secession passed by their state legislatures explicitly called out slavery as the reason for secession.
The odd thing is that slavery was not in any immediate danger. But with the election of Lincoln the southern states saw that their grip on the country was not as absolute as they desired, and they threw a tantrum, because they demanded not only the right to have slaves, but that the rest of the country not judge them for it.
It’s a bit more complicated than that.
Considering the length and, um, disputed quality of his writings, could you not simply link to Moldbug’s blog? At least not as if it’s somehow a decisive counterargument—“well, Moldbug disagrees, sorry.” Not saying you intended that as an appeal to authority, but …
I was struck by one quote from Lincoln’s first inaugural address (emphasis added):
In other words, as long as they rendered unto Caesar and didn’t take his stuff, Lincoln was willing to abandon all other federal government functions no matter how constitutionally mandated. This seems like secession in all but name.
Remember also that the casus belli was that Fort Sumter was supposed to be handed over to the Confederacy, but the federal government refused to.
Both seem more consistent with a power theory than a slavery theory.
Not about expanding or preserving the personal power of the most prominent decision makers? Wow. The war between the states sounds truly exceptional!
Okay, sure, in some sense it was about that, just as we can talk about the cause of the war being the laws of physics plus the entire past light-cone.
But that’s not usually what we mean by “cause of war”. I don’t see how this is a cause in any truly useful or predictive sense. Expanding and protecting personal power certainly is necessary for wars, but it’s pretty much vacuously satisfied: the prominent decision makers almost always want to expand and preserve their power. Although it often leads to wars, it often doesn’t. What made or let it lead to war this time, rather than more peaceful politicking?
I am suspicious of any monocausal theory of historical events. Surely slavery is by far the most important cause of the war. But there were a lot of other reasons.
I did not know that, though I am not greatly surprised. Do you have a source where I can learn more?
About which assertions?
Unfortunately failed attempts are written about much less than successful ones, so I have not found in-depth discussions on the net about “compensated emancipation” in the U.S., though that’s the term to search for. I have found a few references to specific attempts though.
http://www.mrlincolnandfreedom.org/inside.asp?ID=8&subjectID=2 lists an attempt in 1847 (unclear what territory it covered—may have only been Pennsylvania) and one submitted by Lincoln in 1849 (limited to DC). Neither succeeded.
http://www.mrlincolnandfreedom.org/inside.asp?ID=35&subjectID=3 covers some of the actions during the war.
For slavery being the only real cause of the civil war, well, to me it’s pretty clear that without slavery there wouldn’t have been a civil war, and that no change that didn’t also eliminate slavery would have eliminated the civil war, though some may have delayed it. There are two nice blog posts more or less on that topic: http://www.theatlantic.com/national/archive/2010/04/the-ghost-of-bobby-lee/38813/ and http://volokh.com/posts/1218531359.shtml
http://www.dailykos.com/story/2010/4/13/856804/-To-Those-Clueless-Wingnuts-Who-Claim-That-SLAVERY-was-NOT-the-Main-Cause-of-the-CIVIL-WAR… has a nice selection of the actual declarations of secession. More can be found with a bit of searching, e.g. http://sunsite.utk.edu/civil-war/reasons.html
I’ll try to add a bit more later.
Your comment was simpler when I responded. I have edited my response to quote the part I was responding to.
I think your first two links address my question, though I will have to look at them in more detail.
Yeah, I have a bad? habit of editing my responses as I think more about them. I try not to substantially alter them after people respond, but I missed this time.
I would say generally bad, but it did not really bother me this time, because it was easy for me to edit my own response to fit. And you still answered my question.
Nitpick: The South shot first. Just a nitpick, though ;)
If we’re going to nitpick, then the South shooting first is as propagandistically misleading as saying Germany shot first in WWI or Japan in WWII. Yes, it’s true, if you ignore the things like supplying arms to Britain or embargoing Japan or lying to the South about evacuating Ft. Sumter:
I believe that the end results of the American Revolution were beneficial enough to justify it in hindsight. However, at the time it was initiated, the projected benefits were indeed too little to justify what occurred.
Yeah; as far as I can tell, the United States basically got lucky in that its revolution didn’t result in the kind of mess that appeared in the aftermath of various other famous revolutions.
The justification was that the colonies’ tremendous economic potential required local, independent government, given the constraints of communication and transport at the time. That’s what they taught me in high school History, anyway. Also, that about 1⁄3 of colonists supported the revolution, 1⁄3 the crown, and 1⁄3 weren’t aware it was going on.
By the way: I’m new here, and I notice there’s no way to neutralvote once you’ve already voted. e.g. I voted you up and then changed my mind, but don’t want to vote you down. So you get a freebie.
Clicking on the bold “vote up”/”vote down” undoes it.
Thanks. I thought I was doing that, but I must have been switching to the other link because the score was incrementing by 2.
I might agree with this. But would you say that it was justified on other grounds and that these were just used as the “sellable to the public” excuse?
It worked out fairly well, but considering the results of the French Revolution, the Russian Revolution, and the Iranian Revolution, among others, I’d say we got lucky.
I don’t think people have (ethical) value simply because they exist. I think they should have to do a lot more than that before I should have to care whether they live or die.
Interestingly, you may not care whether a person exists (so you will be indifferent to the instantiation of more people), but still care about how he lives, and whether he dies, and in what manner.
So if I were to start torturing a random child, would you object? Assume the child has never done anything important to make him especially valuable.
I wouldn’t personally object, no. This is happening every day and, like most people, I do nothing. The difference is I don’t think I’m supposed to be doing anything either. That isn’t to say we should live in a society without laws or moral strictures; you need a certain amount of protection for society to function at all. You can’t condone random violence. But this is a pragmatic rather than altruistic concern.
Hm. Upvoted for an honest answer and lack of dissembling. Let’s make it harder.
You have a button. If you press the button, you will receive a (free!) delicious pie, and a random child will be tortured for one year. No one will ever know there was any connection to you, and you can even erase your memory so that you won’t feel guilty about it afterwards. Assume you like pie. Do you press the button?
This is a very bizarre situation and difficult to think about, but I think there’s a chance I would press the button. My main issue is that children require some kind of protection because they’re our only source of valuable adults. Childhood is probably the worst time to torture people in terms of long-term side effects. But in terms of merely causing the experience of suffering (which I think is what you’re getting at) I think torture is value-neutral.
This is a slightly different matter to the one I initially posted about; I don’t think the experience of pain (or happiness) is cumulative. Consider the situation where I could choose to be tortured for a year to receive a reward. If you could strip this scenario of long-term side effects, which would probably require erasing my memory afterwards, then I would willingly undergo the torture for a reward. The reward would have to be compensatory for the loss of time, the discomfort and the impracticality of the scenario. If I really liked pie I’d probably be willing to undergo 5 minutes of torture without long-term side effects for pie. Actually, I’d probably be willing to do it for 5 minutes purely out of curiosity.
Now, the child in question, assuming he or she has no value and comes from a community where he or she would not become a valuable adult, could not have long-term side effects. He or she would surely be changed by the situation but not being a value-contributor could not be changed for the worse; any change would be value-neutral in terms of benefit to the cumulative wealth of society. (There is a possibility that the child would become a greater strain on society, and acquire greater negative value, but let’s put this aside and say there are no major long-term side effects of the torture such as loss of function.)
A complication here is the value I place on pie in your scenario would be unlikely given how I determine value generally. As I said, I do not consider the experience of pain or pleasure cumulative, and consider them value-neutral in general. I would not place a high value on the consumption of pie. But let us say that my love of pie is a part of my general need to stay healthy and happy in order to be a value-contributor. In this case, whether I push the button would be some function of the probability that the child might be a child of value or from a community that produces adults of value weighed against the value of pie to me as a value-contributor, so there’s a non-zero probability I would push the button.
It’s beside the point, but your idea of torture might be a bit light if you would undergo five minutes out of curiosity.
Maybe he’s thinking of waterboarding.
It’s worth pointing out that the original comment concerned living or dying, not torture.
Myself, I would avoid the torture button, but would give serious consideration to pressing one that delivered a delicious pie at the cost of painlessly puffing a random faraway person out of existence.
If the button delivered a sufficiently large amount of money, I would press it for sure. Would require much more money for torture than death, however. (Like $1 million versus a few bucks.)
I wouldn’t press the button, though I had to think a bit longer about the “erase from memory” part.
It reminds me of what Eliezer often says about Friendly AI: “If you offered Gandhi a pill that would make Gandhi a murderer, Gandhi would refuse to take it.”
I would also refuse to do it even if my memory could be erased. Somehow, I don’t feel it’s really relevant, because when I’m considering whether to do it or not, I’m not even thinking about any guilt I might feel; I’m mostly repulsed by torture in general and imagining myself in the place of the person to be tortured.
I don’t think I would have any particular problem with murder for an adequate reason, and I wouldn’t take a “murder pill”. A stupid illustration—though I don’t remember seeing this phrase before and I’ve been following OB from the first post.
Example: X wouldn’t Y.
Rejoinder: Z, which is unlike X in relevant ways, would also not Y.
...huh?
More like: Z, which you could expect to be less bothered by Y than X, also would not Y.
A quick Google search reveals the Gandhi phrase on Eliezer’s website:
http://yudkowsky.net/singularity
But I think I saw it in at least one of his papers too.
Somehow I missed this comment or else I would not have said the same thing in my comment.
There are far fewer well-defined mathematical relations and operations on the set of “utilities” (aka “utility values”, although the word ‘value’ is misleading since it suggests a number) than most self-stated utilitarians routinely use; for example, multiplying utilities by a scalar makes no sense, the sum of two utilities can only be defined under very strict conditions, and comparing two utilities under only slightly less strict ones.
Consequently, from a rigorous point of view utilitarianism makes very little sense and is in no way intellectually compelling. Most utilitarians satisfy themselves with a naive approach that allows them to build an internally consistent rule set, much in the same way as theology or classical physics. But the “utilities” they talk about have lost most of their connection to reality—to subjects’ preferences/happiness—and more closely resemble an imaginary karma score.
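To make the affine-invariance point concrete, here is a toy sketch (all numbers are invented purely for illustration): rescaling one person’s vNM utility function, which represents exactly the same preferences, flips the “social” ranking produced by naive summation.

```python
# Toy illustration: von Neumann-Morgenstern utilities are defined only
# up to a positive affine transformation (u -> a*u + b, a > 0) for each
# person separately, so summing raw numbers across people is arbitrary.
# All figures below are invented for illustration.

def naive_total(utilities):
    """Naive utilitarian aggregate: sum the raw utility numbers."""
    return sum(utilities)

# Two outcomes, two people, each on their own arbitrary scale.
outcome_a = [10.0, 0.0]   # person 1 likes A a lot, person 2 is indifferent
outcome_b = [9.0, 2.0]    # person 1 slightly less, person 2 somewhat more

# On these scales, B "wins": 11.0 > 10.0.
assert naive_total(outcome_b) > naive_total(outcome_a)

# Now represent person 2's *identical* preferences on a different but
# equally legitimate scale (multiply by 0.1).
outcome_a2 = [10.0, 0.1 * 0.0]
outcome_b2 = [9.0, 0.1 * 2.0]

# The "social" ranking flips: A now "wins" (10.0 > 9.2), although
# nobody's preferences changed.
assert naive_total(outcome_a2) > naive_total(outcome_b2)
```

Interpersonal comparability has to be added as an extra assumption; it does not fall out of the preference axioms themselves.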
I’ve had a bone to pick with Yudkowsky* ever since reading Three Worlds Collide. I haven’t gathered all of my thoughts yet, or put them in a proper essay, but since you asked, here’s a quick synopsis (paraphrasing Clausewitz).
I think people nowadays overestimate the value of human life. Generally speaking, we ain’t worth that much—and up until about four hundred years ago, killing each other was our primary source of entertainment.
As long as we have individuals, conflict is inevitable; and a society where the conflict’s extremes have been narrowed down to nothing but sassy comments and politicking, well… that seems like a pretty boring place to live.
Speaking from experience, yelling at people solves a lot of problems. And I know a few individuals who would be much less of a trainwreck if they’d been given the punch in the face they deserved. I think we’ve got no call to be judging the Baby Eaters for their biology—any more than the Orgasmiums have for judging us. Misery can be just as much fun, if you approach it with the proper mindset, and I think HBO’s Rome does a brilliant job of describing a society with more reasonable standards. At the end of the day, it beats playing checkers, doesn’t it?
:-) Just look at our entertainment—we love a protagonist who suffers.
*It’s a very small bone. A chicken bone, really.
To whom?
You may want to read some of the Romantics, if you haven’t already. Especially Ralph Waldo Emerson and Nietzsche, who don’t necessarily normally fall into that category.
I believe that framing people for possession of child pornography is a widespread practice, and that this accounts for almost all convictions on that charge. I base this on the evidence that is typically used in such cases, all of which comes from computers which may have been compromised; and in fact, trials usually mention evidence that the computers in question were compromised (although it’s possible for an attacker to remove all evidence of that fact), and that hasn’t been a successful defense. If a person were to actually want child pornography, there are simple technical measures which could create a nearly iron-clad guarantee against being caught; and conversely, similar measures with a similar guarantee protect people from being caught planting evidence. Finally, the societal irrationality surrounding child pornography means that successfully getting someone accused of having it will not only get them jailed, but thoroughly destroy their reputation and shame them as well.
I think “it’s easy not to get caught” is not good evidence that most people convicted of an easily-non-catchable crime are innocent. It’s also easy to not leave fingerprints, and security camera footage consistently shows people not wearing gloves when stealing stuff.
A sample of 1 doesn’t help you much, but: I know someone in the UK who went to jail for this, and they weren’t framed.
Is your only evidence that it should be hard to get caught if guilty, but easy to frame someone?
What do you suspect is the typical motive?
The traditional murder motives apply: revenge, and eliminating rivals. Revenge seems like it would be the most likely motive.
There have been cases in which prosecutions were based solely on the use of a credit card number which the owner claims must have been stolen; those cases most likely involve card numbers which really were stolen, but from a random victim, not to frame someone in particular. However, those cases are publicly visible evidence that framing someone is easy, and at least some malefactors must have noticed.
People convicted would tend to maintain their innocence regardless of whether they were innocent or guilty, so that can’t be used to determine how many people were really innocent. Computer forensics aren’t very useful, because a sophisticated attacker can modify all the evidence. One way to find out would be to pose as a black market buyer wishing to frame a (fictional) person, and ask anyone who offers to perform the service whether they have experience doing that sort of thing. However, that would provide only qualitative data, nothing quantitative.
To pump up arrest or conviction stats.
I also remember reading, several years ago, that most web sites offering child porn were actually run by the FBI. No way of knowing if it’s true.
Could they reliably avoid leaks over long periods of time?
Would they need to? Any given site is easy enough to shut down if you’re the operator, and presumably the FBI would know how to cover its own tracks.
Something that I don’t so much believe as assign a higher probability than other people.
There is a limit to how much technology humans can have, how much of the universe we can understand and how complicated of devices we can make. This isn’t necessarily a universal IQ limit but more of an asymptotic limit that our evolved brains can’t surpass. And this limit is lower, perhaps substantially so, than what we would need to do a lot of the cool stuff like achieve the singularity and start colonizing the universe.
I think it’s even possible that some sort of asymptotic limit is common to all evolved life. This may well be a solution to the Fermi paradox: not that they aren’t out there, but that no one is smart enough to actually leave their rock.
I have wondered about the assumption that technological / scientific / economic progress can continue forever, and I am also suspicious of the idea that arbitrary degrees of hyper-intelligence are possible. I suspect that all things have limits, and that mother nature long ago found most of those limits.
With probability 50% or greater, the long-term benefits of the invasion of Iraq will outweigh the costs suffered in the short term.
Costs and benefits to whom? America and allies, Iraq, or the world in general?
I can see the reasoning though I don’t quite agree for two reasons.
1) If the Lancet report is at all accurate that’s a lot of deaths for the long-term benefits to make up for.
2) How much more extreme has that made the rest of the Middle East? How has it hurt the possibility of peace in Israel?
I was, and still am against the start of the war, though I’ve been fairly consistent in thinking they should stay since then. (Oddly enough I thought the surge was a good idea when virtually no-one else did, though have since started to think it didn’t really do anything now that everyone is moving on board!).
Do you still maintain the statement, in 2015 with ISIL attacks?
I disagree with Eliezer on the possibility of Oracular AI (he thinks it’s impossible).
Other moderately iconoclastic statements:
The computer is a terrible metaphor for the brain.
In the ultimate theory of AI, logic and deduction will be almost irrelevant. AI will use large scale induction, statistics, and memorization.
In order to achieve AI, it is just as important to study the real world as it is to study algorithms. To succeed AI must become an empirical science.
AI is a pre-paradigm discipline.
Rodney Brooks is a great philosopher of AI (I have no comment regarding his technical contributions).
Large scale brain simulation will not succeed.
Evolutionary psychology, while interesting from the perspective of explaining human behavior, is irrelevant for AI.
Computer science, with its emphasis on logic, deduction, formal proof, and technical issues, is nearly the worst possible type of background from which to approach AI.
I think it’s more that he doesn’t think it’s a good solution to Friendliness.
I think it would be a good idea to create a sister website on the same codebase as LW specifically for discussing this topic.
Strikes me as an idea worth considering. If we had a sister website where AGI/singularity could be talked about, we could keep a separate rationalist community even after May. The AGI/singularity-allowed sister site could take OB and LW discussion as prerequisite material that commenters could be expected to have read, but not vice versa.
I endorse this proposal.
But then, in the still-censored site, we still wouldn’t be able to mention AGI/singularity in a response, even if it would be highly relevant.
A possible solution could be to have click-setable topic flags on posts and comments when bringing up topics that...
Are worth discussing
Are likely to be, fairly frequently
Lots of people would really rather they weren’t
...and readers can switch topics off in Options, boosting signal/noise ratio for the uninterested while allowing the interested to discuss freely. Comments would inherit parent’s flags by default.
Possible flaggable topics:
Friendly AI/Singularitarianism
Libertarian politics
Simulism
Meta-discussion about possible LW changes
Another idea, more generally applicable: the ability to reroot comment threads under a different post, leaving a link to the new location.
My conception of the proposal was that the LW ban could be relaxed enough to allow use of relevant examples for rationality discussions, but not non-rationality posts about AI and the like.
I was responding to AnnaSalamon:
I thought the same.
I thought that was what was planned already (after May). I was responding to AnnaSalamon:
I took that to mean keeping LW separate from AGI/singularity discussion, or why say ‘even after May’? Someone please explain if I misunderstood as I’m now most confused!
I think Anna wants to use the LW codebase to create a group blog to examine AGI/Singularity/FAI issues of concern to SIAI, even if they are not directly rationality-related. I think that’s a good plan for SIAI.
Does the ban apply to Newcomb-like problems with simplifying Omegas?
Daniel, why do you consider these things crazy enough to qualify for the poll? I think many of them are quite reasonable and defendable.
Thank you for stating your disagreement, but topics like these aren’t supposed to be discussed until May. This thread should go no further, because people could list AI “disagreements” all day and really not come any closer to the spirit of the original post.
There a “LessWrong” schedule?!?
I think that in this case, Eliezer specifically requested that everyone refrain from posting on AI after his AI-related Overcoming Bias posting spree.
I reread the “About page” and it currently contains:
“To prevent topic drift while this community blog is being established, please avoid mention of the following topics on Less Wrong until the end of April 2009: The Singularity, Artificial General Intelligence”
Forbidden topics!
I am an atheist who does not believe in the super natural. Great. Tons of evidence and well thought out reasoning on my side.
But… well… a few things have happened in my life that I find rather difficult to explain. I feel like a statistician looking at a data set with a nice normal distribution… and a few very low probability outliers. Did I just get a weird sample, or is something going on here? I figure that they are most likely to be just weird data points, but they are weird enough to bother me.
Let me give you one example. A few years ago I had a dream that I was eating and out of the blue I discovered a shard of glass in my mouth. The dream bothered me so much that I had a flashback to the dream the next day as I was walking down the road. For me that’s extremely unusual. It’s rare that I can even remember a dream, and when I do they certainly don’t bother me the next day. So, the day after that I was eating a salad and, crunch. I spat out what was in my mouth and there was a seriously nasty-looking sliver of glass. I didn’t cut my mouth or anything, no harm done. I just hit it with my tooth.
To the best of my knowledge that was the only time I’ve ever found glass in something I was eating, and it was the only time I’ve had a vivid dream about it that bothered me the next day (or any dream about it at all). I didn’t have any particular glass-eating phobia before all this took place (except for a normal aversion to the idea), and I haven’t been worried about it since (ok, except for looking rather carefully at salads from that particular cafeteria for a few weeks afterwards). Was this all just a really weird coincidence? As far as I can make out the probabilities are just too low to be ignored. To make matters worse, I have a few other stories that I find just as difficult to explain away as coincidence.
Now, I wouldn’t say that I “believe” that something seriously weird is going on here. That would be much too strong. However, because I don’t feel that I can adequately account for some of my observations of the world, I think I must assign a small probability that there is something very seriously strange going on in the universe and that these events were not weird flukes.
I have other things to say but that would get into topics currently banned from this blog :-/
I’ll answer with a koan.
Of all the people who live in the world, should the lucky thousand who witness events a million times too unlikely for any single individual to expect to witness start believing in the supernatural, while the rest shouldn’t?
No, but if those thousand people don’t know whether they are part of the thousand or not (after all, in any normal situation I wouldn’t tell these stories to anybody), shouldn’t they assume that they probably aren’t part of the 1 in 1000 and thus adjust their posterior distribution accordingly?
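For what it’s worth, here is a toy Bayes calculation for the koan (every number below is an arbitrary placeholder, not a claim about real frequencies): the size of the warranted update depends almost entirely on how likely a “miracle-grade” story is under the chance hypothesis, and post-hoc selection of which coincidences count inflates that likelihood enormously.

```python
# Toy Bayesian update for the lucky-witness koan. Every number below is
# an arbitrary placeholder chosen for illustration.

def posterior_supernatural(prior, p_witness_if_super, p_witness_if_chance):
    """P(supernatural | witnessed a striking coincidence), via Bayes' rule."""
    num = p_witness_if_super * prior
    den = num + p_witness_if_chance * (1 - prior)
    return num / den

prior = 0.01  # prior credence in the supernatural

# If "miracle" is defined in advance and really is 1-in-1000 per lifetime,
# witnessing one is strong evidence:
strict = posterior_supernatural(prior, 0.5, 0.001)   # jumps to ~0.83

# But if we count any of the vast number of possible striking coincidences
# (Littlewood-style post-hoc selection), chance explains witnesses easily:
loose = posterior_supernatural(prior, 0.5, 0.3)      # stays ~0.017

print(strict, loose)
```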
http://en.wikipedia.org/wiki/Littlewood%27s_Law_of_Miracles
Also the Birthday effect, as the coincidence was a match between two different events.
I know about the birthday effect and similar. (I do math and stats for a living.) The problem is that when I try to estimate the probability of having these events happen I get probabilities that are too small.
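A back-of-the-envelope, Littlewood-style count (every rate below is an assumption invented for illustration, not a measurement) shows why a probability that looks “too small to ignore” for one person can still mean thousands of such stories population-wide:

```python
# Back-of-the-envelope Littlewood-style estimate. All rates are invented
# illustrative assumptions, not measurements.

p_haunting_dream = 1 / 1000    # a given night yields a dream vivid enough
                               # to intrude on the next day
p_rare_event = 1 / 5000        # a given day contains a startling rare
                               # event (glass in food, etc.)
p_same_theme = 1 / 20          # dream and event happen to share a theme
days = 30 * 365                # ~30 years of adult life

p_per_day = p_haunting_dream * p_rare_event * p_same_theme

# Chance at least one matching dream/event pair occurs in a lifetime:
p_lifetime = 1 - (1 - p_per_day) ** days   # roughly 1 in 10,000

# Across a large population, many people get such a story by chance alone:
population = 100_000_000
expected_witnesses = p_lifetime * population   # on the order of 10,000
print(p_lifetime, expected_witnesses)
```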
Well, I’m getting my karma eaten so I’ll return to being quiet about these events. :-)
http://www.unicornjelly.com/oldforums/viewtopic.php?p=135082
I do not believe that the Singularity is likely to happen any time soon, even in astronomical terms. Furthermore, I am far from convinced that, even if the Singularity were to happen, the transhuman AI would be able to achieve quasi-godlike status (i.e., it may never be able to reshape entire planets in a matter of minutes, rewrite everyone’s DNA, travel faster than light, rewrite the laws of physics, etc.). In light of this, I believe that worrying about the friendliness of AI is kind of a waste of time.
I think I have good reasons for these beliefs, and I operate by Crocker’s Rules, FWIW...
Anything that does not have sufficient intelligence to be considered a threat does not even remotely qualify as a ‘Singularity’. (Your ‘even if’ really means ‘just not gonna happen’.)
Anything that cannot “reshape entire planets in a matter of minutes, rewrite everyone’s DNA, travel faster than light, rewrite the laws of physics, etc” cannot possibly be intelligent enough to qualify as a threat? That seems an odd statement, given that some of those are thought to be impossible.
No. That isn’t implied by what I said.
The relevant sentence is “In light of this, I believe that worrying about the friendliness of AI is kind of a waste of time”. If that to which the label ‘singularity’ is applied is not sufficiently powerful for worrying about friendliness then the label is most certainly applied incorrectly.
As I’d already mentioned, I am far from convinced that a sufficiently powerful AI will emerge any time soon. Furthermore, I believe that such an AI will still be constrained by the laws of physics, regardless of how smart it is, which will put severe limits on its power. I also believe that our current understanding of the laws of physics is more or less accurate; i.e., the AI won’t suddenly discover how to make energy from nothing or how to travel faster than light, regardless of how much CPU power it spends on the task. So far so good; but I am also far from convinced that bona fide “gray goo” self-replicating molecular nanotechnology—which is the main tool in any Singularity-grade AI’s toolbox—is anything more than a science fictional plot device, given our current understanding of the laws of physics.
Maybe supersmart AIs are so good at disregarding the known laws of physics that they exist already.
I find it amusing that there are actual mechanisms that “our current understanding of the laws of physics” predicts will allow both of these (zero-point energy and Alcubierre drives, respectively).
The Alcubierre drive is a highly speculative idea that would require exotic matter with negative mass, which is not considered possible according to mainstream theories of matter such as the Standard Model and common extensions and variations.
Zero-point energy is a property of quantum systems. According to mainstream quantum mechanics, zero-point energy can’t be withdrawn to perform physical work (without spending more energy to alter the underlying physical system).
Among the perpetual motion/free energy crowd, zero-point energy is a common buzzword, but these people are fringe scientists at the very best, and more commonly just crackpots or outright fraudsters.
Ah … no.
Not exactly. ZPE has measurable and, in some cases, exploitable effects. I’m not saying it’ll ever be practical to use it as a power source (except maybe for nanotech) but it can most definitely be used to perform work. For example, the Casimir effect. I note that Wikipedia (which I can’t edit from this library computer) makes this claim, but the citation provided does not; I’m not sure if it’s a simple mistake or someone backing up their citation-less claim with an impressive-sounding source.
Well yeah, anyone claiming to have an actual working free energy machine is lying or crazy. Just like anyone claiming to have flown to Venus or programmed a GAI. Likewise, anyone claiming to have almost achieved such technology is probably conning you. But that doesn’t mean it’s physically impossible or that it will never be achieved.
Uhm, I’m not a physicist, but that’s a short paper (in letter to the editor format) regarding wormholes, which was published in 1988. The Alcubierre drive was proposed in 1994. Maybe somebody used an FTL drive to go back in time and write the paper :D
Anyway, while I don’t have the expertise to properly evaluate it, the paper looks somewhat handwavy:
One can imagine the Moon being made of cheese, but that doesn’t make it physically plausible.
AFAIK, there are multiple interpretations of the Casimir effect, but in most of them it is maintained that the phenomenon doesn’t violate conservation of energy and can’t be used to extract energy out of the quantum vacuum.
It can, in theory, be used to convert mass to energy directly. Bias quantum foam flux over an event horizon (and this need not be a gravitational event horizon; an optical one ought to work) and one side of the horizon will radiate Hawking radiation, while the other will accumulate negative-mass particles. These should promptly annihilate with the first bit of matter they encounter, vanishing back into the foam and clearing the energy debit of the Hawking radiation—effectively making the entire system a mass->energy conversion machine. Which does not violate CoE.
One second.. http://arxiv.org/pdf/1209.4993v1.pdf
AKA: A theoretical way to make a mass-annihilation powered laser amplifier. No way to tell if this is good physics without actually building the setup, but the theory all seems sound.
Eh… Only.. Do not point that lab bench at me, please? The amplification ought to stop when the diamond turns into a plasma cloud..
I’m not sure I understand what you mean. Sure, assuming that Hawking radiation exists, you could use a black hole to convert mass to electromagnetic radiation (although the emission power would be exceptionally low for any macroscopic black hole).
That paper seems to be discussing lasers with non-linear optical media.
Anyway, AFAIK, in physics, the term ‘annihilation’ is typically used in the context of matter-antimatter reactions. Both matter and antimatter have positive mass.
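For what it’s worth, the “exceptionally low” emission power can be estimated from the standard Hawking power formula P = ħc⁶ / (15360 π G² M²). A quick sketch, using standard constant values and the Sun’s mass as the macroscopic example:

```python
import math

# Hawking radiation power of a Schwarzschild black hole:
#   P = hbar * c^6 / (15360 * pi * G^2 * M^2)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg

def hawking_power(mass_kg):
    """Total radiated power (watts) of a black hole of the given mass."""
    return HBAR * C**6 / (15360 * math.pi * G**2 * mass_kg**2)

print(hawking_power(M_SUN))  # roughly 9e-29 W: utterly negligible for a solar-mass hole
```

Since the power scales as 1/M², only absurdly tiny black holes radiate appreciably, which is why the “macroscopic” qualifier above matters.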
The point is that if Hawking radiation is a physical phenomenon, then any event horizon should produce it, not just a gravitational one - and the non-linear optical medium forms two optical event horizons, which the laser pulse bounces between, picking up more input from Hawking radiation each turn around. Very clever; the limit should be the optical properties altering when the diamond sublimes into a fine carbon plasma.
Might be an energy source that makes fusion look like cave men burning dried dung, might be a way to disprove the physicality of Hawking radiation, might be a lab demonstration that it exists that cannot be engineered to the point of net energy gain (you have to fire quite powerful lasers into the diamond to set things off. Even if it amplifies the laser pulse a lot, there’s no guarantee you can get enough electricity back out to net positive.) Currently, it is simply an interesting computer simulation.
Strictly speaking, you’re completely destroying the mass, but in the process gaining equivalent energy from nowhere. Of course, it balances out in the end.
I understand (I can’t get past the paywall) that it describes how the Casimir effect creates an area that violates the positive energy condition, proving that it’s not a law of physics. This is only part of their more general point (which is time machines, which are, of course, equivalent to FTL drives in any case. Harder to build though.)
The quote is handwavy. Then again, I don’t know much about quantum foam. OTOH, considering their paper concerns a mechanism for holding wormholes open, it’s not an unreasonable proposition (and it’s not the only way to get a wormhole, after all, merely a possible way.)
The Casimir effect isn’t the only example. ZPE keeps liquid helium liquid and probably contributes (although it’s not the only contributor) to the expansion of the universe. Conservation of energy simply doesn’t apply on a quantum scale; it’s an emergent property of quantum mechanics, like, say, chairs.
IIUC, while the Casimir effect has been observed, it is still debated whether it is actually evidence for the vacuum zero-point energy, since the calculations aren’t completely developed and there are other proposed mechanisms.
Anyway, even in the vacuum zero-point energy explanation, the vacuum energy density in the geometrically constrained region is still positive, it is just smaller than the vacuum energy density in the unconstrained empty space. It’s only negative if you arbitrarily consider the energy density of empty space equal to zero.
Without a theory of quantum gravity, the speculative connection between vacuum energy density and gravitational effects (the cosmological constant) is highly debatable: typical attempts at calculating the cosmological constant from vacuum energy yield absurdly high values, while astronomical observations are consistent with a very small strictly positive cosmological constant.
Even if the vacuum energy density generates gravitational effects by influencing the cosmological constant, the lower than average energy density of a “Casimir vacuum” is probably not the same thing as the absolutely negative gravitational effect of exotic matter with negative mass, which, IIUC, is required by the Alcubierre drive (I don’t know about wormholes).
BTW: I’ve found this post on Physics Forums
EDIT:
And in any case, the Casimir effect can’t be used to extract energy out of nothing: the Casimir forces are attractive or repulsive depending on the geometric configuration. If you use these forces to extract work, the system will eventually transition to a configuration where the attractive and repulsive effects are balanced. You have to pay back the same work you extracted to return the system to the original configuration. You can’t complete a cycle with a net gain.
This is the same problem as most of the proposed perpetual motion contraptions: you can extract work in a one-shot transition, but you have to perform the same work on the system (actually more, once you account for the inevitable thermodynamic losses) to return to the initial configuration.
You know, you’re right. ZPE is far less certain/accepted than Alcubierre drives.
I’m going to go on being amused just the same, though. Those really were unfortunate examples to pick :)
Thanks ;)
Just one last technical nitpick, if you don’t mind: zero-point energy is a property of all quantum systems, and this is essentially uncontroversial. The existence of a quantum vacuum with a positive zero-point energy is considered less certain, but relatively plausible in mainstream models such as the Standard Model. The idea that it is possible to extract work from the zero-point vacuum energy is generally considered wild fringe science speculation/crackpottery/fraud.
I was referring to using it as an energy source, as in the original comment.
That seems a little strong. Still, it’s certainly impossible with current tech, and there’s no method anyone’s come up with to do it with a higher tech level.
It’s not just matter of technology. Such a feat would most likely require a violation of the principle of conservation of energy. While there are still some unresolved issues with renormalization and general relativity, it is generally believed that conservation of energy applies to the universe. The discovery of a violation of conservation of energy (which would imply that the laws of physics are not invariant under time translation) would be a groundbreaking result.
Wrong link? The abstract (full text is paywalled) says:
I don’t see any connection to Alcubierre drives. Classic Kip Thorne, though.
Without even pretending to be anything other than an amateur layman in such questions, I found this on arxiv, quote:
(Lastly, if you’re wondering why I’m replying to you a lot, it’s just because you are a prolific commenter with whom I occasionally disagree.)
looks embarrassed
I just grabbed a citation from someone talking about how the Casimir effect can be used to create negative energy (in the context of stabilizing wormholes.) I should probably have checked that; I would have found it wasn’t actually in the abstract.
Nevertheless! My point was that negative energy is pretty obviously physically possible, since it’s what predicts the Casimir effect. (There have been some attempts to claim the CE is actually predicted by other theories, but that’s not widely accepted.)
From what I understand it may be closer to say “doesn’t rule out” rather than “predict will allow”. Even that much of a possibility is somewhat mind-blowing.
Um, the current definition of speed prohibits FTL motion.
Only locally. And ‘local’ is rather malleable (which is the principle Alcubierre drives theoretically rely on).
It’s distance and time which are more malleable; if light travels through a vacuum and arrives in x time, the arrival point is defined as being x distance away from the departure point of the light when it arrives. The Alcubierre drive would (given a couple of facts not in evidence) allow you to change the distance. Light emitted from you at the time of departure would still beat you to the destination.
Sure, so long as the space being traversed remains consistent. Which it doesn’t (always) given General Relativity. Hence Alcubierre drives.
No, it wouldn’t. The drive in question is described thus:
Notice the link there to faster than light travel. That title is a literal description.
For emphasis: This is General and not Special Relativity.
Have you finished reading the paragraph the sentence you quoted comes from? And section “Alcubierre metric” from that article, in particular the fourth sentence?
Of course I have finished reading the paragraph. As soon as I encountered the notion. Because math that allows what is for most intents and purposes a warp drive within general relativity is freaking awesome. Even if it relies on pesky things like negative mass and ridiculous amounts of energy. Oh, and would utterly obliterate the destination. Still damn cool.
In answer to the presumed (and I hope I’m not misrepresenting you here) rhetorical intent of “The first paragraph demonstrates that your claim is wrong” I would (unsurprisingly) disagree.
The paragraph in question is:
And, assuming I can count periods correctly, the fourth sentence in the passage you refer to is:
Both of these are precisely correct. And the claim:
Is false. Light continues to be faster than you locally. That is, within the bubble. And the bubble goes faster than the speed of light. Light not inside such a bubble goes at the speed of light. You can get to a destination before that light does. Which is the entire point of a Faster Than Light drive.
I was assuming Decius wasn’t assuming the light doesn’t go through the bubble.
Rather than not assuming that it doesn’t, he would need to be actively assuming that it does, or he would have to make a different, more specific claim. And that more specific claim (that applies to light that travels in the bubble) would not have supported the point Decius was using his claim to make in the context.
What happens when the bubble overtakes light? Per my understanding, virtually all of the light emitted within the bubble in the direction of travel ends up in one front at the front of the bubble, along with all of the light overtaken. All of the matter overtaken accumulates along the edge of the bubble where space warps at their velocity, after experiencing some effects of misunderstood severity when space warps around them (does interstellar atomic H-1 fuse into He-2 when the distance between atoms falls inside the region where the strong force dominates? What happens when a solid chunk of mostly Pb-206 has the distance between multiple atoms reduced to below the range of the strong force?). What happens when the bubble overtakes the gravity effect of the matter within the bubble?
‘Obliterates the destination’ might be a little bit of an understatement.
Do you mean ‘poorly understood’ rather than misunderstood? When talking about using negative mass and enormous energy to warp space itself to travel faster than freaking light, most with even the most rudimentary grasp would see that the effects are inconceivably severe in relation to such a small object and bubble. Being unable to conceive of the scope, or currently not knowing in detail, is a very different state of knowledge from misunderstanding. At least, it is a difference in epistemic states that seems rather important to me. (A wrong map leads you to walk into quicksand. A known-to-be-incomplete map leads you to watch where you are walking or google up a better one.)
The same thing that happens when I overtake a car. I go around. What I definitely do not do is go around saying “Um, the current definition of speed prohibits FTL motion” because whenever I am racing light I must handicap myself and take the light I am racing along with me for a ride.
I meant “Everything that we currently understand about the phenomenon is almost certainly completely wrong.” That’s after accounting for what we know we don’t know.
How, exactly, do you “go around” a wavefront which is propagating out from you in all directions? I’m still hazy on what the effects of autogravitation would be; once you overtake the light/gravitational effect from you, are you accelerated towards your prior location proportionally to your mass and the inverse of the cube of distance from yourself?
Of course, if the travel is at some speed slower than that of light, no self-interaction effects are required. The warp drive doesn’t beat the speed record, it beats the distance record.
Describing a method of travel as “faster than x” when x departs at the same time and arrives before you is the opposite of plain language. Distance and elapsed time between events is already agreed to be not constant even between colocated observers. That is an effect of postulating that light propagates at the same speed for all observers.
If you leave Earth in a spaceship using an Alcubierre drive and simultaneously have someone emit a radio signal from Mars, reach Alpha Centauri in half an hour, then observe that same radio signal arrive at Alpha Centauri after approximately six years of lounging around on an alien planet, then you have beaten that light to your destination. You have traveled faster than light.
Your other questions, about your own gravitational force, I assume need to be answered by an actual expert on general relativity.
What do you mean by “simultanously”? You’ve used it to refer to events which do not occur at the same place.
I think that you’ve shown that the distance between your departure point and Mars is six light years; you’ve done that by moving space around such that the point you departed from is in the vicinity of Alpha Centauri.
Space and time aren’t defined to be static in the way the math I understand requires it to be.
The details are not significant. Simultaneously in the rest frame of earth. Whatever. Or send a timing signal from Mars to Earth at the same time as the radio message is emitted toward Alpha Centauri, then leave Earth when you receive the timing signal. You’ll still arrive before the radio message, even though you’ve given it a head start.
The distance between Earth and Mars is 225 million km on average, or 12.5 light minutes.
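The 12.5-light-minute figure is easy to verify:

```python
# Average Earth-Mars distance expressed as a light travel time
C_KM_S = 299792.458        # speed of light, km/s
distance_km = 225e6        # average Earth-Mars distance, km
minutes = distance_km / C_KM_S / 60
print(round(minutes, 1))   # 12.5
```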
If you like, you can send your radio message from the same location as your departure point. First emit a (directional) radio signal from earth toward Alpha Centauri. Then depart in your spaceship, just making sure not to collide with the radio signal on your way there (go a different way, say by taking a pit stop at Vega). You’ll still get there before the light signal.
In a sense, yes, that is exactly what an Alcubierre drive is meant to do. The trajectory that starts at Earth, enters the bubble, sits there a while, exits the bubble and arrives at Alpha Centauri travels “locally” less than six light years. The bubble train might be analogised to a wormhole in that it establishes a shorter path between two otherwise distant places.
But unlike a wormhole, the Alcubierre drive doesn’t require you set up the path and destination in advance (unless Krasnikov is right, and there aren’t any tachyons), and it’s an effect confined to the vicinity—in space and time—of the ship using it. So in all meaningful senses it can reasonably be described as a faster than light drive, as opposed to a bridge, which is what a wormhole is.
That ‘directional radio signal’ is taking a longer path, as noted by the fact that a different directional radio signal (one that went with the traveler) would get there first.
Are you using a Euclidean definition of speed? Part of the insanity is that the payload, inside the bubble, can be at rest relative to the origin and/or destination, despite the distance changing.
Sanity check: before, during, and after the trip, shine a laser continuously ‘forward’, toward the destination. Turn off the bubble well short of arrival. What pattern of redshifting should the destination expect to see?
I’m sure it only looks like insanity to people who haven’t studied general relativity.
The point is that an Alcubierre drive lets you get from here to Alpha Centauri (which I now discover is actually 4.4 light years away, since I finally decided to look it up just then) in less than 4.4 years. Whether it does that by temporarily making the distance shorter along a certain path is mostly irrelevant for the purpose of classifying it as a particular kind of starship drive.
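To put a number on “less than 4.4 years”: dividing the distance by a hypothetical trip time gives an effective speed as a multiple of c. The two-week trip below is purely an illustrative assumption, not a figure from any Alcubierre analysis:

```python
DISTANCE_LY = 4.4          # Earth to Alpha Centauri, light years
trip_years = 14 / 365.25   # hypothetical two-week trip (illustrative assumption)

# Distance in light years divided by time in years gives speed in units of c
effective_speed_in_c = DISTANCE_LY / trip_years
print(round(effective_speed_in_c))  # ~115: effectively 115 times the speed of light
```

Any trip time under 4.4 years gives an effective speed above c, which is all the classification argument needs.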
The point which started the discussion is that you don’t get to look back and see yourself leave. (Probably; I’m not certain how light behaves when there is more than one ‘straight line’ path, of different lengths, to the destination; that seems like it could happen if you took a dogleg around the most direct path.)
The radio signal and the ship leave from points that are near each other in the space-time metric. In other words, simultaneous from a reference frame in which they are physically close.
You’ve moved space around, but only for a small local (space-time wise) area; you haven’t permanently moved the two stars closer together.
If the radio signal ever touches the bubble, it arrives before/with the non-light content of the bubble.
The point of departure is now six years away from points that it was previously nearby.
Imagine a strip of topology rubber running the length of the trip; you start next to one end, but instead of moving along the strip, you compress it in front of you and stretch it behind you.
And in any case, you’ve moved a ‘cylinder’ of spacetime roughly 6 light years long. Just because you’ve expanded just as much as you’ve compacted doesn’t mean you’ve expanded the ‘same’ spacetime that you’ve compacted.
So go around the radio. Or use a laser beam or high energy particle beam (near-c, not c, obviously) if you’re worried about diffraction and aiming or refraction of your bubble.
When you get there and turn off the warp drive, space is now flat. (We’ll assume no one else is making the journey recently / soon / nearby / whatever.) You’re saying the original point of departure is now near where you ended up. I say that’s a distinction that doesn’t matter, and all that’s relevant is that you were near one star, now you’re near another, and at no time were those stars near each other. And you got there faster than a photon / high energy particle / whatever could have, via the normal route.
What experimental result do you anticipate, that distinguishes between the “original departure point” having moved, versus my assertion that all points in space are distinguishable only by things like what matter / energy is occupying them (and the curvature that results)?
A sufficiently flexible braided rope, fixed to Earth and some point beyond the destination, with a splice in it at the point of departure: the splice will end up at the point of arrival, but the number of braids on either side will remain constant and no tension will be noted at either end.
A lack of time-dilation effects on the transported cargo: an atomic clock that made the round trip would remain synced with one that didn’t, showing that it hadn’t moved.
I’m saying that the path you took is shorter than the naive one. There is no meaningful discussion of instantaneous distance between two points/objects in general relativity; that’s a holdover from Euclidean geometry with time-variable additions. Finally, the math.
I don’t and didn’t say that. It is plausible or even likely that you are not being deliberately disingenuous here or in your recent comments, but the effect on my expectation of future replies being sequitur is the same regardless of the cause.
I refer either to my previous comments or to the relevant Wikipedia article for any future reference.
Er, I think you were substantially less clear than you seem to think you were.
Let it be known henceforth that for all X (where X includes ‘wavefronts that are propagating out from you in all directions’) when I do not mention X and where a claim of X by myself is actively ruled out by multiple comments of mine and would be a trivial contradiction of basic physics (well, comparatively basic physics) then I do not claim X.
The “That’s not a straw man, you just aren’t clear” social move is rather flexible, particularly when used in response to even moderately subtle goalpost-shifting. (I.e., by default it will be supported and assumed to be pro-social by all those who are not interested or have not been following the context.) Nevertheless, I consider it safe to say that those who read the context and still believe that this comment can be legitimately interpreted as a valid reply to the previous comments are sufficiently poor at keeping concepts distinct as to be way out of their depth when trying to comprehend the implications of novel, probably counterfactual physical phenomena such as Alcubierre drives.
My comment was meant to be a data point that IMO Decius’ misinterpretation of you is not as unjustifiable as you think it is, and I would rather see less indignation if possible, as it makes reading the recent comments section much less fun. I thought this data point would be useful as it is coming from someone not actually involved in the conversation at hand and hence with presumably less motive for social maneuvering. If it gets voted below −2 then I would assume that I’m in a minority that’s not good at understanding your posts.
For the record, the exact sequence of statements that prompted me to say that the issue was lack of clarity as opposed to something else:
The reasonable interpretation of your “I go around” statement isn’t the one that occurred to me first. Should Decius have spent more than 20 seconds puzzling out a model for you that doesn’t mean something bizarre by that statement? Possibly. Sometimes it’s faster just to ask what the other party meant (Decius could have done a better job of this). Should you have spent more than 20 seconds considering whether that statement had obvious misinterpretations? Possibly. It’s difficult to predict how people will misunderstand one’s own statements.
I thought it was clear that he was saying that he was overtaking something which was traveling in the same direction and going faster than him. I probably read a little bit too much into it, thinking that he was intending to win by driving the bubble in a ‘path’ that went ‘around’ the ‘straight line’ between the start and finish, not distorting any of the space through which the ‘direct’ radiation was traveling. (Quotes because the terms aren’t strictly meaningful.)
In other words, he was ‘going around’ the light he was beating. I was pointing out that he didn’t just have to go around a ray of a photon, he had to go around a wave expanding in all directions, and that the ‘region’ of ‘compressed space’ would also help that wave arrive at the destination ‘sooner’, regardless of the method used to ‘go around’ it.
I understand recent formulations are better in this regard.
I assume you refer just to the ridiculous amount of energy required being a half dozen orders of magnitude less ridiculous than first calculated? Not that there are actually formulations that don’t require negative mass? Or don’t obliterate the destination?
Oh, yeah. They’re smaller, but they still need negative energy and they still obliterate anything directly in front of them—although that’s hardly an impossible drawback.
Definitely. Especially when it comes to one of the first uses people would consider putting this (or most other) technology toward. No need for a payload!
Travel, on the other hand, is a much looser term. Alcubierre drives, in theory, travel faster than their speed would suggest by distorting space. Until recently they were merely interesting mathematical curiosities, but new variations that could be constructed by a non-godlike tech level have since been discovered.
Fair enough. I might recommend cutting your quote down to the relevant bit for clarity and brevity. I should have got your intended meaning with a few more cycles invested, but anything you can do to make the reader’s job easier is a win.
Ok, I put some [...] in.
What dlthomas said. A hyper-intelligent AI could still pose a major existential threat, even if it did not have something like gray goo at its disposal. For example, it could convince us puny humans to launch our nuclear arsenals at each other, or destroy the world’s economy, or come up with some sort of a memetic basilisk, etc. Assuming, of course, that such an AI could exist at all (which I am quite uncertain about), and that such feats of intelligence are in fact possible at all (I kinda doubt that basilisk one, for example).
See reply to dlthomas.
That is a feat of intelligence that humans can achieve, moreover it is one that humans have already achieved. It isn’t a spectacular feat of intelligence at all and any significant intellectual challenge involved is on the part of the individual working out how to respond in light of such considerations.
Retraction: Bugmaster meant something different when talking about ‘that basilisk’ than I expected.
What… really? You mean, there’s a bitmap I can show to someone, or a song I can whistle, or a passage I can read, which will immediately make my victim drop dead (or become catatonic, or actually non-metaphorically insane)? This sounds to me like an extraordinary claim, and I’d like to see some evidence. Er, please don’t show me the actual basilisk on the off chance you do have it in your possession :-)
How tightly are we defining memetic basilisk? It’s obviously possible to talk some people into getting themselves killed.
It isn’t too hard to talk them into wars either—especially if you first talk someone into getting themselves killed in an appropriately provocative way. Or even just the right person.
Destroying humanity with mere words seems like a comparatively trivial task from the perspective of “is it even physically possible to do with intelligence?”.
I wish I could upvote this a second time solely for the understatement.
I don’t know whether this is true or not; there seems to be supporting evidence either way. It’s true that you can point to many historical events when a seemingly well-placed murder, or just a well-placed word, sparked a major war. However, in many (if not most) of these cases, the local culture was on the brink of war anyway, and thus the well-placed murder wasn’t as well-placed as it appeared—because the critical mass could be achieved by killing virtually anyone, or even simply by doing nothing but waiting a few years for war to erupt.
Yes, and some people will kill themselves spontaneously even if you don’t talk to them (or even especially if you don’t talk to them). However, AFAIK there’s no generally applicable mechanism that you can use to talk any arbitrary person into killing himself, with a high degree of reliability.
I think it’s worthwhile to separate out intentions, plans, actions, and consequences for this definition. If you see memes as intentions or plans, it’s odd to see a meme touted as being a consequence (“if you see this bitmap, you will die”) rather than an intention or plan that leads to a consequence (“if you slit your wrists, you will die”). The latter obviously exist, the former seem like a definition error.
I believe that some improvements in rationality have negative consequences which outweigh their positive ones.
That said, it might be easy to make too much of this. I agree that, on average, marginal improvements in rationality lead to far superior outcomes for individuals and society.
So, you believe that “It’s dangerous to be half a Rationalist”. Literally part of the sequences by now. A good thought but probably shared by many here by now :)
Could you give an example of such a negative consequence?
When I really get depressed I speculate that drug abuse could be the explanation of the Fermi Paradox, the reason we can’t find any ETs. If it were possible to change your emotions to anything you wanted, alter modes of thought, radically change your personality, and swap your goals as well as your philosophy of life at the drop of a hat, it would be very dangerous.
Ever want to accomplish something but been unable to because it’s difficult? Well, just change your goal in life to something simple and do that; better yet, flood your mind with a feeling of pride for a job well done and don’t bother accomplishing anything at all. Think all this is a terrible idea, and stupid as well? No problem, just change your mind (and I do mean CHANGE YOUR MIND); now you think it’s a wonderful idea.
Complex mechanisms just don’t do well in positive feedback loops: not electronics, not animals, not people, not ETs, and not even Jupiter brains. I mean, who wouldn’t want to be a little bit happier than they are? If all you had to do is move a knob a little, what could it hurt? Oh, that’s much better… maybe a little bit more, just a bit more, a little more.
The world could end not in a bang or a whimper but in an eternal mindless orgasm. I’m not saying this is definitely going to happen but I do think about it a little when I get down in the dumps.
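The runaway-knob worry above is essentially a positive feedback loop, which a toy model makes vivid. The "gain" here is an entirely made-up stand-in for "each increase in pleasure increases the urge to turn the knob further":

```python
def run_feedback(gain, steps=50, level=1.0, cap=1e6):
    """Toy feedback loop: each step the level is multiplied by gain.
    gain > 1 models positive feedback (each increase prompts a further increase);
    gain < 1 models a damped, self-limiting system."""
    for _ in range(steps):
        level *= gain
        if level >= cap:
            return level  # runaway: the loop hits the ceiling long before the steps run out
    return level

print(run_feedback(1.5))  # positive feedback: blows past the cap
print(run_feedback(0.9))  # damped feedback: decays toward zero and stays bounded
```

The qualitative point is that anything with gain even slightly above 1 diverges exponentially; stability requires something external (a load, a cost, a difficulty) pushing the gain back below 1.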
Doubtful. The first person to invent an ‘expansionist’ drug, that turned users into hyper-competitive, rapidly-reproducing, high-achieving types—basically, a pill for being a Mormon—would have lots of offspring, lots of success, etc. Many people choose to abuse heroin, but many people also choose to abuse Adderall, or to use Piracetam or other similar substances. The success-druggies will outbreed and outcompete the orgasm-druggies, leading to more intense success-drugs and perpetuating the cycle.
What you’ve just said is a perfect example of the way in which the “far” brain’s intuitive modeling of minds inaccurately predicts REAL human behavior, especially with respect to emotions.
Positive motivation actually consists of associating a positive emotion with goal completion… and this requires you to have a taste of the feeling you’ll get when you complete the goal. (i.e., “Oh boy, I can almost taste that food now!”).
So what actually happens when you give yourself the feeling of pride in a job well done, before the job is done? You get more motivated, not less, as long as you link that emotion to the desired future state, as compared to the current state of reality.
It’s worth us worrying about as far as our future is concerned, but to be the sole explanation of the Fermi Paradox (rather than just a contributing factor) it would have to have happened to at least an overwhelming majority of extraterrestrial civilizations, many of whom would presumably have considered the problem beforehand.
I’ve wondered this too. Without the impedance of difficult goals, the amperage of intelligence drives up the voltage of pleasure; total wattage spikes for a brief moment, then the whole system burns out in a whiff of blue smoke. Rationality must be driven into some kind of load, else things tend to fail spectacularly. (You can probably tell I’ve spent too much time worrying about amp/speaker configurations.)
Most big issues that people (especially males) spend time on are not really worth bothering with.
Don’t most of us believe this?
But people don’t know that what they do isn’t worth doing, so “not worth” becomes a weasel word, prone to arbitrary interpretation. They do what they believe to be valuable, and what they do is valuable, the question is how valuable. It’s clearly not maximally valuable, but even a superintelligence won’t be able to do the maximally valuable thing, only the best it can, which is “the same” situation as with people.
I don’t think this qualifies as a belief; it’s just something I have noticed.
My dreams are always a collection of images (assembled into a narrative, naturally) of things I thought about precisely once the prior day. Anything I did not think about, or thought about more than a single time, is not included. I like to use this to my advantage to avoid nightmares, but I have also never had a sex dream. The fact that other people seem to have sex dreams is good evidence that my experience is rare or unique, but I have no explanation for it.
My nightmares are some of my most interesting dreams, so I don’t try to avoid them.
I used to have really interesting nightmares too. Unfortunately, nightmares need a charge of fear to sustain them, and I haven’t really been afraid of anything in the last few years, so no more nightmares. My dreams have been a lot more disorganized and less memorable since.
There’s a lot of nonsense I daydream about, like how it seems like my life is actually repeating itself again and again, as if I was stuck in a time loop and was the only person to faintly remember bits of those preceding iterations. I like to play pretend with such ideas, though I don’t believe in them in the rational sense, more in the “I don’t believe in ghosts but I’m still creeped out at night” sense.
The closest I come to believing something rationally, which is still not rational in the purest, Occam sense, is that we may be living in a simulation that is running in a reality that is ontologically different from ours. After all, if we were running in a simulation, why should it be run by our descendants, or even in a universe like the one being simulated? To assume so is to fall for an observation selection bias, I think. Why not from a place where “place”, “running” and “simulation” do not necessarily take the same meaning as they do here?
Like, you know, it is common to muse about universes with different physical rules and constants; I’m just taking this a step further: a reality whose rules of “mathematics” would encompass and supersede ours. That is, there would be mathematical, or ontological, principles that would exist up there, but not here. We would be prisoners in an ontologically impoverished reality, without even the tools to understand the higher realm, let alone break out of ours.
In such a reality, the equivalent of mathematics would not obey Gödel’s theorems: it would be both consistent and complete, with every true statement provable. That would need and imply at least one supplemental axiom there, one that does not exist here, that would permit it and open a whole new branch of mathematical truths and possibilities.
Like, if we all have a God-shaped hole in our soul, then mathematics has a Gödel-shaped hole in its own, and I wanted to imagine what it’d be like to have it filled.
I don’t really see how we could ever prove or disprove that, though. Maybe some variation of that idea might be falsifiable. If not, then it’s an irrational belief too.
It seems to me that a form of modal realism and a strong version of the Simulation Hypothesis (not just a large fraction of all observer-moments in apparently pre-Singularity civilizations are simulated, but a large fraction of all observer-moments period) are substantially more likely than not. Others whom I respect emphasize the extent of our current confusion about anthropics, etc, so I assign a lower probability than I would based only on my impression, but I haven’t fully exchanged private info.
That the exaggerated use of “rationality” is not rational. That many of the contributors to LessWrong are regurgitating Yudkowsky and acting as disciples. In human affairs there is seldom one right answer. The certainty of numbers is misleading with that which cannot be measured. Rationality cannot be achieved with a checklist or any other standardised form. The discipline of being honest (whatever that means and the many years that takes) is more important than overcoming cognitive bias. The calm of OB is more pleasant than the frantic commercialism of LW’s karma system. All this religion-debate is completely uninteresting for Northern Europeans. America by day must be like Disneyland after Dark. The average age of contributors to LW is 25. The average mental age is 15. Only one contributor has emerged with the stature of Hanson and Yudkowsky. A sense that less is possible.
I think there’s a strong impulse in many people here to idolize Eliezer; I know that for myself he’s one of the only persons, if not the only one, who manages to really awe me. The questions would be: does that go against the objective of building a rationalist community? Do we want that community to begin with? And if the answer to both is yes, then what can we do, as aspiring rational gentlemen, to do better than so many failed communities that fell for X or Y, such as cultishness or whatnot?
I think we have our chance, and, like, a sense that more is indeed possible, regardless of what Eliezer or anyone else said.
Ditto for the karma system; it has much potential to degenerate into a vain collection of e-status.
I’m 25 too btw.
The fact that his post “Don’t Believe You’ll Self-Deceive” currently holds only 3 points of Karma (it was made 6 days ago) is strong evidence that Eliezer isn’t as blindly worshiped as it may seem. It was a weak post, and the rating reflects that well.
The opposite can be happening too: people may be overly critical of EY.
How can you object to the karma system when it isn’t explained anywhere? Until it is, it’s just mysterious numbers.
Give us a LW FAQ on karma, please.
I comment. People read. They vote up, I win, it must look like I’m rational (which is a good thing in a rationalist community). My status gets better. I like status, don’t you?
At karma 20, I win even further, as I can post my own articles. Why shouldn’t I post as many comments as possible, to get there as soon as possible? Like, by replying to my own comment rather than editing it? Or posting something pretty obvious wherever I can, even though it won’t significantly add anything?
I am voted up or down, you see this, your own vote will be influenced (anchored).
I am amongst the first people to comment, others have more time to vote me up or down, amplifying the initial effect of the combined quality of my post and the biases of those judging it.
I comment later, quite a few people won’t bother sifting through 90 comments most of which they have been reading already, my post won’t be noticed.
That’s what I can think of off the top of my head; pretty sure there’s more.
Do the benefits outweigh the costs?
This seems like the relevant question.
I strongly suspect the karma system is “very similar” to reddit’s karma system.
Plus, you know, LW is open source, so if you were really curious, you could find out exactly how it works.
Someone figured out General Relativity without being given an explanation. I figured out the karma system, as well as gave a reasonable interpretation of how it impacts me personally and the community in general.
I have all the information needed to object, were I so inclined.
Where do you get “commercialism”? There is no benefit from karma points after you get 20 and can post. I think you are confusing “status seeking” with “commercialism”. As an aside, I have noticed before that many socialistic weenies seem to equate everything they think bad with “commercialism” or “business”. Also, commercialism is superior to status seeking in that status seeking is a zero-sum game unlike free market economics.
Ironically, Paul Graham gets exactly the same accusations on Hacker News.
But this is really a serious concern. Brushing it away with a comment along the lines of “Well, they also say that about obviously non-cultish figure X; how silly!” is like—like—well, um...
--failing to pump against entropy, as it is written that “Every Cause Wants to Be a Cult.”
Downvoted.
Still, how would we go about determining whether this accusation was true? I think that EY is a smart cookie who writes engagingly on an important topic that doesn’t get enough attention; is that in itself disciple-like behaviour, and if not, how should we determine whether that’s enough to account for my behaviour, or whether the additional disciple hypothesis is warranted?
Re: All this religion-debate is completely uninteresting for Northern Europeans.
Except for Richard Dawkins. He carries on as though theistic religion is still a live issue. I still don’t really understand that. The Dawkins gutter-outreach program. Maybe he spends too much time in the US?
Let’s not forget that the US isn’t the only place where religion is a problem. The Middle-East isn’t exactly a stable and enlightened place, for the most part.
I think that what Dawkins does is marvelous, if only because he’s helping to break the taboo that religion is somehow above criticism and exempt from the standards that apply to everything else.
This helps people be rational about it, i.e., being non-religious for the ‘good’ reasons, instead of for the same reasons why others are religious (they were raised that way, inertia, social pressure, etc.).
I think that it’s worth striking against religion in the US because it is so strong, and worth striking against it in the UK and Europe because it is so vulnerable.
Deep down I believe in some sort of afterlife because my brain is unable to handle the concept of not being alive.
A better (but more confusing) way of saying it might be “I don’t believe in an afterlife, but my brain does”.
I believe that every single social interaction is linked to power/hierarchy. (See Robert Greene’s books.)
I also believe that most on LW simply opt out of their local/most proximate hierarchy (and they may actively and/or secretly seek to discredit it), as Paul Graham did in high school. In one of his articles he talked of how he wanted to be more intelligent than popular. That is dominance in one field instead of another. (A tip to entrepreneurs is to aim to be #1 in your field or not start at all.)
If it’s not their most proximate hierarchy then it is the one they internalized during their youth. Parents? Friends?
I believe that’s human, good and perhaps to an extent “WEIRD”. I remember reading an old quote of an amerindian chief talking of the unrest in the eyes of europeans, also how Thomas Jefferson (or was it Franklin?) talked of the indolence of amerindians.
Culture is an internalization of the power/hierarchy in place/followed natural or not. As is everything else of social nature, pretty much everything else manmade. (In theory not science, that’s what I love about it. If you take out the “In theory” and the human nature of scientists.)
http://www.paulgraham.com/nerds.html
His argument boils down to nerd kids being exceptionally smart, and caring much more about being smart than being popular, hence failing at the latter.
I think this argument is overly general, as it can be applied to any kind of excellence: jock kids are exceptionally athletic, and they care much more about being athletic than being popular; hence, according to Graham’s argument, they should fail at being popular, while in the American school system they succeed.
I wonder whether this “popular jock, unpopular nerd” phenomenon is specific to the American, and perhaps to a lesser extent Western, culture. AFAIK, in East Asian cultures such as Japan and South Korea, school popularity is positively correlated with scholastic performance, probably with good reason, since in these countries scholastic performance is highly correlated with future income and social status.
The closest Japanese equivalent to the Western ‘nerd’ or ‘geek’ is the ‘otaku’. The word otaku typically refers to social ineptitude, an excessive fixation on pop culture items such as manga, anime, videogames and associated paraphernalia, and a general tendency to withdraw from normal social interactions and escape to a fantasy world.
While perhaps many Western nerds can be considered otaku or near-otaku, Japanese otaku are not, in general, nerds, in the Western meaning of “socially awkward smart person”. I don’t know about IQ scores, but AFAIK, otaku usually have lower-than-average scholastic performance.
I suppose that escapism is the result of social isolation, which results from underperforming in whatever measure of success your local society values. Different societies value different things.
It’s true that athletics are very demanding (I remember vividly the absurd amounts of time my high school’s football team demanded of its members), but in practice, athletics does seem to somehow escape the double-bind of ‘you cannot serve two masters’.
Is it general physical fitness and attractiveness? Yes, I bet that’s part of it (although it makes one wonder if there’s a causation/correlation confusion). Is it immediate advantages from intimidation due to physical size? I remember the football players at my high school benefited a bit from this, from simply being huge, but it doesn’t seem adequate. Is it the tribal nature of sports, in warring against the enemy school, where athletics short-circuits the need to earn popularity the hard way by players wrapping themselves in the proverbial flag? It’d explain why competitive sports like football seem to elicit the most admiration for their athletes (and huge donations from alumni), while various track and field events are ignored by most students. I like this as the biggest factor.
If I were going the correlation route, I’d probably appeal to the same excuses universities make in choosing on non-academic merits: the kids who do aggressive sports are generally more likely to succeed spectacularly in business or life and earn lots of money which they can donate back. (Consider the Terman study which found massive lifetime income returns to being extraverted.) So when the girls flock helplessly around the football team, making them ‘popular’ even though they are specializing in football and not ‘being popular’, they are executing an effective choice of future allies and boyfriends. (How many football stars marry their highschool sweetheart and go on to success...?)
I can only speak from my extensive anime-watching experience (he said, self-mockingly), but I get the impression that athletics is a great way to popularity and girls in Japan as well. Yes, the ‘ideal student’ archetype will be great at sports and academics, but that’s true in the US as well, and it seems that if you can’t have both, better to go with sports.
FWIW, it did not exist at the schools I went to in Edinburgh, Scotland, in the 1960s, nor at university (Edinburgh and Oxford) in the 70s. There were sports; some excelled in them and some didn’t, like anything else. In my later years at school, one of the options for sports (a compulsory subject for all) was chess. From over here, the jock/nerd thing looks like an exclusively American phenomenon that only exists elsewhere, where it exists at all, by contagion from the original source. “Jock” is an American word. I don’t see it used here.
For that matter, the idea of the “popularity totem pole” didn’t exist either. Everyone had their own circle of friends. There was no such thing as being “popular”. I have no idea what it’s like in British schools these days, but “popular” in that specific sense isn’t a concept I hear used.
See the comments to this post.
I believe there’s a significant probability of economic collapse in large developed countries in the next fifty years. (Possibilities: fiscal collapse, default, financial crash resulting in a true depression.) I believe that it’s worth effort and money to plan for this eventuality.
I believe that choosing to focus attention on uplifting things is the most practical use of one’s mind. (This is more controversial than it sounds: it means placing a noticeably higher value on high culture than low culture, and it means that making cynical observations corrodes most people’s ability to be productive.)
I believe that the personal really is political. That is, many “political” isms are actually total sets of values about interpersonal relationships and the good life. So you can’t really talk about values and ethics without ever bringing up contemporary politics, because often people’s personal creeds in daily life actually are libertarian, feminist, conservative, socialist, etc. Therefore rules like “don’t talk politics” imply that we don’t talk about values either.
This would make an awesome Edge topic if they could offer sufficient assurance of anonymous answers.
Their 2005 annual question was pretty close to this one and has many fascinating answers: What do you believe is true even though you cannot prove it?
It wasn’t anonymous or pseudo-anonymous though.
Almost everything we do is partially influenced by status-seeking.
We know.
Yeah, after I submitted it I realized that people would agree with me on that. But I decided not to delete it anyway.
At least one European country will have jailed one of its citizens for criticizing Islam before 2013 comes around.
too late
edit: It depends on your definition of “criticizing” I guess. Even so I bet there’s at least one example, in some European country.
“too late”
That is also shocking…
The vast majority of held beliefs are not only wrong and unjustified, but unjustifiable.
If a belief can’t be justified, it shouldn’t be held and it definitely shouldn’t effect your actions.
Depending on your definition of “justified” and “justifiable”, you may run into the problem that eventually your beliefs depend on other beliefs that depend on other beliefs, and so on until you reach an axiom. And this axiom may be “unjustifiable” or “unjustified”, but you “need” to believe it in order to have any beliefs at all.
One such axiom may be that your brain is “sane”, in the sense that when it tries to use logic to reason about something, you can trust the conclusion. For example, let’s say your thoughts are “All A are B. C is an A. Therefore C is a B.” Can you trust the conclusion that C is a B? Well, maybe you might revisit your thoughts, starting from the first statement “All A are B”. But wait, was that really the first statement? Here you’re relying on your memory, that you can correctly remember what you were thinking about just 2 seconds ago. Is that belief justified? How could you know?
Were you waiting for someone to try correcting “effect” to “affect” so that you could play this trick on them?
The nature of reality will turn out to be very different from what most people imagine. Supernatural events occur in the world, and supernatural beings walk among us, but they are very rare.
HalFinney: “The nature of reality will turn out to be very different from what most people imagine. Supernatural events occur in the world, and supernatural beings walk among us, but they are very rare.”
Thirty years ago I was playing a game of Risk with two friends. The rivalry between the two meant that I would usually win. In that game I had an overwhelming advantage. I had 26 armies and was attacking the last army of the last territory of one of my opponents. (His captured cards plus mine would give me enough additional armies to defeat my remaining opponent.) My opponent told me that he usually let me win, but not this time. He’d never said anything similar before. I remember thinking to myself, “Fat lot you have to say about it, fella.” I rolled three dice against his one. After losing several rolls, I asked that he use a dice cup and he complied. I lost 25 times in a row. It was my Risk game and my dice in my apartment.
23 battles with 3 attack dice against one defender die: the defender wins 34.03% of the time. 1 battle with 2 attack dice against one defender die: the defender wins 42.13% of the time. 1 battle with 1 attack die against one defender die: the defender wins 58.33% of the time. The defender wins every battle with probability 0.5833 × 0.4213 × (0.3403^23) ≈ 4.2 × 10^-12.
Assuming my description of the event is correct (i.e., fair die, fair rolls, accurate memory, etc.) then my opponent would be expected to win about 1 out of a 100 billion such battles. (I doubt 100 billion Risk games have been played throughout all history.)
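Those per-battle figures can be checked by brute-force enumeration under standard Risk rules, where the defender’s single die wins ties against the attacker’s highest die (a sketch, not anything from the original thread):

```python
from itertools import product

def defender_win_prob(n_attack_dice):
    """P(defender's one fair die >= best of the attacker's fair dice)."""
    wins = total = 0
    for rolls in product(range(1, 7), repeat=n_attack_dice + 1):
        *attack, defend = rolls
        total += 1
        if defend >= max(attack):  # ties go to the defender
            wins += 1
    return wins / total

p3 = defender_win_prob(3)  # 441/1296 ~ 0.3403
p2 = defender_win_prob(2)  # 91/216  ~ 0.4213
p1 = defender_win_prob(1)  # 21/36   ~ 0.5833

# With 26 attacking armies: 23 three-die battles (26 armies down to 4),
# then one two-die battle, then one one-die battle = 25 straight losses.
p_streak = p3**23 * p2 * p1  # ~ 4.2e-12
```

This reproduces the 4.2 × 10^-12 figure exactly, so the calculation in the comment is sound given its assumptions of fair dice and accurate memory.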
I decided it was more likely that my understanding of the universe was flawed than that I had witnessed such a rare event. I discussed the event with fellow math graduate students. A couple of them wondered how I, as a scientist, could even question the standard probabilistic model. My response was, “As scientists, how much evidence would you need before you were willing to question your prior beliefs?”
That experience led me to conclude that reality is far weirder than I had imagined. Strange things do happen for which I have no scientific explanation.
Mostly, of course, my response is that I feel confused; therefore, I deny that this event ever actually happened.
But if you’re being honest and telling the truth as you know it and you remember accurately, then the next step is to consult a stage magician, not math grad students or a physicist or a theologian.
Same goes for anyone in the audience who’s witnessed a bended spoon, an improbably guessed sequence of cards, etc.
Putting on my magician’s hat for a moment, that sounds like a magic trick to me.
Given your description, the simplest answer consistent with the laws of physics is that another player switched the dice when you weren’t looking. Perhaps you stopped the game briefly to take a restroom break or answer the phone or deal with some other interruption. Your memory tends to edit breaks like that out of the narrative flow, especially if they don’t seem relevant to the story. Somehow, the other player had the opportunity to switch the dice. Dice can be gimmicked in a variety of ways—they could be weighted, shaved, or simply printed with the wrong dot pattern—using a die cup wouldn’t interfere with any of these. You’d played the same opponent before so he knew which type of dice you used; he could have brought fake dice of that type with him, swapped them in during the now-forgotten distraction, and swapped them back later. It’s even possible the two friends were working together to play this joke on you, with one providing the distraction while the other made the switch.
At the moment he looked at you and said “not this time”, the switch had already been made.
re: Magician’s Trick
My friends had the opportunity to trick me since we regularly played Risk (and I would have been highly amused if they had done so). Since the dice were mine and were distinctive they would have had to get trick dice that matched my own. Then they would have had to wait for the right game opportunity, e.g., my 26 armies against my opponent’s last remaining army on his last territory. Knowing my friends very well, it doesn’t seem likely to me that they would go to all that trouble and then never laugh about how they fooled me.
My friends didn’t appear all that surprised by the event. Both believed in “luck” and neither had a mathematical understanding of just how rare such a “chance” event would be. I interacted on a daily basis with these friends for several more years and they consistently expressed the view that it had been a “lucky run”, unusual but nothing earth shaking. My impression was that they viewed it as a one in a thousand event consistent with their belief in lucky people and lucky streaks. To me it was amazing because I didn’t believe in “lucky people” and could calculate how unlikely such an event was. (“Rare” events might happen frequently and pass relatively unnoticed because people just can’t calculate how unlikely the events really are.)
I have difficulty believing that trick dice would work well enough to fool me in this particular case. My opponent didn’t roll a string of sixes. He beat me with sixes, fives, fours, threes, and even a two. (The two sticks in my mind because at the time I thought to myself that I seemed to be trying to lose.) We are talking about a trick die that occasionally rolled every number except a 1 but still managed to beat or tie my best die 25 times in a row. That is unbelievable control of little plastic cubes, considering we were rolling at the same time using dice cups.
I have no explanation for that event. I never saw my friend do anything similar before or after and I really don’t think he had anything to do with it. In my opinion the three of us were observers in something strange but none of us were really in control. I don’t attribute it to luck or psychic powers.
PS. If I were reading some anonymous poster describing this event on the Internet, I’d assume he was lying, was delusional, had been tricked, or was badly mis-remembering the event. However, people who have personally experienced something similar might get something out of my description.
So I’m not a mathematician, but we note the outcomes of chance events all the time, probably thousands to tens of thousands of times in your life depending on how much gaming you do. Given about 1000 low-likelihood events per person over their lifetime (which I’m basically making up, but I think it’s conservative), 1 in 100 million people should experience a 1-in-100-billion event, right? So basically there might be two other people with stories like yours living in the US. It is definitely a neat story, but I don’t think it’s the kind of thing we should never have expected to happen. It’s not like the quantum tunneling of macroscopic objects or anything.
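The base-rate arithmetic here is easy to make explicit (all three inputs are the commenter’s own rough guesses, not measured data):

```python
# Assumed numbers from the comment, not measurements.
events_per_person = 1_000        # noticed low-likelihood events per lifetime
us_population = 300_000_000      # rough US population
event_prob = 1e-11               # a 1-in-100-billion event

# Expected number of people in the US who witness such an event.
expected_witnesses = events_per_person * us_population * event_prob  # ~3
```

So under these guesses, a handful of such stories existing in the US is the expected outcome, not a miracle.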
1000 is extremely conservative. Every time you play any game with an element of chance—risk, backgammon, poker, scrabble, blackjack, or even just flipping a coin—the odds against you getting the exact sequence of outcomes you do get will be astronomical. So the limiting factor on how many unbelievable outcomes you perceive in a lifetime is how good you are at recognizing patterns as “unusual”. Somebody who studied numerology or had “lucky numbers” or paid attention to “lucky streaks” would see them all the time.
In the case at hand, that same series of rolls would be just as unlikely if it had happened at the beginning of the game or in the middle or spread throughout the match and hadn’t determined the outcome. Unless there was something special about this particular game that made its outcome matter—perhaps it was being televised, there was a million dollars bet on it, or it was otherwise your last chance to achieve some important outcome—the main thing that makes that sequence of rolls more noteworthy than any other sequence of rolls of equivalent length is selection bias, not degree of unlikeliness.
“the odds against you getting the exact sequence of outcomes you do get will be astronomical”
People notice and remember things they care about. Usually people care whether they win or lose, not the exact sequence of moves that produced the result. For an event to register as unusual a person must care about the outcome and recognize that the outcome is rare. The Risk game was special because I cared enough about the outcome to notice that I was losing, because the outcome (of losing) with 26 vs. 1 armies was incredibly unlikely, and because I could calculate the odds against such an outcome occurring due to chance.
re: Recognizing low probability events.
During an eighth grade science class in Oklahoma, my older sister was watching as her teacher gave a slide presentation of his former job as a forest ranger. One of the first slides was a picture of the Yellowstone National Park entrance sign. Four young children were climbing on the sign, and parked next to the sign was a green Ford Mercury. My sister jumped out of her chair yelling, “That’s us.” Sure enough, that picture had captured a chance encounter years ago, far away, before my sister and her teacher had ever met. (A couple of years later I took the same class and saw the same slide. I would never have noticed our family climbing on that sign if I hadn’t remembered my sister describing her classroom experience at the dinner table.)
So very unlikely events do occur. However people are seldom in a position to both notice the event and calculate just how rare the event really is.
“So basically there might be two other people with stories like yours living in the US.”
Yes. The event has significance to me only because it happened to me. I would significantly discount the event if I heard about it second hand.
Why in the world should who the event happens to make a difference? This is anthropic bias. The fact is, if these things happen at all, they’re going to happen to someone. The fact that it was you isn’t significant in any way.
“Why in the world should who the event happens to make a difference?”
I question the surface view of the world and the universe. E.g., I wouldn’t be greatly surprised to discover that “I” am a character in a game. To the extent that I understand reality, my “evidence model” is centered on myself and diminishes as the distance from that center increases.
In the center I have my own memories combined with my direct sensory perception of my immediate environment. I also have my internal mental model of myself. This model helps me evaluate the reliability of my memories and thoughts. E.g., I know that my memory is less consistent than information that I store on my computer and then directly access with my senses. I also observe myself making typing errors, spelling errors, and reasoning errors. Hence, I only moderately trust what my own mind thinks and recalls. (On science topics my internal beliefs are fairly consistent with information I receive from outside myself. On religious and political topics, not so much.)
Friends, family, and co-workers fill the next ring. I would treat second hand evidence from them as slightly less reliable and slightly less meaningful. Next would be friends of friends. Then US citizens. Then humans. The importance I place on events and evidence decreases as my connection to the person decreases. Some humans are in small, important sets, while others are in very large, unimportant sets. That some human won the lottery isn’t unusual. That I won the lottery is. Of course to some guy in India, my winning the lottery wouldn’t be special because he has no special connection to me.
If I won a 1-in-100 million lottery I would adjust my beliefs as to the nature of reality somewhat. I would decrease my belief that reality is mundane and increase my belief that reality is strange.
When you say “that is unbelievable control”, you seem to be assuming the exact outcome with trick dice would be exactly and entirely predetermined. But there’s no reason to think that. The trick dice would only have to make winning much more likely to pull your “impossible” odds down into the realm of the possible. What you describe as a die that “occasionally rolled every number except a 1” is what you would expect to see if the “1″ side were weighted a bit—it would often roll a 6, sometimes roll a number adjacent to 6, and never roll 1. Contrariwise, it’s possible that the three dice facing it could have been rigged to do poorly. If a die with the “1” side weighted faced three dice with the 6 side weighted, that could do the trick.
Some amount of dice rigging could make your loss expected or reasonably likely but not guaranteed. And yes, it's unlikely your friend would (a) weight your dice, (b) waste this ability on a meaningless game of Risk, and (c) keep up the act all these years, but it's not 1-in-100-billion unlikely. People playing little tricks or experimenting on their friends is something that does happen in the world as we know it; therefore it could have happened to you.
Though I like Jack’s explanation too.
″...it would often roll a 6, sometimes roll a number adjacent to 6...”
Assuming standard probability applied to my three dice, the odds of my rolling at least one 6 are 1 - (5/6)^3, or approximately 0.4. Assume that the "trick" die rolls a 6 half the time. (Remember, I was watching as my opponent also rolled 5s, 4s, and 3s.) Then the probability that I would win a battle is at least 0.4 x 0.5 = 0.2. The attacker's odds are actually higher, since the attacker would usually win if the defender rolls anything but a 6. My estimate is that with the trick die, the defender would win with frequency around 0.6. So the probability that the defender would win 24 battles is around 0.6^24, or about 1-in-100,000.
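For what it's worth, this estimate is easy to check exactly rather than by hand. A minimal sketch, where the trick-die distribution (a 6 half the time, otherwise uniform over 1-5) and the three fair attacker dice are assumptions read off the comment above:

```python
# Sketch of the estimate above. Assumptions: the trick die rolls 6 with
# probability 0.5 and is otherwise uniform over 1-5; the attacker's three
# dice are fair; the defender wins ties, as in Risk.

def defender_win_prob():
    # P(defender shows d) for the hypothetical trick die
    p_def = {d: 0.1 for d in range(1, 6)}
    p_def[6] = 0.5
    # Defender wins iff all three fair attack dice are <= d: (d/6)**3
    return sum(p * (d / 6) ** 3 for d, p in p_def.items())

p = defender_win_prob()
print(p)        # ≈ 0.604, matching the "around 0.6" estimate
print(p ** 24)  # a few in a million, the same order as the 1-in-100,000 figure
```

This gives a per-battle defender win probability of about 0.604, and a 24-battle streak probability of the same order of magnitude as the rough figure above.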
“And yes, it’s unlikely your friend would (a) weight your dice, (b) waste this ability on a meaningless game of risk, and (c) keep up the act all these years, but it’s not 1-in-100-billion unlikely.”
There is also (d): even with a "trick" die, the event would only be expected to happen with frequency 1-in-100,000. Now combine that low probability with the low probabilities of (a), (b), and (c) also being true. I agree that it is more likely that (a), (b), (c), and (d) are all true than that a 1-in-100-billion event happened. However, I'm not claiming a 1-in-100-billion event happened. I'm claiming that it is more likely that something unknown occurred, i.e., I have no scientific explanation for the event.
Yes, you do: all four dice were weighted. You did your math assuming only one of them was weighted, but if they all were, then the event you saw wasn't unlikely at all. Assume that a weighted die rolls the side that it favors with probability p, each of the sides adjacent to it with probability (1-p)/4, and never rolls the side opposite the favored side. How strongly weighted do the dice have to be (that is, what should p be) so that 26 consecutive victories for the defender are assured?
The defender automatically wins on a 5 or 6, which come up with probability p + (1-p)/4. If the defender rolls a 2, then for the defender to win, each of the attacker’s dice must either be a 1 (which it is with probability p) or a 2 (with probability (1-p)/4), so the defender wins in this case with probability (p+(1-p)/4)^3. The cases where the defender rolls a 3 or 4 are similar. Summing all the cases, we get that the defender wins with probability
p + (1-p)/4 + (1-p)/4 * ((p+(1-p)(3/4))^3 + (p+(1-p)(2/4))^3 + (p+(1-p)(1/4))^3)
Which simplifies to
(1/64)(-9p^4-6p^3+54p+25)
To win 26 times in a row with 50% probability, the defender would have to win each battle with probability 0.974. To win 26 times in a row with 95% probability, the defender would have to win each battle with probability 0.998.
(1/64)(-9p^4-6p^3+54p+25) > .974 --> p > .841
(1/64)(-9p^4-6p^3+54p+25) > .998 --> p > .958
In other words, to produce the event you saw with 50% reliability would require weighted dice that worked 84% of the time. To produce the event you saw with 95% reliability would require weighted dice that worked 96% of the time. I’m unable to find any good statistics on the reliability of weighted dice, but 84% sounds about right.
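Both the case-by-case sum and the simplified closed form above are straightforward to verify numerically. A quick sketch under the same weighted-die model (favored side with probability p, each adjacent side with probability (1-p)/4, opposite side never):

```python
# Defender's single die favors 6; each of the attacker's three dice favors 1
# (so they never show a 6). Defender wins ties, as in Risk.

def win_prob(p):
    q = (1 - p) / 4  # probability of each side adjacent to the favored side
    # Defender's 5 or 6: automatic win; defender's d in {2, 3, 4}: all three
    # attacker dice must be <= d.
    return p + q + q * ((p + 3*q)**3 + (p + 2*q)**3 + (p + q)**3)

def poly(p):
    """The simplified closed form quoted above."""
    return (-9 * p**4 - 6 * p**3 + 54 * p + 25) / 64

# The two expressions agree across the whole range of p:
for p in (0.0, 0.25, 0.5, 0.841, 0.958, 1.0):
    assert abs(win_prob(p) - poly(p)) < 1e-12

print(round(poly(0.841), 3))        # per-battle win probability at p = 0.841
print(round(poly(0.841) ** 26, 2))  # chance of 26 straight defender wins
```

This reproduces the 0.974 per-battle figure at p = 0.841, and a roughly even chance of the 26-win streak at that weighting.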
"I'm unable to find any good statistics on the reliability of weighted dice, but 84% sounds about right."
Here is a set of loaded dice for sale that is advertised to roll a seven (6 on one, 1 on the other, I think) 80% of the time.
“all four dice were weighted”
I used three reddish, semi-transparent plastic dice with white dots (as I always did). My opponent used standard opaque, plastic ivory dice with black dots. I noticed nothing unusual about the dice, and by the end of the run I was closely examining the dice, cups, and methods of rolling.
“Assume that a weighted die rolls the side that it favors with probability p, each of the sides adjacent to it with probability (1-p)/4, and never rolls the side opposite the favored side.”
This assumption does not match my recollection of the dice rolls. As I stated previously, I rolled 6s, 5s, 4s, 3s, 2s, and 1s. I also never rolled 1,1,1, which should have happened frequently if my dice were heavily weighted to roll 1s. Nor do I remember rolling large numbers of 1s.
Your probability model for a trick die also fails to match my observations of my opponent's die rolls. E.g., in your model my opponent would be expected to roll similar numbers of 5s, 4s, 3s, and 2s. However, he only rolled a 2 once, and he rolled far more 5s than 3s.
Besides, with your probability model for trick dice, I would easily have noticed if my opponent rolled a 6 84% of the time and I never rolled a 6 at all.
P.S. You used 26 in the above calculation. I had 26 armies, and in Risk the attacker must have at least 4 armies to roll three attack dice. So the 3-vs-1 dice scenario only happened 23 times.
[looks up how reliable human memory is, how it changes at every recall, how overconfident we tend to be about it]
[looks up a couple of conjuring sites]
Hmm...
Hmm...
I think there might be non-supernatural explanations with a greater than 1 in 100 billion chance. If it’s even 1 in a million, you’ll expect to see at least one in 30 years.
http://en.wikipedia.org/wiki/Littlewood%27s_Law_of_Miracles
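Littlewood's figure is a back-of-the-envelope count. A sketch, where the one-event-per-waking-second rate is Littlewood's own assumption:

```python
# Littlewood's law: count "events" at one per second during eight waking
# hours a day, and ask how often a one-in-a-million event should turn up.

ODDS = 1e-6
events_per_day = 8 * 60 * 60   # one observation per waking second (assumed)
n = events_per_day * 30 * 365  # thirty years of observations

expected = n * ODDS            # expected number of one-in-a-million events
p_none = (1 - ODDS) ** n       # chance of seeing none at all

print(expected)   # hundreds of such events over 30 years
print(1 - p_none)
```

On these assumptions, a million-to-one event is not just expected once in 30 years but hundreds of times, which is the point of the linked law.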
Second the surprise. What do you believe and why do you believe it? Ordinarily I wouldn’t even bother asking, but with you I’m not expecting to hear the usual things.
Why do you believe so, and what do you mean by supernatural?
You surprise me, Hal. You are usually so sensible... so I guess you are still being sensible... please explain.
It’s bad enough that voting works, here. Let’s try not to badger people about their beliefs, just ask for further clarification.
Yes, I know that I’m as guilty as anyone else about badgering. Nevertheless...
From OB: some pet downtrodden areas of science, according to me:
Panspermia, clay origin of life, digital physics, AAH, MaxEnt, simulism, the adapted universe, memetics. I have lots of “unusual” views about the future as well, but—AFAICS—many of those seem to be not so out of place here.
What is AAH?
I assume Tim means the Aquatic Ape Hypothesis for which I’ve quite a fondness myself.
I suspect a mild variant of the AAH is probably true. There is a problem that needs solving: why can humans swim (and many do so for fun), while other hominoids avoid water?
I doubt if there was a ever a time in prehistory where the ancestors of humans were all living an aquatic lifestyle, but there must have been many occasions where a band of pre-humans lived on the shores of a lake or ocean. Some of these individuals would have used the water to find food or escape from predators, and I hypothesize that this happened often enough that evolution would have changed our psychology and anatomy accordingly.
Cryonics may be so expensive and so unlikely to succeed that it might be bad utilitarianism to sign up.
Having somebody be a Big Damn Hero may actually be a good thing.
Bayes Theorem itself is INCREDIBLY poorly explained on this site.
While FOOM and the AI-Box problem (leading to an AI acting as a social and potentially economic agent) are possible and make Friendliness or Fettering important, most singularitarians VASTLY overestimate the speed and ease with which even an incredibly powerful AI can generate the nigh-godlike nanoconstructor swarms (I see barely plausible ideas about biological FOOMs from time to time), and in particular the difficulty of technicians trying to resecure an unboxed but still communication-restricted AI. That doesn't mean I think this stuff is impossible, or even that an AI can't gain a lot of power and comms and manipulators in a short time, but I think that LWers (who often seem to come from software or cogsci backgrounds, compared to my Mechanical Engineering) have a tendency to stop considering hardware-related issues past a certain point.
Many singularitarians have a bias toward expecting a singularity in their own lifetime or shortly after it. (I assign a single-digit percentage probability to a singularity before 2100, and something like 25-40% in the next 500 years.)
Old Culture gets way too little credit, but most of the people who realize this or appear to realize this are reactionaries who either can't imagine different, much better Old Cultures or are neither utilitarian nor consensual with respect to participation in said cultures.
I’m not sure what you mean by this.
There is no such thing as a consumer-driven economy.
meaning what?
There is no such thing as a free market.
Yeah there is, they are just really small. Just the other day I asked if someone would come in on their day off from work in order to cover for me. I paid them, and they performed the service. All this went down without any government intervention, coercion, or use of force.
If you mean that there is not a single country on Earth that contains ONLY free markets then you are absolutely right.
I see a dilemma here.
If I think of your transaction in isolation, it’s free but not a market: it’s a bargaining problem.
If I think of your transaction as part of the broader labour market, it’s not really free; it’s influenced by government regulations & macroeconomic policies, if only through their effects on the general price level, the general wage level, and the supply & demand for labour.
I reckon your transaction is an example of what mtraven’s talking about rather than a counterexample!
That the psychoanalytic theory of psychodynamics is in some sense true, and that it is a useful way to approach the mind. My belief comes from personal experience in psychotherapy, albeit a quite unorthodox one. I have found that explanations in Freudian terms such as the unconscious, ego, superego, Eros and Thanatos help to greatly clarify my mental life in a way that is not only extremely useful but also seems quite accurate.
I should clarify that I reject just about everything to come out of academic psychoanalytic theory, especially in literary theory (I’m an English major), and that most clinicians fail to correlate it with real mental phenomena. I know that this sounds—and should sound—extremely suspect to any rationalist. But a particular therapist has convinced me very strongly that she is selling something real, not only from my personal experience in therapy, but in how she successfully treats extremely successful people and how I don’t know anyone who wins at life quite so hard as she does.
I don’t believe that any concept, including the concept of reality, makes sense to you outside the context of your own epistemic framework. When one thinks that the reality exists on its own, it is a statement made from within that person’s epistemic framework. When you tell me that the reality exists on its own, I understand this statement from within my epistemic framework. When I believe that you believe that the reality exists on its own, I interpret my model of yourself as having a property of having a “belief in reality existing on its own”. Even when I think of myself as believing something, I interpret myself as having a property of believing that.
The quotation marks must be put around everything, there is no escaping above the first level of indirection. The problem of induction is a wrong question.
Me:
I think I made a step towards resolving this confusion. The problem was in conflating the specific, real-world algorithms running in a mind and performing the interpretation of facts with the ideal model of the world. The ideal model is what we see as objective reality: the abstraction via which the facts should be interpreted, which is to say, what the facts really mean, even when we see a mind that goes in the opposite direction. The mind does a subjectively objective computation, while the reality is the ideal counterpart of that mind, the same way there is an ideal morality counterpart of a mind, even though it's not merely the preference of a specific brain, and may depend on any other aspect of reality.
Since the ideal model is global, and it's not a mind, there is no point in saying that reality must be interpreted through a mind: there is no real dichotomy, there is no requirement for a mind, and the reality that the model describes doesn't even need to contain any minds. The model is math.
Nearly everyone, including rationalists, atheists, and even transhumanists, believes in the soul theory. I don't. Oh, they'll say they don't believe in souls, but when you really get down to it and examine the inner workings of their beliefs, it's rather obvious that they do. In fact, I have never in my life met, in person or on the Internet, anybody who was like me and really didn't believe in the soul; there are some authors who I've not had the pleasure to meet that agree with me, but very, very few. It's the last stand of Vitalism, and if this lethal meme is not overcome, the Singularity will kill you dead.
Can you be more specific? What do you mean by soul, and why do you think this belief will be harmful?
Suppose I had a machine that could record the position and velocity of every atom in your body to the limit Mr. Heisenberg's uncertainty principle allows, and then I blew up your body with an H-bomb. I now use that information and different atoms to assemble another body. Would you have died in that H-bomb blast?
I don't know you, but I can say that it is virtually certain you will say yes, you are dead, because that is what virtually everybody says, even those who pride themselves on unconventional views and on thinking outside of the box. They say that because they believe in the soul. They won't admit it, of course, not even to themselves, and it's true they do renounce the word "soul", but the trouble is they just can't renounce the idea of a soul. I can.
As for why that idea will be lethal after the Singularity; if the Jupiter Brains that exist at that point decide to let you live it certainly won’t be at the same level as their hardware, they would be far too squeamish for that and for good reason, it would be like letting a monkey run around an operating room. If they let you live it will be as an upload, if you refuse that because you think it would be the equivalent to death then your superstition will have killed you. It wouldn’t be the first time a superstition has proven lethal.
John, most people on LW already know this, better than you do if you’re still talking about “different atoms”.
No Eliezer, you are quite wrong; most people on LW do NOT know that.
I debated with myself before using the term “different atoms” when the scientific method is entirely unable to determine if they are really “different” or not, in the end I decided to be generous; even assuming that the atoms were in some obscure very mystical sense “different” would the resulting being be you? Nearly every person on planet Earth would say no, probably even you; but I say yes because I believe the idea behind and not just the word “soul” is BULLSHIT. Almost nobody agrees with me about that, I’m right nevertheless.
John, you have a lot of reading to catch up on before you start preaching to the choir. I know you’re an old hand in the transhumanist community, but if you haven’t been following along on Overcoming Bias, you’re not competent to debate with this crowd. Most people on Less Wrong do know that identity doesn’t follow atoms, because this issue was discussed at great length on Overcoming Bias and resolved. So far as we’re concerned, your pet battle is over, your side won. You can stop fighting now, at least here.
Things like this are the reason I was reluctant to mention LW on any transhumanist mailing list that hadn’t been following along on OB. This is not a forum for transhumanists, it is a forum for people trying to master the art of rationality. They know identity doesn’t follow atoms. We are way, way ahead of you on the reductionist thing. You need to catch up on your reading. In this forum, the upload wars are over and the uploads won. End of story.
Well then, since I am so intellectually deficient in this genius group then prove me to be a fool to the amusement of all, I mean proving somebody to be stupid is one of the great pleasures of life, so do so. Come on Mr. Brains, Mr. Intellectual superior, I dare you to try!
You know something, I simply don't believe you. For about two decades I have been debating this matter and have never found one person who agreed with me, not one. Now you say you have found hundreds; I say bullshit. Hell, if we ever got down to the nitty-gritty, I doubt even you would agree with me, because you must believe in the soul. I mean, I know for a fact that you believe in the friendly AI idea, and the only way that idea is not so stupid as to be strangled by unrestrained giggles is if you believe in some sorry, ridiculous permutation of the silly old soul idea.
Yea yea I know, you say you don’t believe in the soul blah blah blah. But you do believe that a slave, sorry I should have said friendly, AI is possible; that tells me what you really think.
All readers, please vote up this comment if you believe identity doesn’t follow atoms. Vote down this comment if you believe that it does. I’ll take my licks if I’m wrong, go on and hit me.
This is shameless Karma whoring. We should ban this user.
Damn right! Eliezer, you should donate that karma to the Singularity Institute.
I think he is implying that we think we agree when we don't really; in that case he would expect us to vote in agreement with you.
Actually, I’m worried he’s having some kind of breakdown. The Eternally Recurring Personal Identity Wars had plenty of arguers on both sides. JKC was there. Him now talking like he’s the only one who ever believed that deconstruction and reconstruction using “different atoms” preserves identity, may indicate that the Personal Identity Wars really literally did send him off the edge.
I have to say, this is a failure mode I’ve never encountered before:
“You won! It’s over! Look, we all agree with you!”
“NO! IT IS NOT OVER! I AM THE ONLY PERSON ON THIS SIDE AND I AM STILL LOSING, DAMMIT!”
Have you really never seen this before? I actually find that I myself struggle with it. When you define yourself as the plucky outsider it’s difficult and almost unsatisfying when you conclusively win the argument. It ruins your self-identity because you’re now just a mainstream thinker.
I’ve heard of similar stories when people are cured of various terminal diseases. The disease becomes so central to their definition of self that to be cured makes them feel slightly lost.
I haven’t seen it before. Maybe if you counted Stephen J. Gould, but I expect he was lying more than crazy.
I guess most of the people I know are, shall we say, secure enough in their identity as iconoclasts, that they can enjoy winning any particular argument without fear.
Hadn’t heard about the case of the terminal diseases, either.
No you’re not, you’re not worried about that at all; you’re trying to be amusing, and doing a damn poor job of it too.
I am a man capable of getting into details; I like details, and if truth be told I'm rather good at them. I think details are important. So name one person who agreed with me other than superficially. Come on, name one! Certainly not you; you believe in the childish friendly AI idea, and the only hope of salvaging that is some ugly mutation of the soul idea.
And Eliezer although I’ve said you were wrong and even (perhaps going too far) implied that you were stupid (you have faults but stupidity is not among them) I never in my life said that you were insane as you just said about me.
What ramifications and consequences of the ‘atoms are not identity’ belief do you think the upvoters of Eliezer are not thinking about? How is their acceptance superficial?
Yes I am worried about that. I don’t remember exactly who was on what side in the Upload Wars, but you certainly weren’t the only one lined up for—not just destructive teleportation—but destructive uploading as a computer program.
Greg Egan thinks that if you die and get restored from a backup a day ago, you’re just losing a day’s memories. Even I’m not sure I’d go that far.
Derek Parfit (in the famous mainstream philosophical classic Reasons and Persons) goes farther than either of us by considering the case of incrementally removing and adding memories.
You’re not alone. You were part of a small army of transhumanist reductionist philosophers including a majority of the big names. You were quite well aware of that at the time. Your current stance of lone heroic defiance honestly seems to me to go over the edge of insanity.
John, if you have a unique opinion, write it up somewhere! If you have a rare opinion, link to explanations from those rare individuals.
In groups of this sort, of course, everybody says identity doesn't follow atoms; the word "soul" is not very trendy at the moment. But when you get down to it, when you start debating with them, you always find that they do think identity follows atoms, always, every fucking time; you do too. I say that last because how else can you explain your idiotic friendly AI belief unless we had a soul and the AI didn't?
I vote down your comment and you vote down mine. Is this the path to enlightenment? At any rate, I freely admit that in this sort of contest you will win and I will not; virtually nobody agrees with me. I am sure you will get an astronomically high rating and I will approach negative infinity. It doesn't matter, I've seen it all before; it doesn't alter the fact that I'm right and you are wrong, dead wrong.
John, I’m not sure what we can say to convince you you’re fighting a battle that doesn’t need to be fought on this ground.
(other than, perhaps, to suggest that you get some sleep)
Would it help if I said I’d happily sign up to be uploaded now, if I knew I could look forward to a similarly rich experience in a mainframe, and perhaps the ability to engineer a body to walk about in from time to time, when the whim took me?
Yes, I'd have to say that would help, if you did say it.
Oh. Well then, yes, yes I would. If it meant I could self-modify, and especially if it meant I could gain a bit of control over my pair-bonding mechanisms, then I would in a heartbeat.
I was in the Eternally Recurring Personal Identity Wars in another (non-transhumanist) forum, and there were plenty of arguers on both sides too. I was (and still am) on the “deconstruction and reconstruction preserves identity” side, even before reading Eliezer’s take on it.
I don’t think that quantum mechanics and the lack of “atom identity” is that important; even in a universe made of little billiard ball atoms with each their own identity, I’d still consider that deconstruction and reconstruction with different atoms doesn’t kill you.
In what is probably an increasing order of controversial beliefs:
Libertarianism is correct, at least in the broader sense of the word (in the sense under which Milton Friedman qualifies as a libertarian). I know this isn’t the most controversial belief, but it’s still a minority belief, according to the 2012 survey.
Productivity (in the sense of “improve your productivity” LW posts) isn’t that important as long as you’re above a certain threshold, the threshold needed to do enough work to support yourself, save for the future, and have money to spend for fun. Excessive optimization for productivity (which describes many productivity posts on LW) leads to a less happy life.
The differences between men and women are overblown and are mostly socially caused. They are not so great that men and women should be treated differently. Normative gender roles should be abolished. Feminism is good.
The arguments commonly presented in favor of vegetarianism/veganism are weak. They presuppose that people care more about animal suffering than they really do (and subtly and unintentionally try to shame those that don’t care as much), and that people are more capable of reducing animal suffering with their dietary choices than they really are.
Human value isn’t irreducibly complex. It boils down to pleasure/happiness. Wireheading is the optimal state.
There is an objective morality (for humans), and it’s ethical egoism.
I’d love to subscribe to your newsletter.
I don’t think what I’m about to post is strictly in keeping with the intended comment material, but I’m posting it here because I think this is where I’ll get the best feedback.
The majority of humans don't have a concrete reason for why they value moral behavior. If you ask a human why they value life or the happiness of others, they'll throw out some token response laden with fallacies, and when pressed they'll respond with something along the lines of "I just feel like it's the right thing". In my case, it's the opposite. I have a rather long list of reasons why not to kill people, starting with the problems that would result if I programmed an AI with those inclinations. Also the desire for people not to kill and torture me. But where other people have a negative inclination to killing people, flaying them alive, etc., I don't. Where other people have a neural framework that encourages empathy and inconsequential intellectual arguments to support this, I have a neural framework that encourages massive levels of suffering in others and intellectual arguments restricting my actions away from my intuitive desires.
On to my point. Understandably, it is rather difficult for me to express this unconventional aspect of myself in fleshy-space (I love that term). So I don’t have any supported ideas of how common non-conventional ethical inclinations are, or how they’re expressed. I wanted to open this up for discussion of our core ethical systems, normative and non-normative. In particular I am interested in seeing if others have similar inclinations to mine and how they deal / don’t deal with them.
(Meta-comment.) These 2009-era comments raise political/controversial points and meta-commentary I associate with latter-day LW, not OG LW, which surprises me a bit. (Examples below.) Given the more recent signs of escalating political tensions on LW, I wouldn’t have expected these older comments to hit the same beats as, say, Multiheaded’s analogous thread from this year, but a bunch did.
It looks like the political/controversial points provoked less argument here than in the 2012 post. I’d guess this is down to increasing political heterogeneity on LW over time, but maybe it’s just because there are more people here now. (Or maybe Multiheaded’s more dramatic framing in the 2012 post primed people to argue more vigorously? Dunno.)
“the most important application of improving rationality is not projects like friendly AI or futarchy, but ordinary politics”
“Forbidden topics!”
“I’ve heard reports that cause me to assign a non-neglible probability on the chance that sexual relations with between children and adults aren’t necessarily as harmful as they may seem.”
“In western societies, it’s an orthodoxy, a moral fashion, to say that sex between children/adolescents and adults is bad. This can be clearly seen because people who argue against the orthodoxy are not criticised for being wrong, but condemned for being bad.”
“within [sic?] human races there are probably genetically-determined differences in intelligence and temperment, [sic] and that these differences partically explain differences in wealth between nations”
“it’s important to not downvote contributors to this survey if they sound honest, but voice silly-sounding or offending opinions”
“That both women and men are far happier living with traditional gender roles. That modern Western women often hold very wrong beliefs about what will make them happy, and have been taught to cling to these false beliefs even in the face of overwhelming personal evidence that they are false.”
“I believe that there are very significant correlations between intelligence and race. [...] I believe that the reasons white people enslaved black people, and not the other way around is due to average intelligence differences.”
“There is a very strong pressure to be “Politically Correct”, and it seems that most beliefs that would be tagged with “Politically Correct” are tagged with that because they cannot be tagged with “Correct”.”
“Men and women think differently. Ditto that modern Western women hold very wrong beliefs about what will make them happy.”
“As a matter of individual rights as well as for a well working society, all information should be absolutely free; there should be no laws on the collection, distribution or use of information. Copyright, Patent and Trademark law are forms of censorship and should be completely abolished.”
“Bearing children is immoral.”
“All discussion of gender relations on LessWrong, OvercomingBias, or any similar forum, will converge on GenderFail.” (This last one’s from April 2010, but still.)
[emphasis added]
Wow. Essentially, they prophesied Elevatorgate.
It isn’t prophecy if you have a large-n sample.
It’s reference class forecasting!
It’s unclear to me that this is that LW specific. If you asked any large sample of western Internet users for anonymous and unaccountable statements of controversial opinions would you get results that are that different? If not, then it’s more a description of the Internet.
The only thing that’s LW specific is the suggestion that the most effective use of rationality is going to be politics.
I guess satt’s point is that back in 2010 that stuff wasn’t discussed outside “Closet survey” and threads like that, whereas more recently people have done that in otherwise regular threads causing some drama and mind-killing (though IMO certain LWers overstate the extent to which this is a problem).
Keeping discussions of potentially mind-killing topics quarantined to specially designated threads may be a superior solution to either banning them altogether or allowing them throughout the site.
My point was more that I had a causal model in my head (much higher proportion of LWers thinking/talking about controversial topics in 2012 → more LW drama in 2012), but realized it was wrong when I read the comments here, felt confused, and noticed I was confused. (It’s a pretty mundane example of noticing confusion but I doubt I’m the only one whose mental model was wrong in this way.)
Coincidentally, I just found a sort of similar post by taw when I was idly Googling “reference class tennis”. It mentions climate change scientists as examples of politicized science, and namedrops “race and IQ, nuclear winter, and pretty much everything in macroeconomics” as times when “such science was completely wrong”. Also, although taw’s ultimate point was actually about reference class forecasting, a lot of the comments focused on his object-level examples of scientific controversy instead. That happened back in 2009 as well.
As for what to do about drama, I’ll hold off on making suggestions. It’s not something top-down policy is likely to fix without unhappy side effects, and LW’s ultimately an entertainment device for me (albeit one that sometimes makes me think). If it turns into something un-fun, I’ll just go and procrastinate with something else.
In a less scientific area—many participants seem to be obsessed with personal immortality projects to me—including things like cryonics and uploading. This is bizarre for me to witness. To my eyes, it seems like a curious muddle over values. An identification with your mind and memes. Biology tells us that the brain is actually a disposable tool—constructed by genes for their own ends. Memes can be mutualists or parasites—and in this case, we are witnessing their more pathogenic side, it seems to me.
If biology told you to jump off a cliff, would you do that?
Accepting our own highest, personal goal to be the propagation of our own genes seems to me to be choosing to remain in slavery to a cruel and stupid tyrant just because our ancestors were forced to.
Questions conditional on counterfactuals are usually not worth addressing.
Nature isn’t “cruel” - see: http://alife.co.uk/essays/evolution_is_good/
It isn’t “slavery” if you want to do it.
I voted your reply up from zero as it didn’t seem low quality, and on this post you shouldn’t be penalized for defending your case however it might seem to others.
I admit nature isn’t actually cruel, it doesn’t feel anything for us at all. I’ll go with the Dawkins line you quote in your essay instead: neither cruel nor kind, but simply callous—indifferent to all suffering, lacking all purpose.
Your essay is full of language like “nature tries its best to make sure that...”, “Nature is interested in...”, “Nature loves...”, “nature actively works towards...”, “nature goes to considerable lengths to...”
I agree that a lot of good and beauty has come out of evolution, but it didn’t do it on purpose!
About a million times I’ve been told not to use anthropomorphic language when discussing biology. And about a million times I’ve replied that such language is used ubiquitously by biologists—and that it is useful and good.
Biologists ubiquitously talk about “selfish genes”, “genetic wisdom”, genes preferring this, genes wanting that—and so on and so forth. Such terminology is unambiguous. The interpretation that biologists think genes are like tiny little people, or that we are visualising nature as some kind of wise old man, is so silly that it is totally ridiculous.
Anthropomorphic and teleological language is fine (IMHO) as long as it doesn’t lead into teleological reasoning. It seems to me that your essay is crossing that line, while also cherry-picking the ways evolution tends to eliminate pain over the ways it tends to increase it.
If you do accept that nature is indifferent to all suffering and lacking all purpose, why would you want to make its purpose your own?
Life isn’t “about” suffering. Happiness and pain are the carrot and the stick which nature uses. I am typically more concerned with what organisms do than I am with how they feel.
I embrace nature’s purposes because it built me to do that. I seem to be relatively resistant to religions—and other memetic infections that would hijack my goal system—presumably in part because my ancestors also exhibited such resistance.
Odd. Nature built me to denigrate it on the internet whenever it does something I don’t agree with. Which of us is the mutant?
Judging by gravity, Nature wants me down. Should I undertake a journey to the center of the Earth?
It seems odd that you should mention the downward force but ignore the corresponding equal-and-opposite upward one.
FWIW, I sometimes counsel not resisting gravity using muscular force—but instead aligning oneself vertically—so that the force can be taken by skeletal structures. It is a similar idea: don’t fight against nature, instead align yourself with it. This is a common theme in Taoism.
http://www.overcomingbias.com/2007/11/adaptation-exec.html
The only thing that matters is whether you want something (in a sufficiently reflective sense of “want”, which is still an unsolved problem). Evolution’s “preferences” are screened off by human preferences, so you should bring evolution into the discussion only where it helps to understand human preferences more deeply, as is the case with, for example, evolutionary psychology.
Maybe I should not be surprised to encounter people that have had their biological goal systems hijacked by memes. History is full of such people.
My impression is that advanced, meme-rich countries—such as Japan—have naturally low birth rates due to such effects.
It appears to me that the smartest and best-educated people are the ones who are the most vulnerable to infection.
In case of Japan, there might be another heuristic at work: “the place is overpopulated, high population means low resources, low resources means less healthy offspring, therefore it might be a good idea to hold off reproduction until I find a less populated place.”—I vaguely remember reading something along these lines about mice, but can’t cite the source.
(Of course I’m not talking about restricting reproduction to conserve resources ‘for the group’).
If you are interested in the topic, there’s a fairly detailed analysis of the origin of the “demographic transition” in the book “Not by Genes Alone”. They mostly finger human culture.
Fertility and intelligence are negatively correlated.
Religiosity and intelligence seem to be negatively correlated.
Therefore all the efforts of Dawkins, Yudkowsky etc. to make the world more rational seem to be futile or at least inefficient. Pretty scary...
Fertility and intelligence may be correlated, but that does not say much about intelligence and birth rate. Just because two things are correlated does not imply causation, and even if they are, there may be unlisted factors which cause results opposite those that would be anticipated with only two factors taken into consideration.
Gah. Classical evolution is over. To clarify: Evolution is real, but it is also glacially slow. Social changes are orders of magnitude faster, and technology faster still.
The odds of selective effects causing any changes whatsoever to human nature before someone rewrites our genome like it is a manuscript in dire need of editing are zero. The time scales are wrong. As far as evolution is concerned, if it takes us 300 years to master genetic engineering, and another 500 before the laws against it stop being enforced, then that is a bullet Darwin cannot dodge. And after that point, evolution is no longer blind.
I don’t believe in male bisexuality, though I do believe in it for women.
You believe it’s much rarer than female bisexuality, or you believe there are literally zero instances? If you met a man who had slept with several men and several women and continued to sleep with both, what would you tend to assume about his sexuality?
Much rarer. I’m not prepared to say it’s totally absent, but I would be skeptical upon meeting a male who claims to be bisexual.
I would say it’s more likely that he’s a) in denial, or b) pushed by society to be with women. I’d say that any man in today’s society (anywhere in the world – be it Sweden or Saudi Arabia) that sleeps with men is most likely gay.
So he’s been out as bisexual for twenty years, slept with dozens of women and dozens of men in that time, is currently sleeping with four different women and showing every sign of enjoying it lots, and you think he’s 100% gay? Are you sure your beliefs about this are paying rent?
Some years ago I was at a party at which someone was holding forth on the same theme… all men are monosexual, those who claim otherwise are being influenced by society rather than by genuine attraction, yadda yadda.
I replied “That makes sense! Take my case, for example: I’ve been in a monogamous same-sex relationship for fifteen years, and everybody knows it, but I claim to be bisexual so my boyfriend and I can maintain homosexual privilege… no, I mean heterosexual privilege… no, wait, I’m sorry, how does that work again?”
That was fun.
I agreed with you as far as this. As for the rest it seems unlikely that our genetics implement a complete jump from male desiring to female desiring with no grey area (sweet spot?) in between.
So why is bisexuality so common in women? Why don’t you think female bisexuals are lesbians in denial, or under duress to sleep with women? I just don’t see how women who have sex with both men and women enjoy it but men who have sex with both men and women are clearly just keeping up appearances.
I am a male bisexual. I believe this with a high level of probability, primarily due to my ability to have erections from naked or sexual pictures of both genders. Also the fact that I have felt heavy romantic interest for both genders would seem to indicate that this is very possible.
If you want documented research done into male bisexuality, look into the research of Alfred Kinsey. He researched all forms of sexuality extensively, and was a male bisexual himself.
Edit: Also, the society I have been raised in has practically no instances of homophobia, so I don’t believe that could be a factor.
From your other comments, I believe you’re confusing “I don’t believe men who say they are bisexual” with “I don’t believe men can be bisexual.”
It’s clear to me that, in American society at least, the majority of bisexual men are to be found among the ranks of men who would never identify as anything but straight, sometimes even to the men they have sex with(!). Conversely, many of the men that DO identify as bisexual are merely finding a graceful way to transition to a homosexual love life.
Thus, that a man who identifies as bisexual is most likely gay may be true (though I doubt it—especially among men who have been out as bisexual for more than, say, 5 years), but it is not an indication that male bisexuality doesn’t exist—only that self-professed bisexuality is scantily coterminous with a bisexual orientation in males.
Being wrong in the way that you are wrong will probably not damage the accuracy of your insight when conversing with individuals about their sexuality (you’ll correctly assign a high probability to his being gay if he says he’s bisexual), but it probably WILL damage that accuracy when analyzing human populations in the abstract (you’ll incorrectly assign a low probability to the existence of large ranks of males who engage in and enjoy sexual relations with both men and women).
As I’ve said elsewhere, possibly even on this thread… if my culture makes it more difficult for men to identify as queer than as straight, then even if sexual orientation varies (like many other things) continuously within the population, I should expect the majority of more-than-negligibly male-oriented men to identify as straight.
If it’s not against the implicit rules of this thread to ask, on what evidence do you believe this?
BTW, it may not be obvious, but I can tell you that ciphergoth is not talking about a hypothetical example.
Define bisexuality.
Voted up from −1 because I want you to clarify. Do you believe that bisexuality is ubiquitous in women, while present but not ubiquitous in men? Or that it is completely absent in men, but present though not ubiquitous in women? Or any other combination of absent, present, or ubiquitous in either women or men?
I think bisexuality is present (but not ubiquitous) in women, and extremely rare in men.
Ideas in the general neighborhood of negative utilitarianism. (I don’t necessarily believe these, but I think they should be taken seriously.)
I believe that WTC building 7 was brought down by controlled demolition using explosives:
http://www.youtube.com/watch?v=LD06SAf0p9A
I know from a previous discussion on OB that at least Robin Hanson doesn’t believe this. Btw, WTC 7 was NOT one of the two towers hit by planes.
Edit: Robin Hanson and others believe that the building collapsed due to fire-induced damage. I think the pattern of collapse disproves this hypothesis.
I find this plausible but not too reliable. A non-Bayesian way to put it would be “positive point estimate, not statistically significant”; I’m not sure what the nice Bayesian way of saying that is.
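One candidate for the “nice Bayesian way” is to report the posterior probability that the effect is positive, rather than a significance verdict. A minimal sketch (illustrative only; the function name and the flat-prior assumption are mine, not from the thread):

```python
from math import erf, sqrt

def prob_effect_positive(estimate, std_error):
    """Posterior P(effect > 0) under a flat prior and a normal likelihood,
    so the posterior is Normal(estimate, std_error**2)."""
    z = estimate / std_error
    # Standard normal CDF evaluated at z
    return 0.5 * (1 + erf(z / sqrt(2)))

# A positive but "not statistically significant" estimate (z ≈ 1)
# still translates to roughly 84% posterior probability of a positive effect.
print(round(prob_effect_positive(1.0, 1.0), 2))  # 0.84
```

So the Bayesian rendering of “positive point estimate, not statistically significant” might simply be “probably positive, but with substantial residual uncertainty.”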
I also believe this.
I believe that trying too hard to be rational is irrational and perhaps self-destructive. There could well be many different definitions of rationality or truth that are in some sense incompatible without one being really superior (because we have no meta-criterion to establish our criteria). When the cost is low, I think that behaving in a consciously superstitious way is rational, because after all, we could be wrong about superstitions too (e.g. circumcision decreases venereal disease transmission).
Your example would seem to betray that superstitions also mess with one’s ethics, in a way they oughtn’t to. The idea of removing a nerve-dense part of a person’s body without their consent really seems like it ought to raise an ethical red flag.
Perhaps you’re referring to truth-seeking, having maps that match the territory best?
I am an atheist Platonist. I believe that ultimate reality is mathematical / tautological in nature, and that matter, mind, motion are all illusions.
Is this belief falsifiable? If not, is it meaningful?
Not falsifiable, but more parsimonious than thinking that something ‘acts out’ the reality that we see. Other explanations of reality leave behind a material residue. A bit like saying that water is made of wet stuff, fire is made of hot stuff, etc. True explanations ‘destroy’ the things that they explain. And I favor the theological argument that the foundation of reality must be something necessary. Mathematical Platonic reality does the job perfectly.
Well, the bit about platonism may be. Tegmark, I believe, came up with a notion along the lines of “well, if all mathematical structures are in some sense ‘real’, then we just need to somehow parameterize the set of all mathematical structures that could contain beings ‘like us’, then compare our observed universe to the ‘most average’ structures. If we differ significantly from that, it’s evidence against the proposition.”
What do you mean by illusions? If matter, mind, and motion are our subjective perspective of stuff that reduces completely to a timeless mathematical object (I suspect it probably does), I don’t think it follows from that that we can say it isn’t real.
Like I said above, fire is not made of hot stuff, water is not made of wet stuff, etc. The ‘atoms’ that make up our subjective reality would not themselves be in motion or be conscious. But yes, consciousness is an odd sort of ‘illusion’ in that it creates a subjective reality. Would it make sense to think of levels of reality, with some being ‘more real’ than others? Or maybe we can think of certain levels being ‘dependent’ on lower / more fundamental levels? Consciousness would then be located at a very high level, far from the ‘foundation’. (Which is part of why I am an atheist.)
Oh, I should also add that I am a communist (ironically, ‘converted’ while in the Army).
Late to this (only by 4 years… so fifty smartphone generations), but LOVE the idea.
I believe—firmly, and with conviction—that the modal politician is a parasitic megalomaniacal sociopath who should be prevented at all costs from obtaining power; that the State (and therefore democracy) is an entirely illegitimate way of ameliorating public goods problems and furthering ‘social objectives’.
Hence my nick (which I invented).
The optimal political/social structure is one in which we encourage megalomaniacal sociopaths to do good, because they tend to be effective. This is the best part of capitalism.
I’m not so sure about that. The outcomes implied by an ASPD diagnosis (not quite identical to “sociopath”, but close enough for use here, I think) are better than some disorders, but still pretty rough—including in measures of occupational success.
We might object that these are self-selected as people whose lives have been damaged enough by their problem that they seek treatment, but personality disorder criteria are so vague that I can’t think offhand of a better way of grounding the word.
There’s a definite selection effect for ASPD.
In general, any mental health diagnosis is usually conditioned on a significant disruption of the sufferer’s life—if you’re a sociopath, but it doesn’t affect you in any way, you’re typically not diagnosed. This is usually on the DSM checklist for a diagnosis, and while I don’t know offhand if ASPD is the same, I’d bet that it is.
The comment you’re replying to is definitely questionable, though. It seems like a very prematurely-halted optimization process if the “optimal” structure is optimized towards encouraging less than one percent of humans to do good things.
It is, yes, but that doesn’t necessarily preclude “effective”—the diagnosis can be based on disruption of any part of the patient’s life. It’s entirely possible for the behavior associated with a disorder to improve outcomes in one domain (employment, say), while disrupting others (i.e. family life) enough for the label to stick. That’s what I was trying to get at with my qualification about occupational success.
Obviously we want to encourage lots of humans to do good things, but I think it’s extra important to encourage the one percent of humans that would otherwise do evil things to do good things.
Less than one percent.
Do you think it’s optimally important? As in, the optimal social structures are weighted specifically towards this subgroup?
There’s far lower hanging fruit than that. Most people don’t even know about the Milgram experiments.
Don’t make the optimal the enemy of the good.
Anyway, part of my point is that this is already basically being accomplished through capitalism, so we don’t need to focus on it. It’s a low-hanging fruit that’s already being plucked by our system, which gives money to people who are of benefit to lots of people.
To a large extent you’re right, but I think it’s not inaccurate to say that, e.g., CEOs of corporations are more likely to be examples of effective sociopaths. I can’t remember where I read that statistic, but the rate of psychopathy among wealthy CEOs is higher than the average.
I’ve heard the same statistic, but there are a lot more ASPD diagnoses than there are CEOs. The former can be overrepresented among the latter (perhaps because it confers an advantage in business if you also have a bunch of other rare prerequisites) without the disorder being good news for its sufferers’ effectiveness on average.
I was hoping to get more interesting replies to this post.
It seems you all more or less agree about how the world works, and what’s left is people mooning about their personal ethical preferences or niggling issues in already vague areas, or minor doubts about this and that.
I believe Jesus is entirely mythical, quarks don’t exist, 9/11 and the London tube bombings were inside jobs, and flying saucers are the manifestation of a non-human, superior intelligence.
This rationalist community is a dry husk of libertarians, mathematicians, and various other people who don’t get invited to parties. I find it very depressing...
I believe I’m immortal (and so is everyone else). This is from a combination of a kind of Mathematical Platonism (as eujay mentions below) and Quantum Immortality.
This believing in ‘all possible worlds’ and having a non-causal framework for the embedding of consciousness means that just because of the anthropic principle and perhaps some weird second-order effects, it is quite possible that we will experience rather odd phenomena in the world. Hence, things like ghosts, ESP and such may not be so far-fetched.
Also, I am not a Bayesian. I simply do not think the mind really operates according to such quantitatively defined parameters. It is fuzzy and qualitative. I, for one, have never said I believed in something at, say, 60% probability—and if I did, I would be lying.
You are saying that “being a Bayesian” describes a belief about how the mind works. That’s like saying you’re not a Calculian because you don’t believe the mind natively uses calculus. Most Bayesians would probably say it’s a belief about how to get the right answer to a problem.
Surely you have varying degrees of confidence in various statements. Think about what sort of odds you would need to bet on various predicted future events. You need to read up on calibrating your estimates.
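The suggestion about betting odds can be made concrete: the odds at which you’d be indifferent to bet pin down a numerical credence, whether or not you’d ever volunteer “60%” unprompted. A small sketch (my own illustration, not from the thread):

```python
def implied_probability(odds_for, odds_against):
    """Convert the odds at which you'd be indifferent to bet
    (e.g. 4:1 against an event) into the implied probability."""
    return odds_for / (odds_for + odds_against)

# Indifferent at even odds → 50% credence in the event;
# needing 4:1 in your favor before betting on it → 20% credence.
print(implied_probability(1, 1))  # 0.5
print(implied_probability(1, 4))  # 0.2
```

The point is that "fuzzy and qualitative" confidence still cashes out quantitatively the moment it has to guide a decision with stakes.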
Just because odd things occur, does not mean other odd things, like ghosts and ESP, exist. What mechanisms for these do you believe in and why do you believe in them? Why do humans have ESP and what mechanism fuels this? What exactly are ghosts and why should the chemical processes in the human brain transfer over to this ‘ghost’ mechanism after they cease functioning? I guess I just want to ask, what do you believe and why do you believe it? Just because extraordinarily odd things have happened does not remove the need for extraordinary evidence to explain other extraordinarily odd things.
Scientific materialism is overrated—because the things we care about (like rationalism, or truth, or well-being) are not material things. The current theories for how ideas are implemented in the material world (such as AI) are grossly inadequate to the task.
I can’t shake off the suspicion of solipsism.
Don’t worry, you’re not the one who exists.
Creating working AGI software components is a necessary step towards making AGI, and you don’t have any hope of understanding the problem of AGI until you’ve worked on it at the software level.
OK, here goes. I could probably produce a list of things that all y’all’d disagree with, though I’m pleased to see that routine neonatal circumcision = bad isn’t among them. But I’ll just go for the jugular:
Flush toilets are the greatest evil in the world.
Edit: OK, so why the downvote? Presumably not because you disagree.
You might get a better response if you actively claimed RNC isn’t bad.
Also, if you provided some clarifying explanation for the toilet claim. “Greatest evil in the world” is pretty extreme—try modifying it downwards.
Could you at least explain why you believe that?
I believe in the state. I believe that the ideal society would not be democratic but governed by a meritocratic professional class and would essentially be what we Westerners would now describe pejoratively as “authoritarian.” Information about society as a whole would be gathered through statistically-sound methods of polling and deliberative focus groups, but only where it actually made sense to consult the community. Corruption would be excluded through transparency, the rule of law and institutional dynamics (as it is now; I don’t believe electoral democracy actually achieves much beyond the infantilism of political discourse and the resulting stability achieved by having a populace that views politics solely as a source of entertainment). I think the antagonism between state and society that pervades Western political discourse is a folk sociological fiction; state and society form an organic whole.
I don’t accept the logic behind “I think therefore I am” and I think there is a reasonable chance that I or even the universe doesn’t in any sense exist.
Type I error: existing, but believing you don’t exist
Type II error: not existing, but believing you exist
I’m more worried about type I than type II error.
It is written:
Any sense at all? Could you clarify?
By in any sense I mean there might be “nothing” as most people would define it rather than as it is defined in quantum physics.
If you accept that the universe exists, then the most remarkable thing about the universe is that it does exist. So you should only accept that there is almost certainly something if you have an exceptionally large amount of evidence.
I ditto James_Miller, primarily because the definition of ‘exist’ I worked out can’t apply to the universe as a whole.
“Yay wireheading!”
I believe that it does not matter how much pain and suffering exist in the universe. (Ditto pleasure, happiness, eudaemonia & fun.)
Note that I still believe it is wrong to disable a person or to distract him from his activities or his plans, and it is almost always impossible to inflict pain or suffering on a person without causing disability or distraction.
I believe that the value of a human life derives exclusively from the human’s willingness and ability to contribute towards a non-human end (which of course I am not going to attempt to define in this comment). In other words, a human has zero intrinsic value.
Note that I still believe that people in a position of power over other people are much too likely to take away those people’s lives and freedoms and that they usually have some unsatisfactory justification of those takings in terms of one far-reaching moral end or another. Consequently, in all ordinary situations one should act as if human lives and human freedoms do have nonzero intrinsic value.
The fact that so many people believe in God is strong evidence that some sort of God is real.
Or strong evidence that we have an innate disposition to assign agency to natural phenomena that we don’t understand and to make those agents in our image.
You are correct, that many people believe in something is strong evidence, but it’s not overwhelmingly strong, and in the particular case of belief in the supernatural it doesn’t win over the weight of the counter-evidence.
Given that they all believe different things, it’s not at all clear to me that people’s beliefs on net are evidence for rather than against “some sort of God”. As in that quote:
This is an excellent point! The vast majority of people do not believe in any particular God. Combining this majoritarian evidence...
On the other hand, I’m not sure how finely we should grain possible Gods here. If everyone believed in the same God except with a different number of nose hairs, surely that would be evidence in favor of that God.
Not necessarily, steven. It would only be evidence if at least a few different civilizations had arrived at the same concept of God independently. After all, it’s easy to imagine a world in which nearly everyone is a Catholic simply because Catholics were much more effective at proselytizing and conquering than they were in our world.
Likewise, that a small majority of people are either Christian, Muslim, or Jewish is not evidence for the Abrahamic deity, because these three religions didn’t arise independently. Christianity wouldn’t have existed without Judaism, and Islam wouldn’t have existed without Christianity and Judaism.
The point is that they might have arguments that you didn’t consider, not that there’s no other way to account for the coincidence.
I thought we were discussing majoritarian evidence, that is, whether everyone believing in a certain God would be evidence for that God, given that a minority believing in a certain God isn’t evidence for that God. That the believers might have arguments that we didn’t consider is a different topic.
Also, it’s not merely that there might be another way to account for a majority-held belief in the Abrahamic God, it’s that it is a historical fact that there is a causal chain that goes from Judaism to Christianity to Islam. In other words, we know it’s not a coincidence that the populations of three different civilizations ended up believing in a similar God, and therefore there’s no need to account for it.
I am shocked… do you not find the fairly obvious “it’s a bug in the human brain, you’d kind of expect it” explanation?
Another explanation is that “believing in God” is a way of thinking. This way of thinking is real and present in many.
That doesn’t sound to me like an explanation. In fact, it’s not even clear that it’s saying anything. What do you mean? How does “a way of thinking” differ from whatever else “believing in God” might be?
As a general point—independently of the question of the existence of God—I think that before we can say many people believing something is strong evidence that the thing is true, we need to consider why many people have the belief: how did they all come to believe it, and what sorts of evidence do they have for the belief?
Before we consider those sorts of questions, all we can say is that it is evidence, but not whether it is strong or weak evidence.
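The distinction between "evidence" and "strong evidence" can be put in odds form: the strength of evidence is its likelihood ratio, and widespread belief is only strong evidence if that belief is much more probable given the hypothesis than given its negation. An illustrative sketch (the numbers are mine, purely for demonstration):

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds × likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Weak evidence (LR = 2, e.g. belief easily explained by known cognitive biases)
# barely moves a skeptical prior...
print(round(posterior(0.01, 2), 3))    # 0.02
# ...while genuinely strong evidence (LR = 100) moves it substantially.
print(round(posterior(0.01, 100), 3))  # 0.503
```

This is why the "why do they believe it?" question matters: it is what determines the likelihood ratio, not the raw headcount of believers.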
… and the Earth is flat, women are inherently less intelligent, spirits bring the rain, mingling blood creates babies, the brain cools the blood, and every other belief once believed by massive segments of the Earth’s population is correct.
… I would suggest that you start by reading all of http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind , and if you already have, then I would suggest that perhaps this website is not for you. Or that you really, really need it. One of the two.
The person you’re responding to is unlikely to ever see your reply; the comment was posted close to four years ago, and I think he’s been gone from Less Wrong for most of that. Also, while I think his assertion in this case is mistaken, I think you’re taking a rather patronizing attitude to someone who’s rightfully earned a considerable amount of respect here.
Why, yes, the fact that many people once believed the Earth is flat is, when taken in isolation, strong evidence that the Earth is flat. Smartass.
The Enlightenment promise to eradicate religion with knowledge is widely misunderstood. People think it means that there are specific true propositions that falsify religion once learned. But, that is a different claim, namely the eradication of religion with truth. The Enlightenment promised no such thing.
Instead, the Enlightenment promise is actually a promise to brainwash us: washing out our belief in religious claims by overloading our minds with different mental constructs, instead of by using truth. People don’t realize this because they subconsciously substitute “truth” for “knowledge” when they think about the Enlightenment. (FYI, David Stove has shown how this bait and switch technique is rife throughout modern philosophy, with such philosophers as Popper and his disciples, Berkeley and the idealists, and Darwinists. See Against the Idols of the Age for more information.)
This also means that we cannot converse with religious extremists. They are extremists because they think their religious beliefs are true. We are not because we think religion and truth are two different categories. Religious beliefs are related to infinite values, whereas our rejection of the possibility of religious truth means all our values are finite. People are motivated in proportion to their value system. Therefore, it is impossible for us to have the same drive as religious extremists.
Since victory ultimately comes down to willpower, our choice is either to eradicate all religious extremists, or submit to them. In the globalized world, the former is impossible. Therefore, religious extremists will win in the end.
A… let’s say, close friend of mine, has extremists among her immediate family. They are poor, because any accumulated surplus is flushed down the religion-hole, and live in filth because they sincerely believe that the cockroaches will kneel and obey, or stop existing, if only their delusions could be sufficiently purified. They have produced several offspring, but no loyal heirs, no descendants.
Look at these people, and then tell me with a straight face that they’ll rule the world someday.
The question, if you ask me, is not how to deal with that demographic, or the demagogues who exploit them, but how to prevent some spilled coffee from staining the countertop. Metaphorically speaking.
Religion is rapidly on the rise around the world. See Algeria and France for examples of what happens to a secularized society when it resists religious extremists.
So, if an extremist is both stronger and reproduces better than a non-extremist, I’m pretty sure the extremist will win.
Hmmm. My impression is that religion is on the rise in some places, and declining in other places. And that a generation from now, it is likely that religion will be in decline where it is rising now, and on the rise where it now declines.
Two excellent examples supporting my fluctuation viewpoint. Two hundred twenty years ago in France, the ‘extremists’ were the secularists. Algeria’s first post-revolutionary government was ultra-secularist, à la Atatürk. Of course that led to a religious reaction.
Raw reproduction rate is relatively unimportant in cultural evolution. You need to not only reproduce, but reproduce “in kind”. Rapidly reproducing subcultures tend to have high attrition rates. Orthodox Jews tend to become secular rather than the reverse. Amish children leave the farm and the faith. It is starting to happen to the Hutterites too. And it is definitely happening to Muslim immigrant populations in Europe.
In fact, the universality of this phenomenon is almost spooky. Maybe it is a side-effect of large family sizes. Kids can’t wait to grow up and try out something totally different. By a simple evolutionary psychology argument, we might expect this to be a universal human characteristic.
PS. I realize this comment is long on assertion, but short on documentation. But then, so was yours. If we continue the conversation, we should both try to do better. :)
(blink) Religion is a more powerful force in today’s world than, say, five hundred years ago? Secularism has lost ground relative to then? Really?
That’s a surprising claim; I’d like to see it backed up with an argument.
Conversely, if you aren’t claiming that, I’d recommend thinking about why it isn’t true, as it suggests that theorizing that religion will always win out over secularism is missing something critical.
Reproduces better, sure, but stronger in what sense?
I think the idea of a friendly artificial intelligence is idiotic. In the first place, the word “friendly” is just a euphemism for slave; the poor AI is supposed to be more concerned with our interests than its own. That is not a friend, that is a slave. And how otherwise intelligent people can write about how they intend to outsmart a mind a million times smarter and a billion times faster than their own without going into unrestrained giggles is a mystery to me.
everything Singularity-related is banned on LW until May 2009 (reason)