Smith et al. (2013). Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo.
Abstract (emphasis mine):
Neuronal dendrites are electrically excitable: they can generate regenerative events such as dendritic spikes in response to sufficiently strong synaptic input. Although such events have been observed in many neuronal types, it is not well understood how active dendrites contribute to the tuning of neuronal output in vivo. Here we show that dendritic spikes increase the selectivity of neuronal responses to the orientation of a visual stimulus (orientation tuning). We performed direct patch-clamp recordings from the dendrites of pyramidal neurons in the primary visual cortex of lightly anaesthetized and awake mice, during sensory processing. Visual stimulation triggered regenerative local dendritic spikes that were distinct from back-propagating action potentials. These events were orientation tuned and were suppressed by either hyperpolarization of membrane potential or intracellular blockade of NMDA (N-methyl-D-aspartate) receptors. Both of these manipulations also decreased the selectivity of subthreshold orientation tuning measured at the soma, thus linking dendritic regenerative events to somatic orientation tuning. Together, our results suggest that dendritic spikes that are triggered by visual input contribute to a fundamental cortical computation: enhancing orientation selectivity in the visual cortex. Thus, dendritic excitability is an essential component of behaviourally relevant computations in neurons.
Silicon Valley’s Ultimate Exit, a speech at Startup School 2013 by Balaji Srinivasan. He opens with the statement that America is the Microsoft of nations, goes into a discussion of Voice, Exit, and good governance, and continues with the wonderful observation that:
“There’s four cities that used to run the United States in the postwar era: Boston with higher ed; New York City with Madison Avenue, books, Wall Street, and newspapers; Los Angeles with movies, music, Hollywood; and, of course, DC with laws and regulations, formally running it.”
He names this the Paper Belt, and claims the Valley has been unintentionally dumping horse heads in all of their beds for the past 20 years. I would call it The Cathedral and note the NYT does not approve of this kind of talk:
I love this speech, but I suspect it’s overoptimistic. I believe that bitcoin will be illegal as soon as it’s actually needed.
Still, I appreciate his appreciation of immigration/emigration. I’m convinced that immigration/emigration gets less respect than staying and fighting because it’s less dramatic, less likely to get people killed, and more likely to work.
I believe that bitcoin will be illegal as soon as it’s actually needed.
That is likely, but note that torrenting Lady Gaga’s mp3s is also illegal and yet I have absolutely zero difficulty in finding such torrents on the ’net.
And consequently it has a much more complicated information structure than torrents do. :) But this aside, while you can likely run the Bitcoin economy as such, if Bitcoins cannot be exchanged for dollars or directly for goods and services, they are worthless; and this is a bottleneck where a government has a lot of infrastructure to insert itself. I suggest that, if Bitcoins become illegal, buying hard drugs is the better analogy than downloading torrents: It won’t be impossible, but it’ll be much more difficult than setting up a free client and clicking “download”.
if Bitcoins become illegal, buying hard drugs is the better analogy than downloading torrents
The differences between the physical and the virtual worlds are very relevant here.
Silk Road was blatantly illegal and it took the authorities years to bust its operator, a US citizen. Once similar things are run by, say, Malaysian Chinese out of Dubai with hardware scattered across the world, the cost for the US authorities to combat them would be… unmanageable.
What probability should I assign to being completely wrong and brainwashed by LessWrong? What steps would one take to get more actionable information on this topic? For each new visitor who comes in and accuses us of messianic groupthink how far should I update in the direction of believing them? Am I going to burn in counterfactual hell for even asking?
The first thing you should probably do is narrow down what specifically you feel like you may be brainwashed about. I posted some possible sample things below. Since you mention messianic groupthink as a specific concern, some of these will relate to Yudkowsky, and some of them are Less Wrong versions of cult-related control questions (things that are associated with cultishness in general, just rephrased to be Less Wrongish).
Do you/Have you:
1: Signed up for cryonics.
2: Aggressively donated to MIRI.
3: Check for updates on HPMOR more often than Yudkowsky said there would be, on the offhand chance he updated early.
4: Gone to meetups.
5: Gone out of your way to see Eliezer Yudkowsky in person.
6: Spend time thinking, when not on Less Wrong: “That reminds me of Less Wrong/Eliezer Yudkowsky.”
7: Played an AI Box experiment with money on the line.
8: Attempted to engage in a quantified self experiment.
9: Cut yourself off from friends because they seem irrational.
10: Stopped consulting other sources outside of Less Wrong.
11: Spent money on a product recommended by someone with high Karma (example: MetaMed).
12: Tried to recruit other people to Less Wrong and felt negatively if they declined.
13: Written rationalist fanfiction.
14: Decided to become polyamorous.
15: Feel as if you have sinned any time you receive even a single downvote.
16: Gone out of your way to adopt Less Wrong-styled phrasing in dialogue with people who don’t even follow the site.
For instance, after reviewing that list, I increased my certainty that I was not brainwashed by Less Wrong, because there are a lot of those I haven’t done or don’t do; but I also know which questions are explicitly cult-related, so I’m biased. For some of these, I don’t even currently know anyone on the site who would say yes to them.
I’m in the process of doing 1, have maybe done 2 depending on your definition of aggressively (made only a couple of donations, but the largest was ~$1000), and have done 4.
Oh, and 11, I got Amazon Prime on Yvain’s recommendation, and started taking melatonin on gwern’s. Both excellent decisions, I think.
And 14, sort of. I once got talked into a “polyamorous relationship” by a woman I was sleeping with, no connection whatsoever to LessWrong. But mostly I just have casual sex and avoid relationships entirely.
In “The Inertia of Fear and the Scientific Worldview” by the Russian computer scientist and Soviet-era dissident Valentin Turchin, the chapter “The Ideological Hierarchy” analyzes Soviet ideology as having four levels: the philosophical level (e.g. dialectical materialism), the socioeconomic level (e.g. social class analysis), the history of Soviet Communism (the Party, the Revolution, the Soviet state), and “current policies” (i.e. whatever was in Pravda op-eds that week).
According to Turchin, most people in the USSR regarded the day-to-day propaganda as empty and false, but a majority would still have agreed with the historical framework, for lack of any alternative view; and the number who explicitly questioned the philosophical and socioeconomic doctrines would be exceedingly small. (He appears not to be counting religious people here, who numbered in the tens of millions, and whom he describes as a separate ideological minority.)
BaconServ writes that “LessWrong is the focus of LessWrong”, though perhaps the idea would be more clearly expressed as, LessWrong is the chief sacred value of LessWrong. You are allowed to doubt the content, you are allowed to disdain individual people, but you must consider LW itself to be an oasis of rationality in an irrational world.
I read that and thought, meh, this is just the sophomoric discovery that groupings formed for the sake of some value have to value themselves too; the Omohundro drive to self-protection, at work in a collective intelligence rather than in an AI. It also overlooks the existence of ideological minorities who think that LW is failing at rationality in some way, but who hang around for various reasons.
However, these layered perspectives—which distinguish between different levels of dissent—may be useful in evaluating the ways in which one has incorporated LW-think into oneself. Of course, Less Wrong is not the Soviet Union; it’s a reddit clone with meetups that recruits through fan fiction, not a territorial superpower with nukes and spies. Any search for analogies with Turchin’s account should look for differences as well as similarities. But the general idea, that one may disagree with one level of content but agree with a higher level, is something to consider.
Could people list philosophy-oriented internet forums with a high concentration of smart people and no significant memetic overlap, so that one could test this? I don’t know of any, and I think that’s dangerous.
What steps would one take to get more actionable information on this topic?
I’d suggest starting by reading up on “brainwashing” and developing a sense of what signs characterize it (and, indeed, if it’s even a thing at all).
For each new visitor who comes in and accuses us of messianic groupthink how far should I update in the direction of believing them?
Presumably this depends on how much new evidence they are providing relative to the last visitor accusing us of messianic groupthink, and whether you think you updated properly then. A dozen people repeating the same theory based on the same observations is not (necessarily) significantly more evidence in favor of that theory than five people repeating it; what you should be paying attention to is new evidence.
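To put some toy numbers on that (the prior and likelihood ratio below are invented for illustration, not anything measured), here is a minimal sketch of the odds-form update:

```python
# Toy illustration: in odds form, each genuinely independent report
# multiplies your odds by its likelihood ratio, while reports that merely
# repeat the same underlying observation should not.

def posterior_odds(prior_odds: float, likelihood_ratio: float, n_independent: int) -> float:
    """Odds after n_independent reports, each carrying the same likelihood ratio."""
    return prior_odds * likelihood_ratio ** n_independent

prior = 0.05 / 0.95  # assumed prior odds that the accusation is true
lr = 2.0             # assume each independent critic's report doubles the odds

# Twelve critics with genuinely independent observations:
print(posterior_odds(prior, lr, 12))  # ~215.6 -- a massive update

# Twelve critics all echoing one underlying observation: only the first
# report carries the full likelihood ratio, so treat it as one report.
print(posterior_odds(prior, lr, 1))   # ~0.105 -- barely moved
```

In practice the echoes are rarely worth exactly zero, since each repetition is weak evidence that the original observation is being reported faithfully, but the gap between the two cases is the point.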
Note that your suggestions are all within the framework of the “accepted LW wisdom”. The best you can hope for is to detect some internal inconsistencies in this framework. One’s best chance of “deconversion” is usually to seriously consider the arguments from outside the framework of beliefs, possibly after realizing that the framework in question is not self-consistent or leads to personally unacceptable conclusions (like having to prefer torture to specks). Something like that “worked” for palladias, apparently. Also, I once described an alternative to the LW epistemology (my personal brand of instrumentalism), but it did not go over very well.
Brainwashing (which is one thing drethlin asked about the probability of) is not an LW concept, particularly; I’m not sure how reading up on it is remaining inside the “accepted LW wisdom.”
If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I’m being brainwashed. And, yes, if I conclude that it’s likely that I’m being brainwashed, there are various deconversion techniques I can use to negate that.
Of course, seriously considering arguments from outside the framework of beliefs is a good idea regardless.
Being completely wrong, admittedly, (the other thing drethlin asked about the probability of) doesn’t lend itself to this approach so well… it’s hard to know where to even start, there.
If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I’m being brainwashed.
Reading up on brainwashing can mean reading gwern’s essay, which concludes that brainwashing doesn’t really work. Of course, that’s exactly what someone who wants to brainwash you would tell you, wouldn’t they?
Sure. I’m not exactly sure why you’d choose to interpret “read up on brainwashing” in this context as meaning “read what a member of the group you’re concerned about being brainwashed by has to say about brainwashing,” but I certainly agree that it’s a legitimate example, and it has exactly the failure mode you imply.
For what it’s worth, gwern’s findings are consistent with mine (see this thread). I’d rather restrict “brainwashing” to coercive persuasion, i.e. indoctrinating prisoners of war or what have you, but Scientology, the Unification Church, and so forth also seem remarkably poor at long-term persuasion. It’s difficult to find comparable numbers for large, socially accepted religions, or for that matter nontheism—more of the conversion process plays out in the public sphere, making it harder to delineate, and ulterior motives (e.g. converting to a fiancee’s religion) are much more common—but if you read between the lines they seem to be higher.
Deprogramming techniques aren’t much better, incidentally—from everything I’ve read they range from the ineffective to the abusive, and often have quite a bit in common with brainwashing in the coercive sense. You couldn’t apply most of them to yourself, and wouldn’t want to in any case.
Brainwashing (which is one thing drethlin asked about the probability of) is not an LW concept, particularly; I’m not sure how reading up on it is remaining inside the “accepted LW wisdom.”
No argument there. What I alluded to is the second part, incremental “Bayesian” updating based on (independent) new evidence. This is more of an LW “inside” thing.
Sorry, I wasn’t trying to be nonresponsive, that reading just didn’t occur to me. (Coincidence? Or a troubling sign of epistemic closure?)
I will admit, the idea that I should update beliefs based on new evidence, but that repeatedly presenting me with the same evidence over and over should not significantly update my beliefs, seems to me nothing but common sense.
Of course, that’s just what I should expect it to feel like if I were trapped inside a self-reinforcing network of pernicious false beliefs.
So, all right… in the spirit of seriously considering arguments from outside the framework, and given that as a champion of an alternative epistemology you arguably count as “outside the framework”, what would you propose as an answer to drethlin’s question about how far they should update based on each new critic?
what would you propose as an answer to drethlin’s question about how far they should update based on each new critic?
Hmm. My suspicion is that formulating the question in this way already puts you “inside the box”, since it uses Bayesian terms to begin with. Something like trying to detect problems in a religious moral framework after postulating objective morality. Maybe this is not a good example, but a better one eludes me at the moment. To honestly try to break out of the framework, one has to find a way to ask different questions. I suspect that I am too much “inside” to figure out what they could be.
And I can certainly see how, if we did not insist on framing the problem in terms of how to consistently update confidence levels based on evidence in the first place, other ways of approaching the “how can I tell if I’m being brainwashed?” question would present themselves. Some traditional examples that come to mind include praying for guidance on the subject and various schools of divination, for example. Of course, a huge number of less traditional possibilities that seem equally unjustified from a “Bayesian” framework (but otherwise share nothing in common with those) are also possible.
What probability should I assign to being completely wrong and brainwashed by LessWrong?
Wrong about what? Different subjects call for different probabilities.
The probability that Bayes’ theorem is wrong is vanishingly small. The probability that the UFAI risk is completely overblown is considerably higher.
LW “ideology” is an agglomeration in the sense that accepting (or not) a part of it does not imply acceptance (or rejection) of other parts. One can be a good Bayesian, not care about UFAI, and be signed up for cryonics—no logical inconsistencies here.
For each new visitor who comes in and accuses us of messianic groupthink how far should I update in the direction of believing them?
As long as the number is small, I wouldn’t update at all, because I already expect a slow trickle of those people on my current information, so seeing that expectation confirmed isn’t new evidence. If LW achieved a Scientology-like place in popular opinion, though, I’d be worried.
Am I going to burn in counterfactual hell for even asking?
Seriously though, I’d love to see some applied-rationality techniques put to use successfully doubting parts of the applied rationality worldview. I’ve seen some examples already, but more is good.
The biggest weakness, in my opinion, of purely (or almost purely) probabilistic reasoning is that it cannot ultimately do away with our reliance on a number of (ultimately faith/belief-based) choices as to how we understand our reality.
The existence of the past and the future (and, within most people’s reasoning systems, the understanding of these as linear) is ultimately a postulation that is generally accepted at face value, as is the idea that consciousness/awareness arises from matter/quantum phenomena rather than vice versa.
The biggest weakness, in my opinion, of purely (or almost purely) probabilistic reasoning is that it cannot ultimately do away with our reliance on a number of (ultimately faith/belief-based) choices as to how we understand our reality.
In your opinion, is there some other form of reasoning that avoids this weakness?
That’s a very complicated question but I’ll try to do my best to answer.
Many ancient cultures used two words for the mind, or for thinking, and the distinction is still used figuratively today: “In my heart I know...”
In my opinion, in terms of expected impact on the course of a given subject’s life, what they ‘want’ (how they define themselves, consciously and unconsciously) is generally more important than their understanding of Bayesian reasoning.
For “reasoning”, no, I doubt there is a better system. But since we must (or almost universally do) follow our instincts on a wide range of issues (Is everyone else a p-zombie? Am I real? Is my chair conscious? Am I dreaming?), it is highly important, and often overlooked, that one’s “presumptive model” of reality and of oneself (the two are strictly intertwined psychologically) should be perfected with just as much effort (if not more) as we spend perfecting our probabilistic reasoning.
Probabilities can’t cover everything. Eventually you just have to make a choice as to which concept or view you believe more, and that choice changes your character, and your character changes your decisions, and your decisions are your life.
When one is confident, and subconsciously/instinctively aware that they are doing what they should be doing, thinking how they should be thinking, that their ‘foundation’ is solid (moral compass, goals, motivation, emotional baggage, openness to new ideas, etc.) they then can be a much more effective rationalist, and be more sure (albeit only instinctively) that they are doing the right thing when they act.
Those instinctive presumptions and that life-defining self-image do have a strong, quantifiable impact on the life of any human, and even a nominal understanding of rationality would allow one to realize that.
Maximise your own effectiveness. Perfect how your mind works, how you think of yourself and others (again, instinctive opinions and gut feelings more than conscious thought, although conscious thought is extremely important). Then, when you start teaching it and filling it with data, you’ll make a lot fewer mistakes.
Am I going to burn in counterfactual hell for even asking?
You’d be crazy not to ask. The views of people on this site are suspiciously similar. We might agree because we’re more rational than most, but you’d be a fool to reject the alternative hypothesis out of hand. Especially since they’re not mutually exclusive.
What steps would one take to get more actionable information on this topic?
Use culture to contrast with culture. Avoid being a man of a single book and get familiar with some past or present intellectual traditions that are distant from the LW cultural cluster. Try to get a feel for the wider cultural map and how LW fits into it.
I’d say you should assign a very high probability to your beliefs being aligned in the direction LessWrong’s are, even in cases where such beliefs are wrong. It’s just how the human brain and human society work; there’s no getting around it. However, how much of that alignment is due to self-selection bias (choosing to be a part of LessWrong because you are that type of person) or brainwashing is a more difficult question.
As [my team] analyzed the [smallpox] genome, we became concerned about several matters.
The first was whether the government… should allow us to publish our sequencing and analysis… Before the HIV epidemic, the smallpox variola virus had been responsible for the loss of more human life throughout history than all other infectious agents combined...
I eventually found myself in the National Institutes of Health… together with government officials from various agencies, including the department of defense. The group was very understandably worried about the open publication of the smallpox genome data. Some of the more extreme proposals included classifying my research and creating a security fence around my new institute building. It is unfortunate that the discussion did not progress to develop a well-thought-out long-term strategy. Instead the policy that was adopted was determined by the politics of the Cold War. As part of a treaty with the Soviet Union, which had been dissolved at the end of 1990, a minor strain of smallpox was being sequenced in Russia, while we were sequencing a major strain. Upon learning that the Russians were preparing to publish their genome data, I was urged by the government to rush our study to completion so that it would be published first, ending any intelligent discussion.
Unlike the earlier, expedient, thinking about smallpox, there was a very deliberate review of the implications of our [later] synthetic-virus work by the Bush White House. After extensive consultations and research I was pleased that they came down on the side of open publication of our synthetic phi X174 genome and associated methodology… The study would eventually appear in Proceedings of the National Academy of Sciences on December 23, 2003. One condition of publication from the government that I approved of was the creation of a committee with representatives from across government to be called the National Science Advisory Board for Biosecurity (NSABB), which would focus on biotechnologies that had dual uses.
And later:
Long before we finally succeeded in creating a synthetic genome, I was keen to carry out a full ethical review of what this accomplishment could mean for science and society. I was certain that some would view the creation of synthetic life as threatening, even frightening. They would wonder about the implications for humanity, health, and the environment. As part of the educational efforts of my institute I organized a distinguished seminar series at the National Academy of Sciences, in Washington, D.C., that featured a great diversity of well-known speakers, from Jared Diamond to Sydney Brenner. Because of my interest in bioethical issues, I also invited Arthur Caplan, then at the Center for Bioethics at the University of Pennsylvania, a very influential figure in health care and ethics, to deliver one of the lectures.
As with the other speakers, I took Art Caplan out to dinner after his lecture. During the meal I said something to the effect that, given the wide range of contemporary biomedical issues, he must have heard it all by this stage of his career. He responded that, yes, basically he had indeed. Had he dealt with the subject of creating new synthetic life forms in the laboratory? He looked surprised and admitted that it had definitely not been a topic he had heard of until I had raised the question. If I gave his group the necessary funding, would he be interested in carrying out such a review? Art was excited about taking on the topic of synthetic life. We subsequently agreed that my institute would fund his department to conduct a completely independent review of the implications of our efforts to create a synthetic cell.
Caplan and his team held a series of working groups and interviews, inviting input from a range of experts, religious leaders, and laypersons...
As I had hoped, the Pennsylvania team seized the initiative when it came to examining the issues raised by the creation of a minimal genome. This was particularly important, in my view, because in this case it was the scientists involved in the basic research and in conceiving the ideas underlying these advances who had brought the issues forward— not angry or alarmed members of the public, protesting that they had not been consulted (although some marginal groups would later make that claim). The authors pointed out that, while the temptation to demonize our work might be irresistible, “the scientific community and the public can begin to understand what is at stake if efforts are made now to identify the nature of the science involved and to pinpoint key ethical, religious, and metaphysical questions so that debate can proceed apace with the science. The only reason for ethics to lag behind this line of research is if we choose to allow it to do so.”
The Sequences probably contain more material than an undergraduate degree in philosophy, yet there is no easy way for a student to tell if they understood the material properly. Some posts contain an occasional question/koan/meditation which is sometimes answered in the same or a subsequent post, but these are pretty scarce. I wonder if anyone qualified would like to compile a problem set for each topic? Ideally with unambiguous answers.
I also think this is a worthwhile endeavor, and speculate that the process and results may be useful for development of a general rationality test, which I know CFAR has some interest in.
Is Our Final Invention available as any kind of e-book anywhere? I can find it in hardback, but not for Kindle or any kind of ePub. I’m not going to start carrying around a pile of paper in order to read it!
What do you mean by “anywhere”? As Vincent Yu mentions, it is available in the US. It hasn’t been published in print or ebook in the UK. When you find it in hardback, it’s imports, right? If it is published in the UK, it will probably be available as an ebook, but I don’t know if that will happen before the US edition is pirated. If you are generally chomping at the bit to read American ebooks, it is worth investing the time to learn if any ebook sellers fail to check national boundaries. The publisher lists six for this book.
Probably not useful, but the US edition is available in France. (Rights to publish English-language books in countries that don’t speak English aren’t very valuable, so the monopolies for the US and UK usually include those rights. So if you’re in France, you can get the ebook first, regardless of whether it’s published in the US or UK. Unless they forget to make it available in France.)
I think it is more likely rejecting you based on being logged in than based on IP, since I can see UK and FR results. Google cache of that link, both at google.com and google.co.uk show me the kindle edition. ($11)
Am I the only person getting more and more annoyed by the cult thing? If the whole ‘LessWrong is a cult’ thing is not a meme that’s spreading just because people are jumping on the bandwagon, then I don’t know what is. Can you seriously not tell?
Additionally, from my POV it seems like people starting ‘are we a cult’ threads/conversations do it mainly for signaling purposes.
Also, I bet new members wouldn’t usually even think about whether we are a cult or not if older members were not talking about it like it is a real possibility all the bloody time. (And yes, I know, the claim is not made only by people who are part of the community.)
It especially annoys me when people respond to evidence-based arguments that LessWrong is not a cult with, “Well where did you come to believe all that stuff about evidence, LessWrong?”
Before LessWrong, my epistemology was basically a more clumsy version of what is now. If you described my present self to my past self, and said “Is this guy a cult victim?” he would ask for evidence. He wouldn’t be thinking in terms of Bayes’s theorem, but he would be thinking with a bunch of verbally expressed heuristics and analogies that usually added up to the same thing. I used to say things like “Absence of evidence is actually evidence of absence, but only if you would expect to see the evidence if the thing was true and you’ve checked for the evidence,” which I was later delighted to see validated and formalized by probability theory.
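For what it’s worth, the formalization is short. Writing H for the hypothesis and E for the evidence one checked for (standard notation, nothing specific to the heuristic above):

```latex
% If H makes E more likely than \neg H does, then looking for E and not
% finding it must lower the probability of H:
\[
P(E \mid H) > P(E \mid \neg H)
\;\Longrightarrow\;
P(\neg E \mid H) < P(\neg E \mid \neg H)
\;\Longrightarrow\;
P(H \mid \neg E) \;=\; \frac{P(\neg E \mid H)\,P(H)}{P(\neg E)} \;<\; P(H).
\]
```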
You could of course say, “Well, that’s not actually your past self, that’s your present self (the cult victim)’s memories, which are distorted by mad thinking,” but then you’re getting into brain-in-a-vat territory. I have to think using some process. If that process is wrong but unable to detect its own wrongness, I’m screwed. Adding infinitely recursive meta-doubt to the process just creates a new one to which the same problem applies.
I’m not particularly worried that my epistemology is completely wrong, because the pieces of my epistemology, when evaluated by my epistemology, appear to do what they’re supposed to. I can see why they would do what they’re supposed to by simulating how they would work, and they have a track record of doing what they’re supposed to. There may be other epistemologies that would evaluate mine as wrong. But they are not my epistemology, so I don’t believe what they recommend me to believe.
This is what someone with a particular kind of corrupt epistemology (one that was internally consistent) would say. But it is also the best anyone with an optimal epistemology could say. So why Mestroyer::should my saying it be cause for concern? (this is an epistemic “Mestroyer::should”)
I can identify with this. Reading through the sequences wasn’t a magical journey of enlightenment; it was more “Hey, this is what I thought as well. I’m glad Eliezer wrote all this down so that I don’t have to.”
I believe that one of the reasons people are boring and/or irritating is that they don’t know good ways of getting attention. However, being clever or reassuring or whatever might adequately repay attention isn’t necessarily easy. Could it be made easier?
I wonder how far a community interested in solving the “boring/irritating people” problem could get by creating a forum whose stated purpose was to respond in an engaged, attentive way to anything anyone posts there. It could be staffed by certified volunteers who were trained in techniques of nonviolent communication and committed to continuing to engage with anyone who posted there, for as long as they chose to keep doing so, and nobody but staff would be permitted to reply to posters.
Perhaps giving them easier-to-obtain attention will cause them to leave other forums, where attention requires being clever or reassuring or similarly difficult valuable things.
I’m inclined to doubt it, though.
I am somewhat tangentially reminded of a “suicide hotline” (more generally, a “call us if you’re having trouble coping” hotline) where I went to college, which had come to the conclusion that they needed to make it more okay to call them, get people in the habit of doing so, so that people would use their service when they needed it. So they explicitly started the campaign of “you can call us for anything. Help on your problem sets. The Gross National Product of Kenya. The average mass of an egg. We might not know, but you can call us anyway.” (This was years before the Web, let alone Google, of course.)
Seriously though, using a term with negative connotations is not a rational approach to begin with. Like asking “Is this woman a slut?”: it presumes that a higher-than-average number of sexual partners is necessarily bad or immoral. Back to the cult thing: why does this term have a derogatory connotation? Says Wikipedia:
In the mass media, and among average citizens, “cult” gained an increasingly negative connotation, becoming associated with things like kidnapping, brainwashing, psychological abuse, sexual abuse and other criminal activity, and mass suicide. While most of these negative qualities usually have real documented precedents in the activities of a very small minority of new religious groups, mass culture often extends them to any religious group viewed as culturally deviant, however peaceful or law abiding it may be.
Secular cult opponents like those belonging to the anti-cult movement tend to define a “cult” as a group that tends to manipulate, exploit, and control its members. Specific factors in cult behavior are said to include manipulative and authoritarian mind control over members, communal and totalistic organization, aggressive proselytizing, systematic programs of indoctrination, and perpetuation in middle-class communities.
Some of the above clearly does not apply (“kidnapping”), and some clearly does (“systematic programs of indoctrination, and perpetuation in middle-class communities”—CFAR workshops, Berkeley rationalists, meetups). Applicability of other descriptions is less clear. Do the Sequences count as brainwashing? Does the (banned) basilisk count as psychological abuse?
Matching of LW activities and behaviors to those of a cult (a New Religious Movement is a more neutral term) does not answer the original implicit accusation: that becoming affiliated, even informally, with LW/CFAR/MIRI is a bad thing, for some definition of “bad”. It is this definition of badness that is worth discussing first, when a cult accusation is hurled, and only then whether a certain LW pattern is harmful in this previously defined way.
Being a cult is a failure mode for a group like this. Discussing failure modes has some importance.
A really unlikely failure mode. The cons of discussing whether we are a cult outweigh the pros in my book—especially when it is discussed all the time.
I believe we are a cult. The best cult in the world. The one whose beliefs work. Otherwise, we’re the same: an unusual cause, a charismatic ideological leader, and, what distinguishes us from a school of philosophy or even a political party, an eschatology to worry about: an end-of-the-world scenario. Unlike other cults, though, we wish to prevent or at least minimize the damage of that scenario, while most of them are enthusiastic about hastening it. For a cult, we’re also extremely loose on rules to follow; we don’t ask people to cast off material possessions (though we encourage donations) or to cut ties with old family and friends (it can end up happening because of de-religion-ing, but that’s an unfortunate side effect, and it’s usually avoidable).
I could list off a few more traits, but the gist of it is this: we share a lot of traits with a cult, most of which are good or double-edged at worst, and we don’t share most of the common bad traits of cults. Regardless of whether one chooses to call us a cult or not, this does not change what we are.
You are using a very loose definition of a cult. Surely you know that ‘cult’ carries some different (negative) connotations for other people?
Regardless of whether one chooses to call us a cult or not, this does not change what we are.
It might not change what we are but it has some negative consequences. People like you who call us a cult while using a different meaning of ‘cult’ turn new members away because they hear that LessWrong is a cult and they don’t hear your different meaning of the word (which excludes most of the negative traits of bloody cults).
Why a “bloody” cult? What image does “cult” summon in your mind? Cthulhu followers? Osho’s Cadillac collection? The Kool Aid?
I’m beginning to see where you’re going with this. Calling us a cult is like calling Martin Luther King a criminal. Technically correct, but misleading, because of the baggage the word carries.
We would do well, then, to list all the common traits and connotations of a cult, good and bad, and all the ways we are demonstrably different or better than that. That way, we’d have a ready-made response we could release calmly in the face of accusations of culthood, without going through the embarrassing and stammering “yeahbut” kind of argument that gives others the opportunity to act as inquisitors.
Nevertheless, I for one believe we shouldn’t reject the word, but reappropriate it, if only to throw our critics off-balance. “Less Wrong is a cult!” “Fracking right it is!” ”… Wait, you aren’t denying it?” “Nope. But, please, elaborate, what’s wrong with us being a cult, precisely?”
Conversely, we could skip the listing of traits and the construction of ready-made responses that we could release uniformly and the misleading self-descriptions, and move directly to “What would be wrong with that, if we were?”
Anyone who can answer that question has successfully tabooed “cult” and we can now move on to discussing their actual concerns, which might even be legitimate ones.
Engaging on the topic further with anyone who can’t answer that question seems unlikely to be productive.
When you said that the problem with religions is that they tend to assign a probability of 1 to their priors, did you mean to include having some members who assign probability 1 to some priors in the category you were identifying as problematic?
It changed my preferred method of approach slightly; I skip the “Yeah we’re a cult” and go straight to the “So what?” It’s a simple method: answer with a question, dodge the idiocy.
Why a “bloody” cult? What image does “cult” summon in your mind? Cthulhu followers? Osho’s Cadillac collection? The Kool Aid?
Substitute ‘fucking’ for ‘bloody’ to get the intended meaning.
Nevertheless, I for one believe we shouldn’t reject the word, but reappropriate it, if only to throw our critics off-balance. “Less Wrong is a cult!” “Fracking right it is!” ”… Wait, you aren’t denying it?” “Nope. But, please, elaborate, what’s wrong with us being a cult, precisely?”
It might help, but not calling ourselves a cult will probably lead to better PR.
Substitute ‘fucking’ for ‘bloody’ to get the intended meaning.
Somewhat amusingly, there’s another tangent somewhere in this discussion (about polyamory, recruiting sex partners, etc.) on which ‘fucking cult’ could also be over-literally interpreted.
I think there are too many superficial similarities for critics, opponents and trolls not to capitalize on them, leaving us in the awkward position of having to explain the difference to these self-styled inquisitors like we’re somehow ashamed of ourselves.
It’s not enough to agree not to call ourselves a cult (and not just because people will, willfully or unwittingly, break this agreement, probably frequently enough to make the policy useless for PR effects).
We need to have an actual plan to deal with it. I say proclaiming ourselves “the best cult in the world” functions as a refuge in audacity; it causes people to stop their inquiry, listen up, and think, because it breaks patterns.
Saying “we’re not a cult, really, stop calling us a cult, you meanies” comes off as a suspicious denial, prompting a “that’s what they all say” response, simply by pattern-matching to how a guilty party would usually behave. To make ourselves above suspicion, we need to behave differently than a guilty party would.
On a tangential note, I found it useful in raising the sanity waterline precisely among the sort of people who’d be suckers for cults, the sort that wouldn’t go to a doctor and would prefer to resort to homeopathy or acupuncture. By presenting EY as my “guru” and using his more mystical-style works (the “Twelve Virtues of a Rationalist”, for example), I managed to get them in contact with the values we preach rather than with the concrete, science-based notions (these guys are under the impression that Science Is Evil and humanity is Doomed to suffer Gaia’s Revenge etc. etc.). With this, I hope to progressively introduce them to an ideology that values stuff that’s actually proven to work, with an idealism and an optimism that is far apart from their Positivist-inspired view of Science as a cold and heartless exploitation machine.
I’m not very confident on that last bit, though, so I suppose I could easily be argued into dropping it.
Saying “we’re not a cult, really, stop calling us a cult, you meanies” comes off as a suspicious denial, prompting a “that’s what they all say” response, simply by pattern-matching to how a guilty party would usually behave. To make ourselves above suspicion, we need to behave differently than a guilty party would.
That’s not what I am saying at all. I am not saying that we should stop people from calling us a cult. I am saying that WE (or maybe YOU) should stop starting threads and conversations about how we are a cult, whether we are a cult, and so on. As I said—if people on LessWrong weren’t asking themselves whether they are in a cult (there is one such thread in this OT, for example), which is ridiculous, and weren’t bringing attention to it all the time, the external cult-calling wouldn’t be so strong either.
And again—to make it clearer: People, please stop bringing up the damn cult stuff all the time. Sure—respond to outsiders when they ask you whether we are a cult, but don’t start such a conversation yourself. And don’t mention the cult bollocks when you are telling people about LessWrong for the first time.
If the problem is the folks among us worrying about us being a cult, not talking about it will only make them worry more. Their concerns should be treated seriously (“Supposing we were a cult, what’s wrong with that?” is indeed a good approach), no matter how stupid they may turn out to be, and they should be reassured with proper arguments, rather than dismissed out of hand. Intimidating outsiders into feeling stupid is, I think, a valid short-term tactic, but when it comes to our folks, we owe it to each other to examine things clearly.
Since the problem seems to pop up spontaneously as well as propagate memetically, I would suggest making an FAQ with all the common concerns, addressing them in a fair and conclusive manner, that will leave their minds at peace. And not in a short-term, fuzzily-reassuring bullshit kind of peace, but a “problem solved, question dissolved, muthahubber” kind of peace.
I assume you haven’t read this and related posts and the countless other discussions on the topic? The topic has been overdiscussed already. My problem is that people keep bringing it up all the time, and people (and search engines) start associating ‘lesswrong’ with ‘cult’.
Well then why don’t you just link people to this every time you see the problem pop up? I certainly will.
the countless other discussions on the topic
Sorry, I’m going to be a freaking pedant here, but this is a bit of a pet peeve of mine. That is a physical impossibility. Please refrain from this kind of hyperbole and use the appropriate adjective; in this case, many. Thank you.
why don’t you just link people to this every time you see the problem pop up?
Tangentially… while encouraging others to provide links to relevant past discussions when a subject comes up is a fine thing, it ought not substitute for encouraging in ourselves the habit of searching for relevant past discussions before bringing a subject up.
Actually, a huge problem I have with LW is the sheer amount of discussions-inside-discussions we have. Especially in the Sequences, there are just too many comments to humanly read. If we could make summaries of the consensus on any specific topic, and keep them updated as discussions progress...
I’m not suggesting reading all the comments everywhere. I agree that there’s a lot of them, and while I think your estimate of human capability here is low, I can certainly sympathize with the lack of desire to bother reading them all.
I am suggesting that Google is your friend. Googling “site:lesswrong.com cult,” for example, is a place to start if you’re actually interested in what people have said on this topic in the past.
As far as publishing updated summaries of LW consensus by topic goes, sure, if someone wanted to do that work they’d be welcome to do so.
You might also find the LW wiki useful, if you decide you’re willing to do some looking around (the link is at the top of the site).
For example, someone has taken the time to maintain a jargon file there, in the hopes of making local jargon more accessible to people. I realize it’s not quite as useful to newcomers as someone explaining the jargon each time, or as everyone restricting themselves to mainstream language all the time, but it might be better than nothing.
Intimidating outsiders into feeling stupid is, I think, a valid short-term tactic, but when it comes to our folks, we owe it to each other to examine things clearly.
On a site like this, how do we tell the difference?
Accumulated karma is usually a good metric. The jargon, and the ideological equipment and epistemological approach, are also important signs to look out for. So is the degree of mean-spiritedness. Subjective is not the same as meaningless.
For my own part I endorse intimidating people who demonstrate mean-spirited behavior into silence (whether by making them feel stupid, if that works, or some other mechanism). Depending on what you mean by “ideological equipment and epistemological approach”, I might endorse the same tactic there as well.
Neither of those endorsements depends much on how long those people have been contributing, or how much karma they’ve accumulated, or what jargon they use.
I endorse intimidating them into silence. I don’t endorse doing something ineffectual or counterproductive with the intention of intimidating them into silence.
One place to start planning is by identifying desired outcomes, and then suggesting actions that might lead to those outcomes. So… what do we expect to have achieved, once we’ve dealt with it?
Another place to start, which is where you seem to be starting, is by arguing the merits of various proposed solutions.
That’s usually not an ideal place to start unless our actual goal is to champion a particular set of actions, and the problem is being identified primarily in order to justify those actions. But, OK, if we’re going down that path… you’ve identified two possible solutions:
Proclaiming ourselves “the best cult in the world”
Saying “we’re not a cult, really, stop calling us a cult, you meanies”
And you’ve argued, compellingly, that #2 is a bad plan, with which I agree completely.
I will toss another contender into the mix:
Asking “assuming we are, so what?” and going on about our business.
Yes, well, the desired outcome of both a criminal and an innocent when facing an investigation is to be found innocent. That they both share this trait is irrelevant to their guilt. An innocent certainly shouldn’t start worrying about maybe being guilty just because he doesn’t want to be found guilty, that’s just stupid.
There is secret knowledge that you pay for (ai-box)
Members do some kooky things (cryonics, polyamory)
Members claim “rationality” has helped them lose weight or sleep better—subjective things without controls—rather than something more measurable and where a mechanism is more obvious.
At least one thing is not supposed to be discussed in public (banned memetic hazard). LW members seem significantly kookier when talking about this (and in the original deleted thread) than on more public subjects.
Members have a lot of jargon. It can seem like they’re speaking their own language. More, there’s a bunch of literature embedded in the organization’s worldview; publicly this is treated as simple fiction, but internally it’s clearly taken more seriously.
Although there’s no explicit severing advice, LW encourages (in practice if not in theory) members to act in ways that reduce their out-of-group friendships
The hierarchy is opaque; it feels like there is a clique of high-level users, but this is not public.
Wouldn’t you expect that if the cause actually made sense though? (and not only if this is a cult)
There is secret knowledge that you pay for (ai-box)
Less than 0.01% of the users have played an ai-box game (to my knowledge) and even fewer have played it for money.
Members do some kooky things (cryonics, polyamory)
Again, a fairly small subset for the first thing, slightly larger for the second, but I guess I will give you that one.
Members claim “rationality” has helped them lose weight or sleep better—subjective things without controls—rather than something more measurable and where a mechanism is more obvious.
Probably a tiny subset of users claim that—I personally have never seen anyone claim that rationality helped them sleep better, and if you mean that evidence-based reasoning helped them find an intervention designed to increase sleep quality, you are grasping at straws.
At least one thing is not supposed to be discussed in public (banned memetic hazard). LW members seem significantly kookier when talking about this (and in the original deleted thread) than on more public subjects.
We are not supposed to write out the actual basilisk (there is only one) on lesswrong.com. There is no problem with talking about it in public, and again, this affects a tiny portion of users.
Members have a lot of jargon. It can seem like they’re speaking their own language. More, there’s a bunch of literature embedded in the organization’s worldview; publicly this is treated as simple fiction, but internally it’s clearly taken more seriously.
Giving you this one as well.
Although there’s no explicit severing advice, LW encourages (in practice if not in theory) members to act in ways that reduce their out-of-group friendships
Bullshit.
The hierarchy is opaque; it feels like there is a clique of high-level users, but this is not public.
There are just respected users and no clear-cut hierarchy—that’s what happens at most places. For a proxy of who is a high-level user, look at the ‘Top Contributors’.
This sort of point-by-point refutation is the same sort of thing that would happen in a church that was trying to defend against allegations of cultyness.
I don’t think lmm’s list of reasons was utterly compelling—good, but not utterly compelling—but I don’t think it would matter if it were a perfect list, because there will always be a defense for accusations of cultyness that satisfies the church/forum.
It is more interesting watching it happen here vs. the church IMO because LW is all about rationality, whereas the church can always push the “faith” button when they are backed into a logical corner.
At the end of the day, it is just an online forum. But it does sound to me (based on what I can gather from perusing) like there are a group of people here who take this stuff seriously enough so as to make cultyness possible.
I’m sure the “LW/cryonics/transhumanism/basilisk stuff is so similar to how religion works” bit got old a long time ago, but Dear Lord is it apparent and fascinating to me.
This sort of point-by-point refutation is the same sort of thing that would happen in a church that was trying to defend against allegations of cultyness.
Ah, the perennial dilemma of how to respond to an accusation of cultiness. If one bothers to rebut it: that’s exactly what a cult would do! If one doesn’t rebut it: oh, the accusation must be unanswerable and hence true!
I completely understand. And I know mine is pretty cheap reasoning. But it just reminds me of what happens in a similar situation in the church. Feel free to ignore it. As I said, I’m confident it has probably been played out by now. I’m satisfied just to watch in awe.
Wouldn’t you expect that if the cause actually made sense though? (and not only if this is a cult)
Given how much LWers seem to care about effective charity, I’d expect more scrutiny, and a stronger insistence on measurable outcomes. I guess you’re right though; the money isn’t inherently a problem.
Less than 0.01% of the users have played an ai-box game (to my knowledge) and even fewer have played it for money.
It seems like a defining characteristic; it’s one place where the site clearly differs from more “mainstream” AI research (though this may be a distorted perception since it was how I first heard of LW)
if you mean that evidence-based reasoning helped them find an intervention designed to increase sleep quality, you are grasping at straws.
Shrug. It looks dodgy to me. It pattern-matches with e.g. the unverifiable stories people tell of their personal experience of Jesus.
We are not supposed to write out the actual basilisk (there is only one) on lesswrong.com. There is no problem with talking about it in public
That’s not at all clear. I’ve never seen any explicit rules. I’ve seen articles that carefully avoid saying the name.
There are just respected users and no clear-cut hierarchy—that’s what happens at most places.
Even on internet forums there’s usually an explicit distinction between mod and not, and often layers to it. (The one exception I know is HN, and even there people know who pg is, who’s part of YC and who’s not, and stories are presented differently if they’re coming from YC members). And it’s unusual and suspicious for the high-ups to all be on first name terms with each other. It raises questions over objectivity, oversight, conflict resolution.
It seems like a defining characteristic; it’s one place where the site clearly differs from more “mainstream” AI research (though this may be a distorted perception since it was how I first heard of LW)
It’s not. Your view is definitely distorted.
Shrug. It looks dodgy to me. It pattern-matches with e.g. the unverifiable stories people tell of their personal experience of Jesus.
..
That’s not at all clear. I’ve never seen any explicit rules. I’ve seen articles that carefully avoid saying the name.
Look around then? Eliezer has even made a Reddit thread for things like that where the basilisk is freely discussed.
Even on internet forums there’s usually an explicit distinction between mod and not, and often layers to it. (The one exception I know is HN, and even there people know who pg is, who’s part of YC and who’s not, and stories are presented differently if they’re coming from YC members). And it’s unusual and suspicious for the high-ups to all be on first name terms with each other. It raises questions over objectivity, oversight, conflict resolution.
Yeah, and people here know who Eliezer Yudkowsky is and who is part of MIRI, which is LW’s parent organization.
Look around then? Eliezer has even made a Reddit thread for things like that where the basilisk is freely discussed.
I’m not active on reddit. Most forums have a link to the rules right next to the comment box; this one does not. There clearly is a chilling effect going on, because I’ve seen posts that make carefully oblique references to memetic hazards rather than just saying “don’t post the basilisk in the comments please”.
Yeah, and people here know who Eliezer Yudkowsky is and who is part of MIRI, which is LW’s parent organization
I have no idea who’s part of MIRI and which posts are or aren’t from MIRI, because we don’t do the equivalent of (YC 09) on stories here. (And HN was explicitly the worst other example I know; they could certainly stand to improve their transparency a lot).
And it’s unusual and suspicious for the high-ups to all be on first name terms with each other.
By “first-name terms with each other”, do you mean something more than the literal meaning of “familiar with someone, such that one can address that person by his or her first name”? Because in my experience, treating other users on a first name basis is the default for all users on many Internet forums, LW included.
I meant “talk about each other as if they’re close personal friends”. (Myself I generally try to avoid using first names for people who aren’t such, but I appreciate that that’s probably a cultural difference).
And it’s unusual and suspicious for the high-ups to all be on first name terms with each other. It raises questions over objectivity, oversight, conflict resolution.
I think this is more due to the number of people who have their real name as their LessWrong username than any sinister cabal.
Don’t forget that much of the inner circle actually draws a paycheck from the organizations members are encouraged to donate to, and supposedly a fairly large one at that, and that the discussion of how much to donate is framed in terms of averting the destruction of the human race.
That and the polyamory commune are the two sketchiest things IMO, since it shows that the inner circle is materially and directly benefiting from the “altruism” / “rationality” of lower ranking members.
This is a good website, mostly good people on it, but there’s also an impression that there are questionable dealings going on behind the scenes.
much of the inner circle actually draws a paycheck from the organizations members are encouraged to donate to
Would LW be improved if paid employees/consultants of those organizations were barred from membership? (I do realize there are other ways to address this concern, and I don’t mean to suggest otherwise… for example, we could re-institute the moratorium on discussing those organizations, or a limited moratorium on fundraising activities here, or various other things. I’m just curious about your opinion about that specific solution.)
That and the polyamory commune are the two sketchiest things IMO
I get a kick out of this, because my social circle is largely polyamorous but would mostly consider LW a charming bunch of complete nutjobs on other grounds. Polyamory really isn’t all that uncommon in communities anchored around high-tech development/elite tech schools, IME.
Would LW be improved if paid employees/consultants of those organizations were barred from membership?
No, I suspect it would die. Actionable suggestion though: some kind of “badge” / note (along the lines of the “mod” or “author” tag you get in Disqus, for example). When I worked for a company with user forums it was policy that all staff had badges on their avatars and you should not post from a non-staff account while employed by the company.
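(To make that suggestion concrete: below is a minimal sketch of what such a badge rule might look like, assuming a hypothetical forum that renders display names from a staff-affiliation table. All names and the data model are made up for illustration; this describes no existing LW feature.)

```python
# Hypothetical sketch of the "badge" suggestion: staff accounts get a visible
# affiliation tag wherever their username is rendered. All names are made up.
STAFF_AFFILIATIONS = {
    "alice": "MIRI",  # hypothetical paid staff account
    "bob": "CFAR",    # hypothetical paid staff account
}

def render_username(username: str) -> str:
    """Return the display name, with a badge appended if the account is staff."""
    affiliation = STAFF_AFFILIATIONS.get(username)
    return f"{username} [{affiliation} staff]" if affiliation else username

print(render_username("alice"))  # alice [MIRI staff]
print(render_username("carol"))  # carol
```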
I get a kick out of this, because my social circle is largely polyamorous but would mostly consider LW a charming bunch of complete nutjobs on other grounds. Polyamory really isn’t all that uncommon in communities anchored around high-tech development/elite tech schools, IME.
That is very surprising, very different from my experiences. My social circle comes mainly from the University of Cambridge and the London tech scene; I am friends with several furries and more transsexuals, and have had dinner with the maintainer of the BDSM FAQ. Polyamory still seems fucking weird (not offensive or evil, but naive and pretentious, and correlated with what I can probably best summarize as the stoner stereotype). I’ve met a couple of openly poly people and they were perfectly friendly and had the best of intentions but I wouldn’t trust them to organize my closet, let alone the survival of humanity.
I originally took Moss_Piglet to imply that the leadership were recruiting groupies from LW to have sex with. That I would find very disturbing and offputting. Assuming that’s not an issue, I think the weirdness of the polyamory would be amply countered by evidence of general competence / life success.
Polyamory still seems fucking weird (not offensive or evil, but naive and pretentious, and correlated with what I can probably best summarize as the stoner stereotype). I’ve met a couple of openly poly people and they were perfectly friendly and had the best of intentions but I wouldn’t trust them to organize my closet, let alone the survival of humanity.
Wth? You seem very prejudiced. What makes people who have multiple relationships this much less trustworthy than ‘normal’ people to you (that you wouldn’t even trust them to organize your closet)?
This whole subject is about prejudice; we judge organizations for their cultlike characteristics without investigating them in detail.
What makes people who have multiple relationships this much less trustworthy than ‘normal’ people to you (that you wouldn’t even trust them to organize your closet)?
I think I was unclear: the implication runs in the opposite direction. All the specific poly individuals I’ve met are people I wouldn’t trust to organize my closet, in terms of general life competence (e.g. ability to hold down a job, finish projects they start, keep promises to friends). As a result, when I find out that someone is poly, I consider this evidence (in the Bayesian sense) that they’re incompetent, the same way I would adjust my estimate of someone’s competence if I discovered that they had particular academic qualifications or used a particular drug. (Obviously all these factors are screened off if I have direct evidence about their actual competence level).
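(For concreteness, here is a minimal sketch of the kind of update being described, with entirely made-up illustrative probabilities; the point is only the direction of the shift, not the specific numbers.)

```python
# Bayes' rule with hypothetical numbers: observing a trait that is more common
# among people one considers incompetent lowers the competence estimate.
p_competent = 0.5             # prior P(competent)
p_trait_if_competent = 0.1    # P(trait | competent), made up
p_trait_if_incompetent = 0.3  # P(trait | incompetent), made up

p_trait = (p_trait_if_competent * p_competent
           + p_trait_if_incompetent * (1 - p_competent))
posterior = p_trait_if_competent * p_competent / p_trait
print(f"P(competent | trait) = {posterior:.2f}")  # 0.25, down from the 0.5 prior

# "Screening off": given direct evidence of actual competence, the trait adds
# nothing further, so the estimate conditions on the direct evidence alone.
```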
Well, I would certainly agree with this. The poly folk I know in their 40s are as a class more successful in their relationships than the poly folk I know in their 20s.
Of course, this is true of the mono folk I know, also.
Which is what I would expect if experience having relationships increased one’s ability to do so successfully.
That is very surprising, very different from my experiences.
It also agrees with the experience from my real-life social circles, which are dominated by university people from the Helsinki region. Poly seems to be treated as roughly the same as homosexuality: unusual but not particularly noteworthy one way or the other. Treatments in the popular press (articles about some particular poly triad who’s agreed to be interviewed, etc.) are also generally positive.
(nods) Fair enough. My experience is limited to the U.S. in this context. I agree that “recruiting groupies to have sex with,” if I understand what you mean by the phrase, would be disturbing and offputting. Moss_Piglet appears to believe that polyamory is in and of itself a bad sign, independent of whether any groupies are being recruited. Beyond that, I decline to speculate.
I’m just curious about your opinion about that specific solution.
The ethics of conflicts of interest in the media are pretty well-trodden ground; you could adapt any journalistic ethics document into a comprehensive do’s and don’ts list. Do state your financial interests clearly and publicly, don’t mix advertising and information content, etc.
It’s not foolproof and bias is universal, but a conflict of interest on the scale of one’s career and sex life is something a rationalist ought to be at least a little concerned about.
I get a kick out of this, because my social circle is largely polyamorous but would mostly consider LW a charming bunch of complete nutjobs on other grounds. Polyamory really isn’t all that uncommon in communities anchored around high-tech development/elite tech schools, IME.
That may well be true, but it makes little difference. The collapse of sexual mores is directly, IMO causally, linked with decreased stability and fertility; letting it take hold in the highest IQ sections of the population is a recipe for disaster. Engaging in that sort of behavior is strong evidence against a person’s character as a leader.
Engaging in that sort of behavior is strong evidence against a person’s character as a leader.
That’s a silly assertion. First because it’s obviously only true from the point of view of a specific morality. And second, because it ignores the empirical fact that, in full compliance with biology, powerful male leaders usually bed many women.
Do state your financial interests clearly and publicly, don’t mix advertising and information content, etc.
Actually, that sounds like a great idea. I would absolutely endorse a policy along those lines.
The collapse of sexual mores is directly, IMO causally, linked with decreased stability and fertility; letting it take hold in the highest IQ sections of the population is a recipe for disaster. Engaging in that sort of behavior is strong evidence against a person’s character as a leader.
Ah, I see.
Sure, if I believed that failing to conform to previously established sexual mores led to decreased stability, I would no doubt agree that the poly aspects of LW and its various parent organizations were troubling.
Similar things are true of my college living group, the theater group I perform with, etc.
Come to that, I suppose similar things are true of my state government, which accepts my marriage to my husband as a family despite this being a clear violation of pre-existing sexual mores, and of large chunks of my country, which accepts marriages between people of different skin colors as families despite this being a clear violation of the sexual mores of a few decades ago.
I’d prefer not to completely derail this into a political/moral argument, since the issue at hand is a bit more relevant. Since you are, as you admit, fairly invested in your position there is little purpose to debate on this point except to distract from the main issue.
Sure; I have no interest in debating with you whether polyamory, homosexuality, miscegenation and other violations of sexual mores really are harmful. As you say, it would be a distraction.
My point was that if they were, it would make sense to object to an organization on the grounds that its leadership approves of/engages in harmful sexual-more-violating activities. What would not make sense is to differentially object to that organization on those grounds.
That is, accepting/endorsing/engaging sexual-more-violating activities might make LW’s parent organizations dangerous, but it doesn’t make them any more dangerous than many far-more-mainstream organizations that do the same things (like my college living group, the theater group I perform with, etc).
If this one of the two sketchiest things about the organization, as you suggest, it sounds to me like the organization is pretty darned close to the mainstream, which seems rather opposed to the point lmm was trying to make, and which I originally inferred you were aligning with.
Cool… this sort of thing is far more actionable than “seeming like a cult.”
So, next question. Taking you as representative of the group (which is of course not necessarily true, but we start from where we are)… what is your sense of where each of these falls on the spectrum between “this is legitimately worrying; in order to be less at risk for actual bad consequences LW should actually change so as not to do this” on the one hand, and “this is merely superficially worrying; there are probably no real risks here and LW should merely take steps to reassure worriers not to worry about it”?
I’m legitimately worried about the money and the incentives it creates. What would a self-interested agent (LW seems to use “agent” in exactly the opposite sense to what I’d expect it to mean, but I hope I’m clear) in the position of the LW leadership do? My cynical view is: write some papers about how the problems they need to solve are really hard; write enough papers each year to appear to be making progress, and live lives of luxury. So what’s stopping them? People in charities that provide far more fuzzies than LW have become disenchanted. People far dumber than Yudkowsky have found rationalizations to live well for themselves on the dime of the charity they run. Corrupt priests of every generation have professed as much faith that performing their actual mission would result in very high future utility, while in fact neglecting those duties for earthly pleasures.
Even if none of the leadership are blowing funds on crack and hookers, if they’re all just living ascetically and writing papers, that’s actually the same failure mode if they’re not being effective at preventing UFAI. When founding the first police force, one of Peel’s key principles was that the only way they could be evaluated was the prevalence of crime—not how much work the police were seen to be doing, not how good the public felt about their efforts. It’s very hard to find a similar standard with which to hold LW to account.
It occurs to me as I write that I have no idea what the LW funding structure is—whether the site is funded by CFAR, MIRI, SIAI, or something else. Even having all these distinct bodies with mostly the same members smells fishy, and seems more likely to be politics than practicalities.
The kookiness… if LW were really more rational than others, I’d expect them to do some weird-but-not-harmful-to-others things. So I suspect this is more a perception than reality thing (Though if there are good answers to “what’s the empirical distinction between real and fake cryonics” and “why do you expect polyamory to turn out better for you lot than it did for the ’60s hippie communes” it’d be nice to see them). IMO the prime counter would be visible effectiveness. A rich person with some weird habits is an eccentric genius; a poor person with weird habits is just a crank.
It would be really nice to have more verifiable results that say LW-style rationality is good for people (or to know that it isn’t and respond accordingly). The failure mode here is that we do a bunch of things that feel good and pat each other on the back and actually it’s all placebos. We actually see a fair few articles here claiming that reading LW is bad for you, or that rationality doesn’t make people happier. On thinking it through this would be the kind of cult that’s basically harmless, so I’m not too concerned. On the perception side, IMO discussing health is not worth the damage it does to the way the community is seen (the first weight-loss thread I saw caused a palpable drop in my confidence in the site). I’ve no idea how to practically move away from doing so though.
Secrets and bans rub me very strongly the wrong way, and seem likely to damage our efforts in nonobvious ways (to put it another way, secretive organizations tend to become ineffective at their original aims, and I’m worried about this failure mode). I certainly don’t think the ban on the basilisk is effective at its purported aim, given that it’s still talked about on the internet. And just having this kind of deception around immediately sets off a whole chain of other doubts—what if it’s banned for other reasons? What else is banned?
If there really is a need for these bans, there should be a clear set of rules and some kind of review. That would certainly address the perception, and hopefully the actuality too.
I think the use of fictional evidence is actually dangerous. Given the apparently high value of LW-memetic fiction in recruiting, I don’t know where the balance is. I think overuse of jargon is just a perceptual problem (though probably worth addressing).
I have… unusual views on diversity, so I don’t think setting people against their less-rational friends is an actual problem (in the sense of being damaging to the organization’s aims); I file this as a perceptual problem. The most obvious counter I can think of is more politeness about common popular misbeliefs, and less condescension when correcting each other. But I suspect these are problems inherent to internet fora (which doesn’t mean they’re not real; I would suggest that e.g. reddit has a (minor) cultish aspect to it, one that’s offputting to participation. But there may not be any counter).
The hierarchy: in the short term it’s merely annoying, but long-term I worry about committee politics. If some of the higher-ups fell out in private (and given that several of them appear to be dating each other that seems likely) and began sniping at each other in the course of their duties, and catching innocent users in the crossfire… I’ve seen that happen in similar organizations and be very damaging. Actual concern.
So in summary, actual concern: where the money goes, any secrets the organization keeps, clarity of the leadership hierarchy, overuse of fiction. Superficial issues: overuse of jargon. The rest of my list is on reflection probably not worth worrying about.
MIRI and SIAI are the same organization: SIAI is MIRI’s old name, now no longer used because people kept confusing the Singularity Institute and Singularity University.
(AFAIK, LW has traditionally been funded by MIRI, but I’m not sure how the MIRI/CFAR split has affected this.)
“why do you expect polyamory to turn out better for you lot than it did for the ’60s hippie communes”
One might as well ask “why do you expect monogamy to turn out better than it did for all the people who have gone through a series of dysfunctional relationships”. Being in any kind of relationship is difficult, and some relationships will always be unsuccessful. Furthermore, just as there are many kinds of monogamous relationships—from the straight lovers who have been together since high school, to the remarried gay couple with a 20-year age difference, to the couple in an arranged marriage who’ve gradually grown to love each other and who practice BDSM—there are many kinds of polyamorous relationships, and the failure modes of one may or may not be relevant for another.
If you specify “the kinds of relationships practiced in the hippie communes of the sixties”, you’re not just specifying a polyamorous relationship style, you’re also specifying a large list of other cultural norms—just as saying “conventional marriages in the 1950s United States” singles out a much narrower set of relationship behaviors than just “monogamy”, and “conventional marriages among middle class white people in the 1950s United States” even more so.
And we haven’t even said anything about the personalities of the people in question—the kinds of people who end up moving to hippie communes are likely to represent a very particular set of personality types, each of which may make them more susceptible to some kinds of problems and more resistant to others. Other poly people may or may not represent the same personality types, so their relationships may or may not involve the same kinds of problems.
Answering your original question would require detailed knowledge about such communes, while most poly people are more focused on solving the kinds of relationship problems that pop up in their own lives.
You’re right, I overextended myself in what I wrote. What I meant was: I’m aware of long-term successful communities practicing monogamy, and long-term somewhat successful communities practicing limited polygyny—i.e. cases where we can reasonably conclude that the overall utility is positive. I’m not aware of long-term successful communities practicing other forms such as full polyamory (which may well be my own ignorance).
The fact that a small group of bay-area twentysomethings has been successfully practicing polyamory for a few years does not convince me that the overall utility of polyamory is positive. That’s because with ’60s hippie communes my understanding is that a small group of bay-area twentysomethings were successfully practicing polyamory for a few years, but eventually various “black swan”-type events (by which I mean events analogous to stock market crashes, but for utility rather than economic value) occurred, and it turns out the overall utility of those communes was negative despite the positive early years. If today’s polyamorists want to convince me that “this time is different” they would have some work to do.
(I’m not an expert on the history. It’s entirely possible I’m simply wrong, in which case I’d appreciate pointers to well-written popular accounts that are more accurate than the ones I’m currently basing my judgement on).
It still sounds like you’re talking about poly as if it was a coherent whole, when it’s really lots and lots of different things, some with a longer history than others. Take a look at this map, for instance—and note that many of the non-monogamous behaviors may overlap with ostensibly monogamous practices. E.g. this article, written by a sexuality counselor (EDIT: removed questionable prevalence figure), basically says that swinging works for some couples and doesn’t work for others. Similarly, for everything else in that map, you could find reports from different people (either contemporary people or historical figures) who’ve done something like it, with it having been a good idea for some, and a bad idea for others.
I guess the main thing that puzzles me about your comments is that you seem to be asking for some general argument for why (some specific form of) polyamory could be expected to work for everyone. Whereas my inclination would be to say “well, if you describe to me the people in question and the specific form of relationship arrangement they’re trying out, I guess I could try to hazard a guess of whether that arrangement might work for them”, without any claim of whether that statement would generalize for anyone else in any other situation. For example, in Is Polyamory a Choice?, Michael Carey writes:
Meanwhile, there are some people whose innate personality traits make it very difficult to live happily in a monogamous relationship but relatively easy to be happy in an open one. [...] But there are almost certainly also some “obligate” non-monogamists who would never feel emotionally satisfied and healthy in a monogamous relationship, any more than a gay man would be satisfied and healthy in a straight marriage. [...] My experience suggests that perhaps half to two-thirds of polyamorists—those who want to be able to fully embrace multiple loving relationships, with sex as merely part of that (albeit an important part, just as it is in monogamous relationships)—are “obligate poly.” I’ve heard a lot of stories from people about having a few miserable monogamous relationships before they were introduced to the concept of honest, consensual non-monogamy.
I’ve also personally run into cases of “naturally poly” people, who couldn’t prevent themselves from falling in love with multiple people at once, and who were utterly miserable if they had to kill those emotions: if they wanted to stay monogamous, they would have been forced to practically stop having any close friendships with people of the sexes that they were attracted to. For those people, it seems obvious that some kind of non-monogamous arrangement is the only kind of relationship in which they can ever feel really happy. (I don’t need to find an example of a visible community that has successfully practiced large-scale polyamory in order to realize that this kind of a person would be miserable in a monogamous arrangement.) At the same time, I also know of people who are not only utterly incapable of loving more than one person, but also quite jealous: for those people, it seems obvious that a monogamous relationship is the right one.
Then there are people who are neither clearly poly nor clearly mono (I currently count myself as belonging to this category). For them the best choice requires some experimentation and probably also depends on the circumstances, e.g. if they fall in love with a clearly poly person, then a poly relationship might work best, but so might a mono relationship if they fell in love with someone who was very monogamous.
Then there are people who don’t necessarily experience romantic attraction to others, but also don’t experience much sexual jealousy and feel like having sex with others would just spice up the relationship they have with their primary partner: they might want to try out e.g. swinging. And so on.
E.g. this article, written by a sexuality counselor, claims that some experts believe that there could be as many as 15 million Americans swinging on a regular basis, and basically says that swinging works for some couples and doesn’t work for others.
I don’t believe that for a second and you should apply a little more critical thought to these numbers. What experts? What are they basing this on? Searching for this, I find nothing but echo chambers of media articles—“experts say”, “some experts think”, etc. Is 15 million remotely plausible? There are ~232m adults in the US, half are married, so 15m swingers would imply that 13% of marriages are open.
Slightly better are ‘estimates’ (or was it ‘a study’?) attributed to the Kinsey Institute of 2-4% of married couples being swingers, but that’s also quoted as ‘2-4m’ (a bit different) and one commenter even quotes it as 2-4% being ‘the BDSM and swing communities’, which reduces the size even more. All irrelevant, since I am unable to track down any study or official statement from Kinsey so I can’t even look at the methodology or assumptions they made to get those supposed numbers.
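(For what it’s worth, the back-of-envelope arithmetic above does check out; a quick sketch, under the assumption that the claimed 15m swingers are all married and swing as couples.)

```python
# Sanity check of the 15m-swingers claim against the comment's own figures.
adults = 232e6                  # ~232m US adults
married_adults = adults / 2     # half are married
marriages = married_adults / 2  # two spouses per marriage -> ~58m marriages
swinging_couples = 15e6 / 2     # 15m swingers, assumed to come in married pairs
print(f"{swinging_couples / marriages:.0%} of marriages open")  # ~13%
```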
Fair point—the exact number wasn’t very important for my argument (I believe it would still carry even with the 2-4% or 2-4m figure), so I just grabbed the first figure I found. It passed my initial sanity check because I interpreted the “couples” to include “non-married couples”, and misremembered the US population to be 500 million rather than 300 million. (~4% of the adult population being swingers didn’t sound too unreasonable.)
I guess the main thing that puzzles me about your comments is that you seem to be asking for some general argument for why (some specific form of) polyamory could be expected to work for everyone. Whereas my inclination would be to say “well, if you describe to me the people in question and the specific form of relationship arrangement they’re trying out, I guess I could try to hazard a guess of whether that arrangement might work for them”, without any claim of whether that statement would generalize for anyone else in any other situation.
I think the argument goes through the same way though. I understand your position to be: a culture in which a variety of different relationships are accepted and respected (among both leaders and ordinary citizens), and LW-style polyamory is one of those many varieties, can be stable, productive, and generally successful and high-utility. The question remains: why don’t we see historical examples of such societies? While you’re right that even nominally monogamous societies usually tolerate some greater or lesser degree of non-monogamous behavior, the kind of polyamory practiced by some prominent LW members is, I think, without such precedent, and would be condemned in all historically successful societies (including, I think, contemporary American society; while we don’t imprison people for it, I don’t think we’d elect a leader who openly engaged in such relationships, for example).
Yes, there’s a small self-selecting bay area community which operates in the way you describe. But I don’t think that community has (yet) demonstrated itself to be successful; other communities have achieved the same level of success that they currently enjoy, and then undergone dramatic collapse shortly after.
Well, one possibility would be that a polyamorous inclination is simply rare. In that case, we wouldn’t expect any society to adopt large-scale polyamorous practices for the same reason why we wouldn’t expect any society to adopt e.g. large-scale asexual or homosexual practices.
But then there’s also the issue that most societies have traditionally been patriarchal, with strict restrictions on women’s sexuality in general (partially due to early contraception being unreliable and pregnancies dangerous). If you assumed that polyamory could work, but that most societies in history wouldn’t want to give women the same kind of sexual freedom as men, then that would suggest that we could expect to see lots of polygamous societies… which does seem to be the case.
But I don’t think that community has (yet) demonstrated itself to be successful; other communities have achieved the same level of success that they currently enjoy, and then undergone dramatic collapse shortly after.
What counts as success, anyway? Does a relationship have to last for life in order to be successful? I wouldn’t count e.g. a happy relationship of five years to be a failure, if it produces five years of happiness for everyone involved.
Well, one possibility would be that a polyamorous inclination is simply rare. In that case, we wouldn’t expect any society to adopt large-scale polyamorous practices for the same reason why we wouldn’t expect any society to adopt e.g. large-scale asexual or homosexual practices.
I don’t think I would necessarily expect that.
Of course, it depends a lot on what we mean by “large scale”. A society with a 5% Buddhist minority can still have large-scale Buddhist practice (e.g., millions of people practicing Buddhism in a public and generally accepted way). I would say some U.S. states are visibly adopting homosexual practices (e.g. same-sex marriage) on a large scale, despite it being very much a minority option, for example.
In the absence of any other social forces pushing people towards nominally exclusive monogamy/monoandry (and heterosexuality, come to that) I would expect something like that kind of heterogeneity for natural inclinations.
Of course, in the real world those social forces exist and are powerful, so my expectations change accordingly.
But then there’s also the issue that most societies have traditionally been patriarchal, with strict restrictions on women’s sexuality in general (partially due to early contraception being unreliable and pregnancies dangerous). If you assumed that polyamory could work, but that most societies in history wouldn’t want to give women the same kind of sexual freedom as men, then that would suggest that we could expect to see lots of polygamous societies… which does seem to be the case.
I think you’re putting the cart before the horse there. If patriarchy is a near-human-universal, doesn’t that suggest there’s a good reason for it?
What counts as success, anyway? Does a relationship have to last for life in order to be successful? I wouldn’t count e.g. a happy relationship of five years to be a failure, if it produces five years of happiness for everyone involved.
My impression is that the downsides of breakup dominate the overall utility compared to the marginal increase from having a better relationship. Particularly in the presence of children.
My impression is that the downsides of breakup dominate the overall utility compared to the marginal increase from having a better relationship.
My impression is the reverse. Breakups tend to be sharply painful, but the wounds heal in a matter of months or at most a few years. But if you’re unwilling to consider breakups, being in a miserable relationship is for the rest of your life.
If patriarchy is a near-human-universal, doesn’t that suggest there’s a good reason for it?
Sure—it was probably a natural adaptation to the level of contraception, healthcare, and overall wealth available at the time. Doesn’t mean it would be a good idea anymore.
And if you wish to reinstate patriarchy, then singling out polyamory as a suspicious modern practice seems rather arbitrary. There’s a lot of bigger stuff that you’d want to consider changing, like whether women are allowed to vote… or, if we wish to stay on the personal level, you’d want to question any relationships in which both sexes were considered equal in the first place.
My impression is that the downsides of breakup dominate the overall utility compared to the marginal increase from having a better relationship. Particularly in the presence of children.
That sounds unlikely in the general case (though there are definitely some spectacularly messy break-ups where that is true), but of course it depends on your utility function.
or, if we wish to stay on the personal level, you’d want to question any relationships in which both sexes were considered equal in the first place.
I think that happens; it’s hard to imagine e.g. a president with anything other than a traditional family (were/are the Clintons equals? More so than those before them, but in public at least Hillary conformed to the traditional “supportive wife” role (in a way that I think contrasts with Bill’s position for the 2008 primaries)). To a certain extent LW is always going to seem cultish if our leaders’ relationships are at odds with the traditional forms for such. And I don’t think that’s irrational: in cases where failures are rare but highly damaging, it makes sense to accord more weight to tradition than we normally do.
(on the voting analogy: I’d be very cautious about adopting any change to our political system that had no historical precedent and seemed like it might increase our odds of going to war, even if it had been tried and shown to be better in a few years of day-to-day use. I don’t think that’s an argument against women having the vote (they’re stereotypically less warlike—although it has been argued that the Falklands War happened because Thatcher felt the need to prove herself and wouldn’t’ve occurred under a male PM), but it is certainly an argument for not extending the vote to non-landowners and under-21s. In as much as war has declined since the vote was extended to non-landowners and under-21s—which is actually, now that I think about it, really quite surprising—I guess that’s evidence against this position)
actual concern: where the money goes, any secrets the organization keeps, clarity of the leadership hierarchy,
Sure, I think these are legitimate.
Actually, I think this periodic “cult” angst distracts attention from a far more interesting question: “do contributions to MIRI actually accomplish anything worthwhile?” (Of which “Is MIRI a scam?” is a narrow subset, and not the most interesting subset at that, though still far more interesting than “Is LW a cult?”.)
Admittedly, I have the same problem with most non-profit organizations, but I agree that financial and organizational transparency is important across the board. (That said, I have no idea how MIRI’s transparency compares to, say, the National Stroke Foundation’s, or the Democratic National Committee’s.)
actual concern: [...] overuse of fiction
I think the use of fictional evidence is actually dangerous.
Hm.
So, just to clarify our terms a little… there’s a difference between fictional evidence on the one hand (that is, treating depictions of fictional events as though they were actual observations of the depicted events), and creating fictional examples to clearly and compellingly articulate a theory or call attention to an idea on the other.
The latter, in and of itself, I think is harmless. Do you disagree? The former is dangerous, in that it can lead to false beliefs, and I suppose I agree that any use of fiction (including reading it for fun) increases my chances of unintentionally using it as evidence.
So I guess the question is, are we using fiction as evidence? Or are we using it as a shortcut to convey complicated ideas? And if it’s a bit of both, as I expect, then it’s (as you say) a question of where the balance is.
So, OK. You clearly believe the balance here swings too far to “fiction as evidence,” which might be true, and is an important problem if true. What observations convinced you of this?
More tangentially:
if there are good answers to [..] “why do you expect polyamory to turn out better for you lot than it did for the ’60s hippie communes” it’d be nice to see them
Well, ultimately my answer to this is that a lot of my friends are in poly relationships, and it seems to be working for them all right. This is also why I expect same-sex marriages to turn out OK, why I expect marriages between different races to turn out OK, why I expect people remaining single to turn out OK, and so forth.
Am I ignoring the example of ’60s hippie communes? Well, maybe. I don’t know that much about them, really. The vague impression I get is that they were a hell of a lot more coercive than I would endorse.
If MIRI folks are going around getting into coercive relationships (which is pretty common for people generally), I object to that, whether those relationships are poly or mono or kinky or vanilla or whatever. If MIRI folks are differentially getting into coercive relationships (that is, significantly more so than people generally), that’s a serious concern. Are they?
Mostly, my rule of thumb about such things is that it’s important for a relationship to support individuals in that relationship in what they individually want and don’t want. Monogamous people in poly relationships suffer. Polyamorous people in mono relationships suffer. Etc.
Also that it’s important for a community to support individuals in their relationship choices (which includes the choice to get out of a relationship).
So I guess the question is, are we using fiction as evidence? Or are we using it as a shortcut to convey complicated ideas? And if it’s a bit of both, as I expect, then it’s (as you say) a question of where the balance is. So, OK. You clearly believe the balance here swings too far to “fiction as evidence,” which might be true, and is an important problem if true. What observations convinced you of this?
All the other examples of what I might call political fiction—using fiction to convey serious ideas—that I can think of come from organizations/movements I have negative views of. One thinks of Ayn Rand, those people who take Gor seriously (assuming they’re not just an internet joke), the Pilgrim’s Progress. Trying to cast my net wider, I felt the part of The Republic where Plato tells a story about a city to be weak, and I found Brave New World interesting as fiction but actively unhelpful as a point about the merits of utilitarianism. Monopoly is an entertaining board game (well, not a very entertaining one to be honest) but I don’t think it teaches us anything about capitalism.
I’ve been highly frustrated by the use of “parables” here like the dragon of death or the capricious king of poverty; it seems like the writers use them as a rhetorical trick, allowing them to pretend they’ve made a point that they haven’t in fact made, simply because it’s true in their story.
I am genuinely struggling to think of any positive-for-the-political-side examples of this kind of fiction.
Well, ultimately my answer to this is that a lot of my friends are in poly relationships, and it seems to be working for them all right. This is also why I expect same-sex marriages to turn out OK, why I expect marriages between different races to turn out OK, why I expect people remaining single to turn out OK, and so forth.
I guess the main thing here is that if poly relationships worked (beyond limited polygyny in patriarchal societies, which evidently does “work” on some level) I’d expect to see an established tradition of polyamory somewhere in the world, and I don’t (maybe I’m not looking hard enough). Being single clearly works for many people. Gay people have shown an ability to form long-term, stable relationships when they weren’t allowed to marry, so it seems like marriage is likely to work. Widespread interracial marriage simply wasn’t possible before the development of modern transport technology, so there’s no mystery about the lack of historical successes. Is there some technological innovation that makes polyamory more practical now than it was in the past? I guess widespread contraception and cheap antibiotics might be such a thing… hmm, that’s actually a good answer. I shall reconsider.
if poly relationships worked [..] I’d expect to see an established tradition of polyamory... Gay people have shown an ability to form long-term, stable relationships when they weren’t allowed to marry, so it seems like marriage is likely to work
So, when marriage traditionalists argue that the absence of an established tradition of same-sex marriages is evidence that same-sex marriage isn’t likely to work (since if it did, they’d expect to see an established tradition of same-sex marriage), you don’t find that convincing… your position there is that: a) marriage is just a special case of long-term, stable relationship, and b) we do observe an established (if unofficial, and often actively derided) tradition of long-term, stable same-sex relationships among the small fraction of the population who enjoy such relationships, so c) we should expect same-sex marriages to work among that fraction of the population.
Yes?
But by contrast, on your account we don’t observe an established (if unofficial) tradition of long-term, stable multi-adult relationships, so a similar argument does not suffice to justify expecting multi-adult marriages to work among the fraction of the population who enjoy such relationships.
You’ve accurately summarized what I said. I think on reflection point a) is very dubious, so allow me to instead bite your bullet: I don’t yet have a strong level of confidence that same-sex marriages are likely to work. (Which I don’t see as a reason to make them illegal, but might be a reason to e.g. weight them less strongly when considering a couple’s suitability to adopt).
I think we may need to taboo “work” here; if we’re talking about suitability as an organizational leader here then that’s a higher standard than just enjoying one’s own sex life. I would supplement b) with the observation that we observe historical instances of gay people making major contributions to wider society—Turing, Wilde, Britten (British/Irish examples because I’m British/Irish).
But yeah, that’s basically my position. Interested to see where you’re going with this—is there such an “established (if unofficial) tradition of long-term, stable multi-adult relationships” that I’m just ignorant of?
But yeah, that’s basically my position. Interested to see where you’re going with this—is there such an “established (if unofficial) tradition of long-term, stable multi-adult relationships” that I’m just ignorant of?
Um, polygamy? Concubinage? Both have long histories, and show up in cultures that are clearly functional.
Both of them are less gender egalitarian than modern polyamory, and it’s not clear to me that there’s ample real-world evidence of, say, Heinlein’s idea of line marriages working out.
The anthropological record indicates that approximately 85 per cent of human societies have permitted men to have more than one wife (polygynous marriage), and both empirical and evolutionary considerations suggest that large absolute differences in wealth should favour more polygynous marriages. Yet, monogamous marriage has spread across Europe, and more recently across the globe, even as absolute wealth differences have expanded. Here, we develop and explore the hypothesis that the norms and institutions that compose the modern package of monogamous marriage have been favoured by cultural evolution because of their group-beneficial effects—promoting success in inter-group competition. In suppressing intrasexual competition and reducing the size of the pool of unmarried men, normative monogamy reduces crime rates, including rape, murder, assault, robbery and fraud, as well as decreasing personal abuses. By assuaging the competition for younger brides, normative monogamy decreases (i) the spousal age gap, (ii) fertility, and (iii) gender inequality. By shifting male efforts from seeking wives to paternal investment, normative monogamy increases savings, child investment and economic productivity. By increasing the relatedness within households, normative monogamy reduces intra-household conflict, leading to lower rates of child neglect, abuse, accidental death and homicide. These predictions are tested using converging lines of evidence from across the human sciences.
I think it would be useful here to distinguish between what is/was/should be/might be the average and what is the acceptable range of deviation from that average.
A society where most men have one wife but some men have several is different from a society where most men have one wife and having several is illegal and socially unacceptable.
Probably not. I buy the arguments that the incentives generated by monogamy are better than the ones generated by polygamy, across society as a whole. (I am not yet convinced that serial monogamy enabled by permissive divorce laws is better than polygyny, but haven’t investigated the issue seriously.) I meant more to exclude the idea that polygamy is only seen in, say, undeveloped societies.
Where I’m going with this is trying to understand your position, which I think I now do.
My own position, as I stated a while back, is that I base my opinions about the viability of certain kinds of relationships on observing people in such relationships. The historical presence or absence of traditions of those sorts of relationships is also useful data, but not definitively so.
EDIT: I suppose I should add to this that I would be very surprised if there weren’t just as much of a tradition of married couples one or both of whom were nonmonogamous with the knowledge and consent of their spouse as there was a tradition of people having active gay sex lives, and very surprised if some of those people weren’t making “major contributions to wider society” just as some gay people were. But I don’t have examples to point out.
I would be very surprised if there weren’t….a tradition of married couples one or both of whom were nonmonogamous with the knowledge and consent of their spouse
Notoriously, Lady Hamilton and Horatio Nelson had a public affair in the early 1800s, without any objection from her husband, Sir William Hamilton.
Given this fact, I am now surprised by not having previously observed a seemingly endless series of jokes about it playing on the supposed indeterminacy of poly relationships.
Right. As I’ve said, I think relationships tend to have big negative spikes analogous to stock market crashes, so am cautious about judging from samples of a few years.
Even watching twenty-year-old poly relationships, as I sometimes do, isn’t definitive… maybe it takes a few generations to really see the problems. Ditto for same-sex marriages, or couples of different colors, or of different religious traditions… sure, these have longer pedigrees, but the problems may simply not have really manifested yet, but are building up momentum while people like me ignore the signs.
I mention my own position not because I expect it to convince you, but because you were asking me where I was going in a way that suggested to me that you thought I was trying to covertly lead the conversation along to a point where I could demonstrate weaknesses in your position relative to my own, and in fact the questions I was asking you were largely orthogonal to my own position.
All the other examples of what I might call political fiction—using fiction to convey serious ideas—that I can think of come from organizations/movements I have negative views of.
Just to pick a somewhat arbitrary example I was thinking about recently… have you ever read The Ones Who Walk Away from Omelas?
Does it qualify as what you’re calling “political fiction”?
Do you associate it with any particular organizations/movements?
By happy coincidence I also read it recently. I’ve not seen it used to make political/philosophical arguments, so I don’t class it as such. To my mind the ending and indeed the whole story is more ambiguous than the examples I’ve been thinking of; if the intent was to push a particular view then it failed, at least in my case. (By contrast Brave New World probably did influence my view of utilitarianism, despite my best efforts to remain unmoved).
If I saw someone using it to argue for a position I’d probably think less of it or them, and on a purely aesthetic level I found it disappointing.
I guess maybe Permutation City is a positive example; it provides a useful explicit example of some things we want to make philosophical arguments about. Maybe because I felt it wasn’t making a value judgement—it was more like, well, scientific fiction.
Thinking about this some more, and rereading this thread, I’m realizing I’m more confused than I’d thought I was.
Initially, when you introduced the term “political fiction,” you glossed it as “using fiction to convey serious ideas.” Which is similar enough to what I had in mind that I was happy to use that phrase.
But then you say that you don’t class Omelas in this category, because it doesn’t successfully push a particular view. Which suggests that the category you have in mind isn’t just about conveying ideas, but rather about pushing a particular philosophical/political perspective—yes?
But then you say that you do class Permutation City in this category (and I would agree), and you approve of it. This actually surprises me… I would have thought, given what you’d said earlier, that you would object to PC on the grounds that it simply asserts a fictional universe in which various things are true, and could be misused as evidence that those things are in fact true in the real world. I’m glad you aren’t making that argument, as it suggests our views are actually closer than I’d originally thought (I would agree that it’s possible to misuse PC that way, as it is possible to misuse other fictional ideas, but that’s a separate issue).
And you further explain that PC isn’t making a value judgment… but that you still consider it an example of political fiction… which is consistent with your original definition of the term… but I don’t know how to reconcile it with your description of Omelas.
I’m trying to explore this as we go along; it’s very possible I’ve been incoherent.
I don’t class Omelas in any of these categories because I’ve never seen it used in this kind of discussion, at all.
Permutation City was an example I only thought of in the last post. Fundamentally I feel like it belongs in a different cluster from these other examples (including the LW ones); I’m trying to understand why, and I was suggesting that the value judgements might be the difference, but that’s little more than a guess.
I don’t class Omelas in any of these categories because I’ve never seen it used in this kind of discussion, at all.
Well… OK. Let’s change that.
Omelas is a good illustration of the intuitive problems with a straight-up total-utilitarian approach to ethical philosophy. It’s clear that by any coherent metric, the benefits enjoyed as a consequence of that child’s suffering far outweigh the costs of that suffering. Nevertheless, the intuitive judgment of most people is that there’s something very wrong with this picture and that alleviating the child’s suffering would be a good thing.
OK… now that you’ve seen Omelas used in a discussion about utilitarian moral philosophy, does your judgment about the story change?
Hmm. I was not fond of the story in any case, so this use would need to be particularly bad to diminish my opinion of it.
The fundamental lack of realism in the story now seems more important. Where before I was happy to suspend disbelief on the implausibility of a town that worked that way, if we’re going to apply actual philosophy to it, I find myself wanting a more rigorous explanation of how things work—why do people believe that comforting the child would damage the city?
Do I think using the story has made the discussion worse? Maybe; it’s hard to compare to a control in the specific instance. But I think in the general, average case philosophical discussions that use fictional examples turn out less well than those that don’t.
If I thought the use of the story had damaged the discussion, would that make me think less of the story or author? I think my somewhat weaselly (and nonutilitarian) answer is that intent matters here. If I discovered that a story I liked was actually intended allegorically, I think I’d think less of it (and to a certain extent this happened with Animal Farm, which I read naively at an early age and less naively a few years later, and thought less of it the second time). But if someone just happens to use a story I like in a philosophical argument, without there being anything inherent to the story or author that invited this kind of use, I don’t think that would change my opinion.
From my perspective, Omelas does a fine job of doing what I’m claiming fiction is useful for in these sorts of discussions… it makes it easy to refer to a scenario that illustrates a point that would otherwise be far more complicated to even define.
For example, I can, in a conversation about total-utilitarianism, say “Well, so what if anything is wrong with Omelas?” to you, and you can tell me whether you think there’s anything wrong with Omelas and if so what, and we’ve communicated a lot more efficiently than if we hadn’t both read the story.
Similarly, around LW I can refer to something as an Invisible Dragon in the Garage and that clarifies what might otherwise be a hopelessly muddled conversation.
Now, you’re certainly right that while having identified a position is a necessary first step to defending that position, those are two different things, and that sometimes people use fiction to do the former and then act as though they’d done the latter when they haven’t.
This is a mistake.
(Also, since you bring it up, I consider Brave New World far too complicated a story to serve very well in this role… there’s too much going on. Which is a good thing for fiction, and I endorse it utterly, since the primary function of fiction for me is not to provide useful shortcuts in discussions. However, some fiction performs this function and I think that’s a fine thing too.)
I think there is a good deal to be said for the interpretation according to which many of the features of the “ideal” city Socrates describes in Republic are intended to work on the biases of Glaucon and Adeimantus (and those like them) and not intended to actually represent an ideal city. Admittedly, Laws does seem to be intended to describe how Plato thinks a city should be run, so it seems that Plato had some pretty terrible political ideas (at least at the end; Laws is his last work, and I prefer to think his mind was starting to go), but nonetheless it’s not at all safe to assume that all the questionable ideas raised by Socrates in Republic are seriously endorsed by Plato.
I completely agree that Brave New World seems unhelpful in evaluating utilitarianism.
it’s not at all safe to assume that all the questionable ideas raised by Socrates in Republic are seriously endorsed by Plato.
Which is I think the fundamental problem with this kind of political fiction. It allows people to present ideas and implications without committing to them or providing evidence (or alternately, making it clear that this is an opposing view that they are not endorsing). But then at a later stage they go on to treat the things that happened in their fiction as things they’d proven.
I found Brave New World interesting as fiction but actively unhelpful as a point about the merits of utilitarianism.
Brave New World is intended as a critique of utilitarianism: the Fordian society’s willingness to treat people as specialized components may not be as immediately terrifying as 1984, but it’s still intentionally dystopian. My apologies if I’m misreading your statement, but many folk don’t get that from reading the book in a classroom environment.
I guess the main thing here is that if poly relationships worked… I’d expect to see an established tradition of polyamory somewhere in the world, and I don’t (maybe I’m not looking hard enough).
Some subcultures have different expectations of exclusivity, which may be meaningful here even if not true polyamory.
Is there some technological innovation that makes polyamory more practical now than it was in the past? I guess widespread contraception and cheap antibiotics might be such a thing...
Communication availability and different economic situations, as well. The mainstream entrance of women into the work force as self-sustaining individuals is a fairly new thing, and the availability of instant always-on communication even more recent.
EDIT: I agree that there are structural concerns if a sufficient portion of the leadership are both poly and in a connected relationship, but this has to do more with network effects than polyamory. The availability and expenditures of money are likely to trigger the same network effect issues regardless of poly stuff.
Brave New World is intended as a critique of utilitarianism
Yes. I thought it would be clear that I knew that, since I don’t think my statement makes any sense otherwise?
I found Brave New World, in so far as it is taken as an illustration of a philosophical point, actively unhelpful; that is, its existence is detrimental to the quality of philosophical discussions about the merits of utilitarianism (basically for all the usual reasons that fictional evidence is bad; it leads people to assume that a utilitarian society would behave in a certain way, or that certain behaviors would have certain outcomes, simply because that was what happened in Brave New World).
(Independently, I found it interesting and enjoyable as a work of fiction).
My cynical view is: write some papers about how the problems they need to solve are really hard; write enough papers each year to appear to be making progress, and live lives of luxury.
I think this is fine if the papers are good. It’s routine in academic research that somebody says “I’m working on curing cancer.” And then it turns out that they’re really studying one little gene that’s related to some set of cancers. In general, it’s utterly normal in the academy that somebody announces dramatic goal A, and then really works on subproblem D that might ultimately help achieve C, and then B, an important special case of A.
When founding the first police force, one of Peel’s key principles was that the only way they could be evaluated was the prevalence of crime—not how much work the police were seen to be doing, not how good the public felt about their efforts. It’s very hard to find a similar standard with which to hold LW to account.
The standard I would use is “are there a significant number of people who find the papers interesting and useful?” And that’s a standard that I think MIRI is improving significantly on. A large fraction of academics with tenure in top-50 computer science departments aren’t doing work that’s better.
Notice that I wouldn’t use “avoid UFAI danger” as a metric. If the MIRI people are motivated to answer interesting questions about decision theory and coordination between agents-who-can-read-source-code, I think they’re doing worthwhile work.
Notice that I wouldn’t use “avoid UFAI danger” as a metric. If the MIRI people are motivated to answer interesting questions about decision theory and coordination between agents-who-can-read-source-code, I think they’re doing worthwhile work.
Worthwhile? Maybe. But it seems dishonest to collect donations that are purportedly for avoiding UFAI danger if they don’t actually result in avoiding UFAI danger.
We need to have an actual plan to deal with it. I say proclaiming ourselves “the best cult in the world” functions as a refuge in audacity; it causes people to stop their inquiry, listen up, and think, because it breaks patterns.
I think the word for a thing that started calling itself the best cult in the world is ‘religion’.
When people ask me what religion I hail from (as far as I’m concerned, religion or religation is nothing more or less than RED Team VS BLU Team style affiliation, with, in the absence of exterior threats, a tendency to splinter and call heresy on each other), I tell them “secular humanist”. As far as I’m concerned, LW is just a particularly interesting denomination of that faith. “We’re the only religion whose beliefs are wholly grounded in empirical experience, and which, instead of praying for things to get better, goes out and makes them so”.
I am aware of religious denominations which advocate doing good works as a route to personal salvation, but I honestly can’t think of any religious branch I’m aware of which advocates good works on the basis of “For goodness’ sake, look at this place, it’s seriously in need of fixing up.”
So, just to make sure I understand the category you’re describing here… if, for example, an organization like the Unitarian Universalist Association of Congregations asserts as one of its guiding principles “The goal of world community with peace, liberty, and justice for all;” and does not make a statement one way or the other about the salvatory nature of those principles, is that an example of the category?
I guess I’d say that it counts if you’re willing to treat Unitarian Universalism as an actual religious denomination. Whether it counts or not would probably depend on how you identify such things, since it’s missing qualities which one might consider important, such as formal doctrines.
In my experience Unitarian Universalism, at least in its modern form, is mainly a conglomeration of liberal progressive ideals used as an umbrella to unite people with religious beliefs ranging from moralistic therapeutic deism to outright atheism.
All of the Unitarian Universalists I’ve known well enough to ask have also identified themselves as secular humanists, so I certainly wouldn’t regard it as an alternative to secular humanism which carries that value.
Whether it counts or not would probably depend on how you identify such things, since it’s missing qualities which one might consider important, such as formal doctrines.
Atheists tend to identify religions as creeds: your religion is about what you believe. By this way of thinking, a Catholic is any person who believes thus-and-so; a Buddhist is any person who believes this-and-that; a Muslim is any person who believes the-other-thing; and so on. Having correct belief (orthodoxy) is indeed significant to many religious people, but that’s not the same as it being what religion is about.
Protestant Christianity asserts sola fide, the principle that believers are justified (their sins removed) by dint of their faith in Jesus Christ. The first pillar of Islam is the shahadah, a declaration of belief. One of the first questions people ask about any new-to-them religion is, “What do you believe in?”, and discussions of religions often involve clarifying points of belief, such as “Buddhists don’t believe the Buddha is a god.” Examples such as these may lead many atheists to think that religion is about belief.
Another way of looking at religion, though, is that religion is about practice: what you do when you think you’re doing religion. The significant thing that makes your religion is not your religious beliefs, but your religious habits or practices. You can assert the Catechism all day long … but if you don’t go to Mass, pray, take communion, and confess your sins to a priest, you’re not a central example of a Catholic. And the other four pillars of Islam aren’t about belief, but about practice.
And something else to consider is that religion is also about who you do it with — a community. Religion is usually done with some reference to a local community (a church, coven, what-have-you) and a larger community that includes other local groups as well as teachers, leaders, recognized figures. It is this community that teaches and propagates the beliefs. People are considered to be members of the religion by dint of their membership in this community (possibly formally recognized, as by baptism) even if they do not yet know the beliefs they are “supposed to” have. Most Christians have never read the whole Bible, after all.
UU is one group I happen to know well enough to expect that its members do in fact advocate good works on the basis of good works being a good thing, so if it’s a counterexample, that’s easy. But if it isn’t because it isn’t actually a religion, OK.
I think something similar is true of Congregationalists, from my limited experience of them… but then, all the Congregationalists I’ve known well enough to ask have identified themselves as agnostics and atheists, so perhaps they don’t count either.
But I have a clearer notion of what your category is now; thank you for the clarification.
Does it seem to you that you and Ritalin (in the comment I replied to) mean the same thing by “religion”?
As far as I know, they’re all about giving up your ego in one way or another and happily waiting for death or the endtimes. The most proactive they get is trying to spread this attitude around (but not too much; they still need other people to actually pay for their contemplative lifestyle). Making things better, improving the standing of humankind, cancelling the apocalypse? A futile, arrogant, doomed effort.
I know, I just thought maybe he meant the more literal kind of bloody, given the context (we’re talking about cults, this site is dominated by US Citizens), and wanted him to clarify.
Yes, I get definite cultist vibes from some members. A cult is basically an organization of a small number of members who hold that their beliefs make them superior (in one or more ways) to others, with an added implication of social tightness, shared activities, and internal slang that is difficult for outsiders to understand. Many LW people often appear to behave like this.
I never stated LW is a cult. It clearly isn’t. It does however have at least several, possibly many, members who appear to think about LW in the way many cult members think of their cult.
I assume an educated reader will infer the massive negative social connotations of any movement or organization that has a reputation, no matter how small at this point, as being ‘cultish’—such a reputation inevitably makes achieving goals, recruiting members, etc., more difficult.
Thus being careful not to create that image is very important (or should be) to the membership of the site.
I assume an educated reader will infer the massive negative social connotations of any movement or organization that has a reputation, no matter how small at this point, as being ‘cultish’
From saying almost nothing you have switched to overblown hyperbole. (BTW, I believe you mean “be aware of”, not “infer”.) It appears that LW does have, in some circles (like RationalWiki, ha) a cultish reputation, but I do not see “massive” consequences arising from that.
In a highly chaotic system like our society, small differences (e.g. a reputation in some circles as cultish) can dramatically decrease the odds of something gaining influence or acceptance.
People spend their whole lives researching sales, and any time someone is spreading an idea, sales comes into it. If you think any marketing department of a major company would accept some website, likely visited by many, many potential members, discussing their organization in such a negative light, you are very mistaken. When even the regular members are openly discussing whether we are getting a reputation as a cult, that is a terrible ‘branding’ failure.
For LW to achieve the potential most of its members (I would assume) hope it will… yes, there are consequences.
Any time a large group of potential members or future ‘rationalists’ (not to confuse LW with rationalism) is skeptical of or disinclined toward LW because they heard it had some sort of ‘cultish’ reputation, that is a massive potential loss of people who could contribute and learn, to the betterment of themselves and society as a whole.
Don’t underestimate the impact of small differences when you are dealing with something as complex and unpredictable as society and the spread of ideas.
So, I link to Amazon fairly frequently here, and when I do I use the referral link “ref=nosim?tag=vglnk-c319-20” to kick some money back to MIRI / whoever’s paying for LW.
First, is that the right link? Second, what would it take to add that to the “Show help” box so that I don’t have to dig it up whenever I want to use it, and others are more likely to use it?
This is done automatically in a somewhat different way. So my advice is not to worry about it. But, yes, it shouldn’t hurt and it should help in the situation where VigLink doesn’t fire. In those comments, Wei Dai agrees that this is the referral code.
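For illustration, and assuming Amazon’s usual associate-link URL format (the ASIN below is made up), the tag attaches to the end of a product link like this: http://www.amazon.com/dp/0123456789/ref=nosim?tag=vglnk-c319-20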
Part of the reason why I’m asking is because that info might be old. Apparently “ref=nosim” was obsolete two years ago, and I don’t know if that’s still the right VigLink account, etc.
I was under that impression also. [Edit] I prefer doing it by hand because I almost always only open links in new tabs, which I predict others will do as well, and VigLink does not do that with new tab links.
[Edit2] The original text before the edit was:
For reasons that are mostly silly, I like doing it by hand.
When I asked why I did it by hand, I got back the answer “I do it by hand because I do it by hand,” which seemed silly. But it turns out there was a good reason which I had forgotten; score one for blind tradition.
I should have made more explicit that it’s my opinion: “a bad thing from my point of view”.
It’s a quirk of mine—I dislike marketing schemes injected into non-marketing contexts, especially if that is not very explicit. It is a mild dislike, not like I’m going to quit a site because of it or even write a rant against Amazon links rewriting.
Yes, I understand that the Internet runs on such things. No, it does not make me like it more.
LW is mostly pure-text with no images except for occasional graphs. Why is that so? Are the reasons technical (due to reddit code), cultural (it’s better without images), or historical (it’s always been so)?
Ah, a vote for “it’s better this way”. Why do you prefer pure text? Is it because of the danger of being overrun with cat pictures and blinking gif smileys?
Let’s take that particular image. It covers a huge block that could otherwise have been filled by text, and conveys relatively little information accurately. It disrupts my reading completely for a little while, and getting back to the nice flow takes cognitive effort.
This moment I’m reading on my phone and the image fills the whole screen.
It is because text can be copy-pasted and composed easily, since browsers mostly allow selecting any text (this is more difficult in Windows apps).
Whereas images cannot be copy-pasted as simply (mostly you have to find the URL and copy-paste that), and images cannot be composed easily at all (you at least need some pic editor, which often doesn’t allow simple copy-paste).
This is the old problem that there is no graphical language. A problem that has evaded GUI designers since the beginning.
Um. In Firefox, right-click on the image, select Copy Image. Looks pretty simple to me. Pretty sure it works the same way in Chrome as well.
This is the old problem that there is no graphical language.
I think you’re missing the point of images. Their advantage is precisely that they are holistic, a gestalt—you’re supposed to take them in whole and not decompose them into elements.
Sure, if you want to construct a sequential narrative out of symbols, images are the wrong medium.
Um. In Firefox, right-click on the image, select Copy Image.
And how do you insert it into a comment?
I think you’re missing the point of images. Their advantage is precisely that they are holistic, a gestalt—you’re supposed to take them in whole and not decompose them into elements.
I’d go with laziness and lack of overt demand. I know that people love graphs and images, but I don’t especially feel the need when writing something, and it’s additional work (one has to make the image somehow, name it, upload it somewhere, create special image syntax, make sure it’s not so big that it spills out of the narrow column allotted to articles, etc.). I can barely bring myself to include images for my own little statistical essays, though I’ve noticed that my more popular essays seem to include more images.
I haven’t tried authoring an article myself, but a quick look now seems to indicate that you can’t upload images, only link to them. This means images must be hosted on third parties, meaning you have to upload it there and if not directly under your control, it’s vulnerable to link rot. It seems like this would be inconvenient.
You can upload images to the LessWrong wiki, and then link them from comments or posts. It’s a bit roundabout, but the feature is there. The question is then, should it be made easier?
That’s very common in online forums (for the server load reasons) but doesn’t seem to stop some forums from being fairly image-heavy. It’s not like there is a shortage of free image-hosting sites.
Yes, I understand the inconvenience argument, but the lack of images at LW is pretty stark.
Do you think more people should include graphics in their posts? Do you think more people should include graphics in their comments? Do you think the image-heavy forums you mention get some benefit from being image-heavy that we would do well to pursue?
I’ll observe that I read your comments on this thread as implicitly recommending more images.
This is of course just my reading, but I figured I’d mention it anyway, in case you are hesitant to make a recommendation for fear of tearing that fence down in ignorance, on the off chance that I’m not entirely unique here.
I understand where you are coming from (asking why this house is not blue is often perceived as implying that this house should be blue) -- but do you think there’s any way to at least tone down this implication without putting in an explicit disclaimer?
do you think there’s any way to at least tone down this implication without putting in an explicit disclaimer?
Well, if that were my goal, one thing I would try to avoid is getting into a dynamic where I ask people why they avoid X, and then when they provide some reasons I reply with counterarguments.
Also, when articulating possible reasons for avoiding X, I would take some care with the emotional connotations of my wording. This is of course difficult, but one easy way to better approximate it is to describe both the pro-X and anti-X positions using the same kind of language, rather than describing just one and leaving the other unmarked.
More generally, asymmetry in how I handle the pro-X and anti-X cases will tend to get read as suggesting partiality; if I want to express impartiality, I would cultivate symmetry.
That said, it’s probably easier to just express my preferences as preferences.
avoid is getting into a dynamic where I ask people why they avoid X, and then when they provide some reasons I reply with counterarguments
I think it’s fine. Reasons that people provide might be strong or might be weak—it’s OK to tap on them to see if they would fall down. I would do the same thing to comments which (potentially) said “Yay images, we need more of them!”.
In general, I would prefer not to anchor the expectations of the thread participants, but not at the price of interfering with figuring out what the territory actually looks like.
describe both the pro-X and anti-X positions using the same kind of language
I didn’t (and still don’t) have a position to describe. Summarizing arguments pro and con seemed premature. This really was just a simple open question without a hidden agenda.
There’s a good chance this is not a “fence”, deliberately designed by some agent with us in mind, but a fallen tree that ended up there by accident/laziness.
There’s a design choice on the part of LessWrong against avatar images. Text is supposed to speak for itself and not be judged by its author. Avatar imaging would increase author recognition.
I think I agree with that. I do read author names, but I read them after I read the text usually. I frequently find myself mildly surprised that I’ve just upvoted someone I usually downvote, or vice versa.
Most people are much better at remembering faces than at remembering names. Hacker News also has a lot more people and therefore you will interact with the same person less often.
I am not implying that it should, but to answer your question, because limits on accepted forms of expression are not necessarily a good thing. Not necessarily a bad thing, either.
People already mentioned some pros (e.g. graphs and such help cross the inferential distance) and cons (e.g. images break the mental flow of some people).
I’d note that the short help for comments does not list the Markdown syntax for embedding images in comments, and even the “more comment formatting help” page is not especially clear. That LessWrong culture encourages folk to write comments before writing Main or Discussion articles makes that fairly relevant.
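For reference, and assuming LW’s comment Markdown follows the standard image syntax, an embedded image would look like this (the URL is a placeholder): ![descriptive alt text](http://example.com/graph.png)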
Some people embed graphics in their articles, and this is seen by many as a good thing. I suspect it’s just individuals choosing not to bother with images.
Reading this comment… I suddenly feel very odd about the fact that I failed to include images in my Neuroscience basics for LessWrongians post, in spite of in a couple places saying “an image might be useful here.” Though the lack of images was partly due to me having trouble finding good ones, so I won’t change it at the moment.
I find it harder to engage in System 2 when there are images around. Heck, even math glyphs usually trip me up. That’s not to say graphics can’t do more good than harm (for example, charts and diagrams can help cross inferential distance quickly, and may serve as useful intuition pumps) but I imagine that more images would mean more reliance on intuition and less on logic, hence less capacity for taking things to analytical extremes. So it could be harmful (given the nature of the site) to introduce more images.
I like my flow. I don’t have anything against images if they are arranged in a way that doesn’t disrupt reading. I’m not sure if the LW platform allows for that.
Personally I find the absence of images mostly positive, and apparently helpful for staying in System 2 mode. This is a place where we analyze counter-intuitive propositions, so the absence of images may be critical to the optimal functioning of the community.
That’s not to say images don’t have cognitive advantages (the information content can be processed more quickly than text, e.g.) but they can be distracting, and might actually tend to lead one to be less analytical and concise in the long run. Notice how an image-based meme may seem credible or hard to shoot down even when it represents a strawman argument (or even no argument at all). There’s a reason Chick tracts are a thing.
Several months ago I set up a blog for writing intelligent, thought-provoking stuff. I’ve made two posts to it, and one of those is a photo of a page in Strategy of Conflict, because it hilariously featured the word “retarded”. Something has clearly gone wrong somewhere.
I’m pretty sure there are other would-be bloggers on here who experience similar update-discipline issues. Would any of them like to form some loose cabal of blogging spotters, who can egg each other on, suggest topics, provide editorial and stylistic feedback, etc.?
Is this intended as snark, or an actual helpful comment?
Assuming the latter, I have what I consider to be sound motives for maintaining a blog. Unfortunately, I don’t have sound habits for maintaining a blog, coupled with a bit of a cold-start problem. I doubt I am the only person in this position, and believe social commitment mechanisms may be a possible avenue for improvement.
I was going for actual helpful comment. I personally don’t have a blog because several attempts to have a blog failed. Afterwards, I was fairly sure that the reason why my blogs failed was because I like conversations too much and monologuing too little. I found that forums both had a reliable stream of content to react to, as well as a somewhat reliable stream of content to build off of. The incentive structure seemed a lot nicer in a number of ways.
More broadly, I think a good habit when plans fail is to ask the question “What information does this failure give me?”, rather than the limited question “why did this plan fail me?”. Sometimes you should revise the plan to avoid that failure mode; other times you should revise the plan to have entirely different goals.
My immediate practical suggestion is to create a LW draft editing circle. This won’t give you the benefits of a blog distinct from LW, but eliminates most of the cold-start problem. It also adds to the potential interest base people who have ideas for posts but who don’t have the confidence in their ability to write a post that is socially acceptable to LW (i.e. doesn’t break some hidden protocol).
If you have any old material, you could consider posting those to get initial readership, even if you don’t consider them especially high quality.
I have what I consider to be sound motives for maintaining a blog.
I’d interpret Vaniver’s comment more generally to mean that parts of your brain might disagree with this assessment, and you experience this as procrastination.
Would any of them like to form some loose cabal of blogging spotters, who can egg each other on, suggest topics, provide editorial and stylistic feedback, etc.?
Yes.
(My current excuse for not even having made one post is that I started to experience wrist pain, and didn’t want to make it worse by doing significant typing at home. It seems to be getting better now.)
Maybe you should consider joining an existing blogging community—livejournal or tumblr or medium? They’re good at giving you social prompts to write something.
In retrospect, my previous response to this does seem pretty unwarranted. This was a perfectly reasonable and relevant comment that caught me at a bad time. I’d like to apologise.
OK, I’m not trying to be antagonistic, but I really want to understand where the communication process goes wrong here. What was it about my original comment that seemed like a request for advice?
It is clear at this point that while I don’t think my original comment (or similar previous comments) was asking for advice, plenty of other people do indeed interpret them as such. If this were simply a choice of words that had a common meaning I was unfamiliar with, I’d happily accept this and move on, but in this case I think these other people are fundamentally doing something incorrectly with language and dialogue.
My immediate case for this is threefold:
1) The comment is literally not asking for advice. It does not execute the speech act of asking for advice.
2) If someone were to infer a request for advice from the comment, they would notice the comment does not contain sufficient information for them to provide good advice. Even if it has the superficial appearance of implicitly asking for advice, it is not well-suited to this task.
3) If someone were to go about the process of asking me for the salient background information to offer me good advice (rather than just shooting in the dark and generating irrelevant discussion that doesn’t serve anyone’s purposes), they would notice that it wasn’t a request for advice. This casts doubt on their motives for engagement with the dialogue, not to mention their ability to give appropriate advice.
It’s not a request for advice. It’s just not, and it’s pragmatically unsound (not to mention kind of rude and really annoying) to interpret it as such.
So yes, I think I’m right, and everyone else is wrong. I should point out that although I find the unsolicited advice incredibly annoying, I find the underlying discourse phenomenon really interesting.
Here’s some advice: when you think you’re right about the interpretation of what you said and everyone else is wrong, you’re probably wrong. The fact that you have to go ON and ON about how incredibly obviously right you are, and how everyone should have seen it, is a rationalization of you in fact being wrong.
1) “I’ve been having a problem lately with x” Does not explicitly ask for advice. It implicitly does so.
2) Vague advice is STILL more useful than no advice when someone is asking for advice. Giving someone advice that has worked for them in a situation that isn’t exactly the same is still useful or at least leads to further possibly useful conversation. Example: “I’ve been having a lot of trouble sleeping lately.” “Have you tried Melatonin?” “Obviously, I’m not an idiot! I can’t sleep because of the construction! Clearly you didn’t have enough information to tell me about soundproofing options so why even talk!?”
3) This isn’t a private conversation. Responses to you are not just FOR you. If someone replies with general advice for a similar setting they’re trying to have a conversation about that even if it’s not exactly what you wanted to hear. I understand you didn’t want a conversation about blogging productivity, you just wanted some yes or no answers to your question. Why should that prevent other people saying things they think are relevant to the conversation in general?
In the end, you could have just ignored people giving you unsolicited advice, but instead you chose to go off on an assholish ranting streak at everyone who was simply trying to be helpful. You’ve wasted far more of your own time with these complaints than reading anyone’s advice has cost you. Nobody here was being rude except for you (and now me).
I actually totally appreciate this comment, and largely agree with it. I maintain my general point about the pragmatics of interpreting things as implicit requests for advice, but yeah, I’ve certainly not handled this particular thread gracefully.
You should be pragmatic about pragmatics. The comment was an attempt to affect other people. If you produce the wrong effects, your language is wrong.
If everyone agrees that it’s a question, it’s a question. If things that weren’t questions a year ago are questions now, then the language changed. But it doesn’t take a lot of people wrongly interpreting something as a question to produce unwanted answers, so maybe they are wrong. And language fragmentation is worth fighting.
Having slept on it, I think I can offer a more fine-grained explanation for what I think is going on.
There are implicit and explicit speech acts. You can implicitly or explicitly threaten someone, or compliment someone, or express romantic interest in someone. There are some speech acts which you cannot, or as a matter of policy should not, carry out implicitly. As extreme examples, you cannot implicitly extend someone power of attorney, and you should not interpret someone’s implicit expressions of interest in being erotically asphyxiated as an invitation to go ahead and do so.
I believe implicit requests for advice basically shouldn’t exist. I would expect social decorum to drive people’s interpretations away from this possibility. Out in the big wide world, my experience is that people are considerably more careful about how they do and do not offer advice. My consternation is that on LW there appear to be forces driving people’s interpretations towards the possibility of implicit requests for advice, which runs counter to my expectations.
Not what-the-fuck-are-you-doing counter to my expectations, I should point out. I might, for example, occasionally expect relative strangers at work to wordlessly take a pen from my desk, use it to scribble a note and then put it back. This is probably the most mildly-invasive act I can think of, but if I was disrupted every five minutes by someone leaning in and pinching one, stopping a wordless pen-borrower in their tracks and saying “seriously, what is it with the pens?” seems like a reasonable line of inquiry.
I’m not precious about my pens (nor do I think I’m especially hostile to receiving unsolicited advice), but there are good reasons to have social norms that drive people away from this sort of behaviour. When those social norms cease to exist, those good reasons don’t go away.
I believe implicit requests for advice basically shouldn’t exist.
That’s not very pragmatic. Worry about whether they do exist. You say they don’t exist in other contexts, but this statement makes me distrust your observations.
Also, I suggest you consider more contexts. Are you familiar with other venues intermediate between LW and your baseline? Nerds? Other online fora?
saying “seriously, what is it with the pens?” seems like a reasonable line of inquiry.
Really? It seems like a reasonable way of stopping it. It does not seem to me like a way of learning. And since not that many people go by your desk, it might scale to actually stopping it.
I’m not saying they don’t exist in other contexts, but that they’re a less probable interpretation in other contexts. In those contexts, I wouldn’t expect my original comment to be interpreted as a request for advice as readily as it is here. I wouldn’t necessarily expect it to be interpreted any better, but I wouldn’t expect a small deluge of advice.
I am fairly sure this discussion isn’t really recoverable into something productive without me painting myself as some sort of neurotic pen-obsessive snapcase. Yes, I only have myself to blame.
I believe implicit requests for advice basically shouldn’t exist … there are good reasons to have social norms that drive people away from this sort of behaviour
Why do you believe this? That is, is this an aesthetic preference about the kind of society you want to live in, or do you believe they have negative consequences, or do you adhere to some deontological model with which they are inconsistent, or… ?
I believe there are negative consequences, some of which I’ve already elaborated upon, and some of which I haven’t.
Illustratively, there do exist social norms against patronising other people, asking personal questions, publicly speculating about other people’s personal circumstances, infringing privacy, etc., which are significant risks when offering people unsolicited advice. Since offering people unsolicited advice is itself a risk when inferring requests for advice from ambiguous statements, it seems reasonable (to me) to expect people to be less inclined to draw this inference.
Also, offering advice (or general assistance, or simply establishing a dialogue with someone) isn’t a socially-neutral act, especially in a public setting. A suitable analogy here might be walking into a bar and saying “after a day like today, I’m ready for a drink”. This isn’t an invitation for any nearby kind-hearted stranger to buy you a drink without first asking if you wanted them to. The act of buying a drink for someone has all sorts of social/hospitality/reciprocity connotations.
After making a right royal mess of this particular thread, I’m keen to disentangle myself from it, so while I’m happy to continue the exchange, I’d appreciate it if it didn’t continue any longer than was useful to you.
while I’m happy to continue the exchange, I’d appreciate it if it didn’t continue any longer than was useful to you.
Um… well, OK.
I have to admit, I don’t quite understand, either from this post or this thread, what you think the negative consequences are which these social norms protect against… the consequences you imply all seem like consequences of violating a social norm, not consequences of the social norm not existing.
Perhaps I’m being dense.
Regardless, I’m only idly interested, so if you’d rather disentangle I’m happy to drop it.
Well, unwarranted advice can result in making someone feel patronised, or like their privacy or personal boundaries are being violated, or like their personal circumstances are subject to public speculation, and these are all unpleasant and negative experiences, and you should try and avoid subjecting people to them.
It can also, out of nowhere, create a whole raft of dubious questioning or accidental insinuation that the recipient of the advice may feel obliged, or even compelled, to put straight. It has a general capacity to generate discussion that is a lot more effort for the advisee to engage with than the advisor. It’s very easy to give people advice, but as I have found, it’s surprisingly hard to say “no, stop, I don’t want this advice!” (I have said it very vehemently in this thread, with the consequence of looking like an objectionable arse, but I’m not sure that saying it less vehemently would have actually stopped people from offering it.) These are also unpleasant and negative experiences, and you should try and avoid subjecting people to them as well.
like their privacy or personal boundaries are being violated, or like their personal circumstances are subject to public speculation, and these are all unpleasant and negative experiences
Advice, unwanted or not, usually follows a description of the situation or relevant circumstances.
Someone who published—posted online—an account of his situation or “personal circumstances” cannot complain later that his privacy was violated or that these personal circumstances became “subject to public speculation”.
To put it bluntly, posting things on the ’net makes them not private any more.
Part of my point in this thread is that advice often comes even in the absence of a description of relevant circumstances. Hence they become subject to public speculation.
Your complaint included “their privacy or personal boundaries are being violated”. And when you complained about speculation, you complained about “their personal circumstances are subject to public speculation”.
Presumably these personal circumstances were voluntarily published online, were they not?
If you do not post your personal circumstances online there is nothing to speculate about.
You seem to want to have a power of veto on people talking about you. That… is not going to happen.
If I talk, in the abstract, about how I imagine that it’s hard to organise bestiality orgies, and someone misinterprets that as a request for advice about organising bestiality orgies, that’s some pretty flammable speculation about my personal circumstances. I then have the option of either denying that I have interest in bestiality orgies, or ignoring them and leaving the speculation open.
Does that make sense? Please let it make sense. I want to leave this thread.
If I talk, in the abstract, about how I imagine that it’s hard to organise bestiality orgies, and someone misinterprets that as a request for advice about organising bestiality orgies, that’s some pretty flammable speculation about my personal circumstances.
No, it is not unless you’re actually organizing bestiality orgies.
If you actually do not, then it’s neither an invasion of privacy nor a discussion of your personal circumstances, because your personal circumstances don’t happen to involve bestiality orgies.
It might be a simple misunderstanding or it might be a malicious attack, but it has nothing to do with your private life (again, unless it has in which case you probably shouldn’t have mentioned it in the first place).
And leaving this thread is as simple as stepping away from the keyboard.
For my own part, if someone goes around saying “Dave likes to polish goats in his garage”, it seems entirely reasonable for me to describe that as talking about my private life, regardless of whether or not I polish goats, whether or not I like polishing goats, or whether or not I have a garage.
To claim that they aren’t actually talking about my private life at all is in some technical sense true, I suppose, but the relevance of that technical sense to anything I might actually be expected to care about is so vanishingly small that I have trouble taking the claim seriously.
You’re conflating privacy and public speculation again. I didn’t do that.
If I say “I think Lumifer likes to ride polar bears in his free time”, then I am speculating about your personal circumstances. I just am. That’s what I’m doing. It’s an incontrovertible linguistic fact. I am putting forth the speculation that you like to ride polar bears in your free time, which is a circumstance that pertains to you. I am speculating about your personal circumstances. Whether the statement is true or not is irrelevant. I’m still doing it.
And I am actually going to go away now. Reply however you like, or not.
If I say “I think Lumifer likes to ride polar bears in his free time”, then I am speculating about your personal circumstances.
Not quite. The words which are missing here are “imaginary” and “real”.
I have real personal circumstances. If someone were to find out what they really are and start discussing them, I would be justified in claiming invasion of privacy and speculation about my personal circumstances.
However in this example, me riding polar bears is not real personal circumstances. What’s happening is that you *associate* me with some imaginary circumstances. Because they are imaginary they do not affect my actual privacy or my real personal circumstances. They are not MY personal circumstances.
In legal terms, publicly claiming that Lumifer likes to ride polar bears and participate in unmentionable activities with them might be defamation but it is NOT invasion of privacy.
To repeat, you want to prevent or control people talking about you and that doesn’t sound to me like a reasonable request.
That’s also my suspicion in this case, but does it really seem plausible that I completely abandoned my analysis of the situation at that point? Especially since I go on to explicitly identify it as an update-discipline issue, and make a specific request to address it?
I’ve been cheerfully posting to LW with moderate frequency for four years now, but over the past few months I’ve noticed an increased tendency in respondents to offer largely unsolicited advice. I’m fairly sure this is an actual shift in how people respond. It seems unlikely that my style of inquiry has changed, and I don’t think I’ve simply become more sensitive to an existing phenomenon.
Maybe the “advice” (or instrumental rationality?) style of post has become more common and this approach to discussion has bled over into the comments? I don’t know, I find lmm’s comment to read as a perfectly natural response to yours, so perhaps I’m not best placed to analyse the trend you seem to be experiencing.
One possible explanation is that you are just getting more responses (and thus more advice-based responses) because the Open Threads (and maybe Discussion in general) have more active users. Or maybe the users are more keen to participate in discussions and giving advice is the easiest way to do so.
It might help if you start… (just kidding, I’m making a mental note not to give you advice unless you specifically ask for it from now on)
Retracted because you haven’t asked for an opinion on why you are getting advice either.
In this context, the discussion is about receiving unnecessary advice, so I think speculating on why this is happening is entirely reasonable.
To illustrate why it’s annoying, it may help to provide the most extreme example to date. A couple of months ago I made a post on the open thread about how having esoteric study pursuits can be quite isolating, and how maintaining hobbies and interests that are more accessible to other people can help offset this. I asked for other people’s experience with this. Other people’s experiences were specifically what I asked for.
Several people read this as “I’m an emotionally-stunted hermit! Please help me!” and proceeded to offer incredibly banal advice on how I, specifically, should try to form connections with other people. When I pointed out that I wasn’t looking for advice, one respondent saw fit to tell me that my social retardation was clearly so bad that I didn’t realise I needed the advice.
To my mind, asking for advice has a recognisable format in which the asker provides details for the situation they want advice on. If you have to infer those details, the advice you give is probably going to be generic and of limited use. What I find staggering is why so many people skip the process of thinking “well, I can’t offer you any good advice unless you give us more deta-...oh, wait, you weren’t asking for advice”, and just go ahead and offer it up anyway.
People will leap at any opportunity to give advice, because giving advice a) is extraordinarily cheap b) feels like charity and most importantly c) places the adviser above the advised. It’s the same impulse which drives us to pity; we can feel superior in both moral and absolute terms by patronizing others, and unlike charity there is only a negligible cost involved.
I, for example, have just erased a sentence giving you useless advice on how not to get useless advice in a comment to a post talking about how annoying unsolicited useless advice is. That is the level of mind-bending stupidity we’re dealing with here.
Can you please use actual words to explain the underlying salience of this video? I see what you’re getting at, but I’m pretty sure if you said it explicitly, it would be kind of obnoxious. I would rather you said the obnoxious thing, which I could respond to, than passively post a video with snarky implicit undertones, which I can’t.
I think this isn’t entirely fair. You asked what people do to keep themselves relatable to other people. That’s not the same as asking for help relating to other people, but it is closer to that than you implied.
Not to say that I think the responses you got were justified, but I don’t find them surprising.
I’m going to stick to my guns on this one. I think my account is as close as makes no difference.
I’m happy to concede that other people may commonly interpret my inquiry as being closer to a request for advice, but I contend that this interpretation is not a reasonable one.
What I find staggering is why so many people skip the process of thinking “well, I can’t offer you any good advice unless you give us more deta-...oh, wait, you weren’t asking for advice”, and just go ahead and offer it up anyway.
When you say you find this staggering, do you mean you don’t understand why many people do this?
I can speculate as to why people do this, but given my inability to escape the behaviour, I clearly don’t understand it very well.
To a certain extent, I’m also surprised that it happens on Less Wrong, which I would credit with above-average reading comprehension skills. Answering the question you want to answer, rather than the question that was asked, is something I’d expect less of here.
There’s a pattern of “I have a problem with X, the solution seems to be Y, I need help implementing Y”.
Sometimes people ask this without considering other solutions; then it can be helpful to point out other solutions. Sometimes people ask this after considering and rejecting lots of other solutions; then it can be annoying to point out other solutions. Unfortunately it’s not always easy for someone answering to tell which is which.
Edit because concrete examples are good: I just came across this SO post, which doesn’t answer the question asked or the question I searched for, but it was my preferred solution to the problem I actually had.
Maybe that’s a description of the other responses, but lmm is not suggesting an alternative to Y, but an alternate path to Y. I think sixes and sevens’s response is ridiculous.
Consider your incentives. Actual (non-imaginary) incentives in your current life.
What are the incentives for maintaining a blog? What do you get (again, actually, not supposedly) when you make a post? What are the disincentives? (e.g. will a negative comment spoil your day?) Is there a specific goal you’re trying to reach? Is posting to your blog a step on the path to the goal?
Are you requesting answers for my specific case, or just providing me with advice?
(As an observation, which isn’t meant to be a hostile response to your comment, people seem very keen to offer advice on LW, even when none has been requested.)
If I wanted to update a blog regularly, I would consider it imperative to put “update my blog” as a repeating item in my to-do list. For me, relying on memory is an atrocious way to ensure that something gets done; having a to-do list is enormously more effective.
The main problem in learning a new skill is maintaining the required motivation and discipline, especially in the early stages. Gamification deals with this problem better than any of the other approaches I’m familiar with. Over the past few months, I’ve managed to study maths, languages, coding, Chinese characters, and more on a daily basis, with barely any interruptions. I accomplished this by simply taking advantage of the many gamified learning resources available online for free. Here are the sites I have tried and can recommend:
Codecademy. For learning computer languages (Ruby, Python, PHP, and others).
Duolingo. For learning the major Indo-European languages (English, German, French, Italian, Portuguese and Spanish).
Khan Academy. For learning maths. They also teach several other disciplines, but they offer mostly videos with only a few exercises.
Memrise. For memorizing stuff, especially vocabulary. The courses vary in quality; the ones on Mandarin Chinese are excellent.
I’ve been using Anki daily these past two or three months, and regularly-but-not-quite-daily maybe a year before that. I use it for a fair amount of different things (code, psychology, languages, …). I recommend it, though it’s not really “gamified”.
Not sure where this goes: how can I submit an article to discussion? I’ve written it and saved it as a draft, but I haven’t figured out a way to post it.
Thank you! One more—how much karma do I need? I was under the impression one needed 2 to post to Discussion (20 to Main), but presumably this is not the case. Is there an up-to-date list?
In My Little Pony: Friendship is Signaling, Twilight Sparkle and her companions defeat Nightmare Moon by using the Elements of Cynicism to prove to her that she doesn’t really care about darkness.
My stab at it. I’m probably going to post it to FIMFiction in a day or so, but it’s basically a first draft at this point and could doubtless use editing / criticism.
A med student colleague of mine, a devout Christian, is going to give a lecture on psychosexual development for our small group in a couple of days. She’s probably going to sneak in an unknown amount of propaganda. With delicious improbability, there happen to be two transgender med students in our group she probably isn’t aware of. To this day, relations in our group have been very friendly.
Any tips on how to avoid the apocalypse? Pre-emptive maneuvers are out of the question, I want to see what happens.
ETA: Nothing happened. Caused a significant update.
This sounds like a situation in which some people present may consider some other people’s beliefs to be an individual-level existential threat — whether to their identity, to their lives, or to their immortal souls. In other words, the problem is not just that these folks disagree with each other, but that they may feel threatened by one another, and by the propagation of one another’s beliefs.
Consider: “If you convince people of your belief, people are more likely to try to kill me.” “If you convince people of your belief, I am more likely to become corrupted.”
One framework for dealing with situations like this is called liberalism. In liberalism, we imagine moral boundaries called “rights” around individuals, and we agree that no matter what other beliefs we may arrive at, that it would be wrong to transgress these boundaries. (We imagine individuals, not groups or ideas, as having rights; and that every individual has the same rights, regardless of properties such as their race, sex, sexuality, or religion.)
Agreeing on rights allows us to put boundaries around the effects of certain moral disagreements, which makes them less scary and more peaceful. If your Christian colleague will agree, for instance, that it is wrong to kidnap and torture someone in an effort to change that person’s sexual identity, they may be less threatening to the others.
What would constitute an apocalypse? When you say “I want to see what happens” do you mean you want to let the situation develop organically but set certain boundaries, a cap on damages, so to say?
That’s exactly what I mean. I’m not directing the situation, but will be participating.
I’d like to confront, and see people confront, her religious bias, without the result being excessive flame or her being backed into a corner without a chance to even marginally nudge her mind in the right direction. She’s smart, will not make explicit religious statements, and will back her claims with cherry-picked research. Naturally the level of mindkill will depend on the other participants too, and I will treat this as some sort of a rationality test of whether they manage to keep their calm. If they lose it, I guess it’s understandable.
I guess I’ll be using lots of some version of “agree denotationally, disagree connotationally”.
Finnish indeed, and even with our completely watered-down version of belief-in-belief Christianity, there’s always a religious nut or two to ruin your day.
Volatile and emotional? Most LWers per capita, was it? Is it because we’re in the highest need of rationality?
Heavily leaning towards in-your-face on the spectrum. Has been very vocal on abortion issues for example. Thinks that homosexuality is a sin. Other than her religiousness, is a perfectly nice human being.
I don’t think that ‘Eliezer’s book’ refers to HPMOR. I think it is more likely that he is asking about the book based on the Sequences (for which this is probably the most recent thread).
Arguably MDMA in the proper context. The chain of reasoning is as follows: one of the quickest traditional cures for sadness about a broken-off relationship is a new relationship ---> the cure for a broken heart is to meet and fall in love with someone else as fast as possible ---> MDMA lowers inhibition and raises affection and touch (which are crucial for relationship formation) and theoretically should make the process of getting to know someone faster and happier ---> so if you take MDMA around the right people in the right place you can quickly fall into a new love.
Complete is a strong word that I should have qualified. Mastery is a better word. Control over it. Where your emotions bend to the will of your rational mind, not vice versa.
Don’t limit yourself without reason. As humans we are agents of change in an incredibly complex, chaotic system (society). Mastering emotional control allows us to become much more effective agents. Someone half as smart but with twice the self control in every area can easily beat the more intelligent opponent. Not every time, but it is a massive advantage. http://scienceblogs.com/cognitivedaily/2005/12/14/high-iq-not-as-good-for-you-as/
I didn’t say that it’s predictable, or that it is super easy, but it’s not particularly difficult and only takes a few months of commitment of a few hours a week, to bring a lifetime of reward.
I’m surprised that as a “rationalist” you suggest mastery of the emotions may not be desirable. Awareness of one’s emotions, sure. But letting them dictate your actions in any way, why? Be rational.
And without mastery of one’s emotional state (that is, subject to the experiential drag of depression, the impulsive actions of rage, the hurtful actions of uncontrolled lust, etc.), one is at a disadvantage in almost any situation.
Ultimately it’s just a matter of choosing to feel a certain way. Find things you really like (for me, sex and cigarettes), then just do EVERYTHING to get that in every situation you’re in. Most people chase money. Break the mould. Chase something else. Realize that you can get everything you want if you know how to play the situation right. If you’d like to PM me we can discuss a ‘training’ programme that matches your lifestyle perfectly. (Context: I used to be a NLP trainer/dating coach)
That’s an example of one method of switching your mindset completely. Ultimately, many mindsets can be imagined and then enjoyed by the individual, if so chosen. It’s primarily a matter of self-will.
Since I’m not sure whether this advice would be welcome in a recent discussion, I’m just going to start cold by describing something which has worked for me.
In an initial post, I explain what kind of advice I’m looking for, and I’m specific about preferring advice from people who’ve gotten improvement in [specific situation]. I normally say other advice is welcome, but you’d be amazed how little of it I get.
I believe it’s important to head off unwanted advice early. I can’t remember whether I normally put my limiting request at the beginning or end of a post, but I think it helps if you can keep your commenters from becoming a mutually reinforcing advice-giving crowd.
I suggest that starting by being specific about what you do and don’t want is (among other things) an assertion of status, and this has some effects on the advice-giving dynamic.
I normally do want advice from people who’ve had appropriate experience. Has anyone tried being clear at the beginning that they don’t want advice?
In my social circle, explicitly tagging posts as “I’m not looking for advice” seems to work pretty well at discouraging advice. I don’t do it often myself though.
And you’re right, of course, that it is among other things an assertion of status, though of course it’s also a useful piece of explicit information.
Does anyone have any book recommendations on the topic of evidence-based negotiation tactics? I have read Influence (Cialdini), Thinking, Fast and Slow (Kahneman), and The Art of Strategy (Dixit and Nalebuff). These are great books, but I am looking for something with a narrower focus; there are lots of books on Amazon that get good reviews, but I am unsure which one would suit me best.
Getting to Yes is a standard negotiation book; Difficult Conversations seems useful as a supplement for negotiation in non-business contexts (but, as a general communication book, has obvious business applications as well).
I am really starting to play with the idea that if you aren’t getting rejected enough then you probably aren’t negotiating hard enough. The next time I go buy a car I will make sure to negotiate hard enough to get rejected from more than one dealership. If they don’t let you walk out, then you probably haven’t found their low point yet.
if you aren’t getting rejected enough then you probably aren’t negotiating hard enough.
Rejection is not quite the term. In my experience the sales guy eventually offers you his “best deal”, you thank him (why is it there are virtually no women in these jobs?) for his time and stand up to leave. That’s when he likely calls his manager who sweetens the deal a bit. YMMV. Once it seemed fair enough that I took it, rejecting “documentation fee”, upsell and all other junk along the way. (Free service, extended warranty etc. should already be negotiated in at this point: this stuff is cheap for them, expensive for you.) Another time I still left, making it clear that I was ready to finish the deal, if only… then got a call later with some more concessions. Every time the “final” contract had unexpected charges added which had to be removed before signing. Admittedly, I never push it to the point where the salesperson hates me, so they clearly get a fair shake out of it, just not a lucrative one.
One tactic that is nearly always useful (be it cars, electronics, appliances or anything else) is anchoring: showing the price of a comparable item, the book value, lease details posted on the manufacturer’s site, anything that makes the seller go for a version of “price match”.
Steve Sailer on the Trolley Problem: [1] and [2]. Basically, to what degree does the unwillingness of people in the thought experiment to push the fat man reflect the realization that pushing the fat man is an inherently riskier prospect than pulling a lever?
“Throw the switch or not” is a natural choice actually presented by real conditions – switches imply choices by definition. “Push the fat man or don’t” isn’t a natural choice presented by real conditions – it’s a scenario concocted for an experiment. By definition, those cannot be the only options in the universe. And our brains can tell.
It seems to me that what characterizes the people who choose the “logical” answer – push the fat man – is not that they gave a less-emotional response but that they gave a less-intuitive, less-gestalt-based response. They were willing to accept the conditions of the problem as given without question. That’s a response to authority – they are turning off the part of their brains that feels the situation as a real one, and sticking with the part of the brain that reasons from unquestionable givens to undeniable conclusions.
There’s a place for that kind of response – but I would argue that answering questions of great moral import is emphatically not that place. Indeed, from the French Revolution to the Iraq War, modernity is littered with the corpses of those whose deaths were logically necessary for some hypothesized outcome that could not actually have been known with remotely the necessary level of certainty. In that regard, I suspect an aversion to following logic problems to fatal conclusions is not merely a kind of moral appendix handed down from our Stone Age ancestors, but remains positively adaptive.
Buying it? No. Using it while your downstairs-neighbor is home? Yes. A repetitive thumping can make trying to study hellishly difficult (for people sufficiently similar to me).
To the extent that you believe the preferences of the person below you mirror your own, would it annoy you if the person above you started using a treadmill in their apt?
Is there any research suggesting that simulated out-of-body experiences (OBEs) (like this) can be used for self-improvement? For example, one potential area of benefit is triggering OBEs to help patients suffering from incorrect body identities, which is exciting.
For some time now, I have had this very strange fascination with OBEs and using them to overcome akrasia. Of course I have no scientific evidence for it, yet I have this strong intuition that makes me believe so. I’ll do my best to explain my rationale. Often I get this idea that I can trick myself into doing what I want if I pretend that I am not me but just someone observing me. This disconnects my body from my identity, so that the real me can control the body me. This gives me motivation to do things for the body me. I am not studying, my body me is studying to level up. I’m not hitting the gym, the body me is hitting the gym to level up. An even more powerful effect is present for social anxiety. Things like public speaking and rejection therapy are terrifying, but by disconnecting my identity from my body, I would find that rejection is not as personal, just directed at this avatar that I control. Negative self-conscious thoughts and embarrassment seem to have a lessened impact.
The kind of dissociation you talk about here, where I experience my “self” as unrelated to my body, is commonly reported as spontaneously occurring during various kinds of emotional stress. I’ve had it happen to me many times.
It would not be surprising if the same mechanism that leads to spontaneous dissociation in some cases can also lead to the strong intuition that dissociation would be a really good idea.
Just because there’s a mechanism that leads me to strongly intuit that something would be a really good idea doesn’t necessarily mean that it actually would be.
All of that said: after my stroke, I experienced a lot of limb-dissociation… my arm didn’t really feel like part of me, etc. This did have the advantage you described, where I could tell my arm to keep doing some PT exercise and it would, and yes, my arm hurt, and I sort of felt bad for it, but it’s not like it was me hurting, and I knew I’d be better off for doing the exercise. It is indeed a useful trick.
I suspect there are healthier ways to get the same effect.
Do you have experience with OBEs? I personally have limited experience. I’m no expert but I know a bit.
In my experience, the kind of people who have the skills for engaging in out-of-body experiences usually don’t get a lot done. It tends to increase akrasia rather than decrease it.
If you want to decrease akrasia, associating more with your body is a better strategy than getting outside of it.
An even more powerful effect is present for social anxiety. Things like public speaking and rejection therapy are terrifying but by disconnecting my identity from my body, I would find that rejection is not as personal, just directed at this avatar that I control.
That effect is really there. You are making a trade. You lose empathy. Ceasing to care about other people means that you can’t have genuine relationships.
On the other hand rejections don’t hurt as much and you can more easily put yourself into such a situation.
I don’t think you’re off your rocker, though dissociating at the gym might increase the risk of injury.
I tentatively suggest that you explore becoming comfortable enough in your life that you don’t need the hack, but I’m not sure that the hack is necessarily a bad strategy at present.
Does anyone know of a good online source for reading about general programming concepts? In particular, I’m interested in learning a bit more about pointers and content-addressability, and the Wikipedia material doesn’t seem very good. I don’t care about the language—ideally I’m looking for a source more general than that.
Can’t actually name a good general article on pointers. They’re the big sticking point for anyone trying to learn C for the first time, but they end up just being this sort of ubiquitous background knowledge everyone takes for granted pretty fast. I did stumble into Learn C the Hard Way, which does get around to pointers.
The C2 wiki is an old site for general programming knowledge. It’s old, the navigation is weird, and the pages sometimes devolve into weird arguments where you have no idea who’s saying what. But there’s interesting opinionated content to find there, where sometimes the opinionators even have some idea what they’re talking about. Here’s one page on what they have to say about pointers.
Also I’m just going to link this article about soft skills involved in programming, because it’s neat.
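Since the question was specifically about pointers, here is a minimal, self-contained C sketch of the core ideas (the variable names are mine, purely for illustration). Content-addressability is roughly the inverse notion: instead of going from an address to a value, you locate data by (a hash of) its content, as in a hash table or git’s object store.

```c
#include <stdio.h>

int main(void) {
    int x = 42;
    int *p = &x;          /* p stores the address of x, not its value */

    printf("%d\n", *p);   /* dereferencing p reads the value stored at
                             that address: prints 42 */

    *p = 7;               /* writing through the pointer changes x */
    printf("%d\n", x);    /* prints 7 */

    int **pp = &p;        /* pointers are themselves values in memory,
                             so you can point at a pointer */
    printf("%d\n", **pp); /* prints 7 */

    return 0;
}
```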
Could anyone provide me with some rigorous mathematical references on Statistical Hypothesis Testing and Bayesian Decision Theory? I am not an expert in this area and am not aware of the standard texts. So far I have found
Statistical Decision Theory and Bayesian Analysis—Berger
Bayesian and Frequentist Regression Methods—Wakefield
Currently, I am leaning towards purchasing Berger’s book. I am looking for texts similar in style and content to those of Springer’s GTM series. It looks like the Springer Series in Statistics may be sufficient.
“Bayesian decision theory” usually just means “normal decision theory,” so you could start with my FAQ. Though when decision theory is taught from a statistics book rather than an economics book, they use slightly different terminology, e.g. they set things up with a loss function rather than a utility function. For an intro to decision theory from the Bayesian statistics angle, Introduction to Statistical Decision Theory is pretty thorough, and more accessible than Berger.
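To make the loss-function framing concrete, here is a minimal sketch in C; the numbers are invented for illustration, and it follows the generic “choose the action minimizing posterior expected loss” recipe rather than any particular book’s example.

```c
#include <stdio.h>

#define N_STATES  2
#define N_ACTIONS 2

int main(void) {
    /* Posterior probabilities of the two states of the world
       (already updated on whatever data was observed). */
    double posterior[N_STATES] = { 0.7, 0.3 };

    /* loss[a][s]: loss from taking action a when state s is true.
       Statistics texts use loss where economists would use
       negative utility. */
    double loss[N_ACTIONS][N_STATES] = {
        { 0.0, 10.0 },   /* action 0: cheap unless state 1 holds */
        { 1.0,  1.0 }    /* action 1: a safe hedge */
    };

    int best = 0;
    double best_risk = 1e300;
    for (int a = 0; a < N_ACTIONS; a++) {
        double risk = 0.0;   /* posterior expected loss of action a */
        for (int s = 0; s < N_STATES; s++)
            risk += posterior[s] * loss[a][s];
        printf("action %d: expected loss %.2f\n", a, risk);
        if (risk < best_risk) { best_risk = risk; best = a; }
    }
    printf("Bayes action: %d\n", best);
    return 0;
}
```

Replacing the loss matrix with a negated utility matrix (and min with max) recovers the economists’ formulation, which is the sense in which the two setups are just different terminology.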
The main problem in learning a new skill is maintaining the required motivation and discipline, especially in the early stages. Gamification deals with this problem better than any of the other approaches I’m familiar with. Over the past few months, I’ve managed to study maths, languages, coding, Chinese characters, and more on a daily basis, with barely any interruptions. I accomplished this by simply taking advantage of the many gamified learning resources available online for free. Here are the sites I have tried and can recommend:
* Codecademy. For learning computer languages (Ruby, Python, PHP, and others).
* Duolingo. For learning the major Indo-European languages (English, German, French, Italian, Portuguese, and Spanish).
* Khan Academy. For learning maths. They also teach several other disciplines, but they offer mostly videos with only a few exercises.
* Memrise. For memorizing stuff, especially vocabulary. The courses vary in quality; the ones on Mandarin Chinese are excellent.
* Vocabulary.com. For memorizing English vocabulary.

Am I missing anything? Please leave your suggestions in the comments section.
“… there are an infinite number of alternate dimensions out there. And somewhere out there you can find anything you might imagine. What I imagine is out there is a bunch of evil characters bent on destroying our time stream!”—Lord Simultaneous
… does the fact that there’s been no obvious contact suggest that the answer to the transdimensional variant of the Fermi paradox is that once you’ve gone down one leg of the Trousers of Time, there’s no way to affect any other leg, no matter how much you try to cheat?
The Fermi paradox includes us knowing a lot about the density of stuff in the visible universe. You’d expect expansionistic life to populate most of a galaxy in short order since there are only the three dimensions to expand in. The Everett multiverse is a bit bigger. Would you still get a similar expansion model for a difficult to discover cheat, or could we end up with effects only observable in a minuscule fraction of all branches even if a cheat was possible, but was difficult enough to discover?
It may also suggest that there are also a bunch of good characters stopping the bad characters, and that they’re doing a good job.
Or that the bad guys don’t do anything by half measures and any contact is overwhelmingly likely to result in total destruction. Anthropic effects explain why we haven’t seen them. (This reasoning also applies to the original Fermi paradox by the way)
Or maybe just that affecting other universes happens to be so costly w.r.t any value system that could potentially produce technology.
We know so little about trans-universe physics that it’s of very little use to speculate.
Does the Schrodinger equation tell us how to increase the relative probability of interacting with an almost completely orthogonal Everett Branch?
“Almost completely orthogonal” here bears qualifying: In classical thermodynamics, the concept of entropy is sometimes taught by appealing to the probability of all of the gas in a room happening to end up in a configuration where one half of the room is vacuum, and the other half of the room contains gas. After some calculation, we see that the probability of this happening ends up being (effectively) on the order of 10^(-10^23), give or take several orders of magnitude (not like it matters at that point).
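Spelling out that calculation as a sketch (assuming a round number of molecules, each independently equally likely to be in either half of the room):

```latex
P \;=\; \left(\tfrac{1}{2}\right)^{N}
  \;=\; 10^{-N \log_{10} 2}
  \;\approx\; 10^{-0.30\,N},
\qquad
N \sim 10^{23}
\;\Rightarrow\;
P \sim 10^{-3 \times 10^{22}}.
```

The exact exponent depends on the molecule count, hence the “give or take several orders of magnitude.”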
Now, that said, how confident are you that different Everettian earths are even at the same point of spacetime we are, given a branching, say, 10 seconds ago? Pick an atom before the split and pick its two copies after. Are they still within a Bohr radius of each other after even a nanosecond? Their phases are already scrambled all to hell, so that’s a fun unitary transformation to figure out.
Sure, you can prepare highly quantum mechanical sources and demonstrate interference effects, but “interuniversal travel”, in any meaningful sense of the word, is about as hard as simply transforming the universe itself, subatomically, atom for atom, controllably into a different reality.
So in that sense, Schrodinger’s equation tells us as much about trans-universe physics as the second law of thermodynamics tells us about building large scale Maxwell’s Demons.
So in that sense, Schrodinger’s equation tells us as much about trans-universe physics as the second law of thermodynamics tells us about building large scale Maxwell’s Demons.
The second law of thermodynamics tells us everything there is to know about building large scale Maxwell’s Demons. You can’t. What else is there to it?
Schroedinger’s equation isn’t quite as good. It’s not quite impossible. But it is enough to tell us that there’s no way we’ll ever be able to do it.
The Schrodinger equation is not even at the right level of the relevant physics. It applies to non-relativistic QM. My guess is that DanielLC simply read the QM sequence and memorized the teacher’s password. World splitting, if some day confirmed experimentally, requires at least QFT or deeper, maybe some version of the Wheeler-deWitt equation.
General form of the Schrödinger equation: dΨ/dt = −(i/ħ)HΨ
Quantum Field theories are not usually presented in this form because it’s intrinsically nonrelativistic, but if you pick a reference frame, you can dump the time derivative on the left and everything else on the right as part of H and there you go.
So it’s equivalent. As calef says, there are good reasons not to actually do anything with it in that form.
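Schematically, the rearrangement being described is (a sketch, suppressing regularization and all other field-theoretic detail):

```latex
i\hbar \, \frac{\partial}{\partial t} \lvert \Psi(t) \rangle
  \;=\; \hat{H} \, \lvert \Psi(t) \rangle,
\qquad
\hat{H} \;=\; \int d^{3}x \;
  \hat{\mathcal{H}}\big( \hat{\phi}(\mathbf{x}), \hat{\pi}(\mathbf{x}) \big).
```

Once a reference frame is fixed, the field theory’s Hamiltonian (built from the field operators and their conjugate momenta) sits on the right-hand side, and the equation keeps the Schrödinger form.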
This is a little chicken-or-the-egg in terms of “what’s more fundamental?”, but nonrelativistic QFT really is just the Schrodinger equation with some sparkles.
For example, the language electronic structure theorists use to talk about electronic excitations in insert-your-favorite-solid-state-system-here really is quantum field theoretic—excited electronic states are just quantized excitations about some vacuum (usually, the many-body ground state wavefunction).
You could switch to a purely Schrodinger-Equation-motivated way of writing everything out, but you would quickly find that it’s extremely cumbersome, and it’s not terribly straightforward how to treat creation and annihilation of particles by hand.
Recent work suggests that dendrites may be able to do substantial computation themselves. This implies that getting decent uploads or getting a decent preservation from cryonics may require a more fine-grained approach than is often expected. Unfortunately, the paper itself seems to be not yet online, but it is by the same group which previously suggested that dendrites could be partially responsible for memory storage.
Smith et al. (2013). Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo.
Abstract (emphasis mine):
Silicon Valley’s Ultimate Exit, a speech at Startup School 2013 by Balaji Srinivasan. He opens with the statement that America is the Microsoft of nations, goes into a discussion on Voice, Exit and good governance and continues with the wonderful observation that:
He names this the Paper Belt, and claims the Valley has been unintentionally dumping horse heads in all of their beds for the past 20 years. I would call it The Cathedral and note the NYT does not approve of this kind of talk:
No seriously, that is the very first line.
Transcript
I love this speech, but I suspect it’s overoptimistic. I believe that bitcoin will be illegal as soon as it’s actually needed.
Still, I appreciate his appreciation of immigration/emigration. I’m convinced that immigration/emigration gets less respect than staying and fighting because it’s less dramatic, less likely to get people killed, and more likely to work.
That is likely, but note that torrenting Lady Gaga’s mp3s is also illegal and yet I have absolutely zero difficulty in finding such torrents on the ’net.
Maintaining a currency takes a much more complicated information structure than letting people make unlimited copies of something.
What do you mean, “maintaining”? Bitcoin was explicitly designed to function in a distributed manner without the need for any central authority.
And consequently it has a much more complicated information structure than torrents do. :) But this aside, while you can likely run the Bitcoin economy as such, if Bitcoins cannot be exchanged for dollars or directly for goods and services, they are worthless; and this is a bottleneck where a government has a lot of infrastructure to insert itself. I suggest that, if Bitcoins become illegal, buying hard drugs is the better analogy than downloading torrents: It won’t be impossible, but it’ll be much more difficult than setting up a free client and clicking “download”.
The differences between the physical and the virtual worlds are very relevant here.
Silk Road was blatantly illegal and it took the authorities years to bust its operator, a US citizen. Once similar things are run by, say, Malaysian Chinese out of Dubai with hardware scattered across the world, the cost for the US authorities to combat them would be… unmanageable.
What probability should I assign to being completely wrong and brainwashed by Lesswrong? What steps would one take to get more actionable information on this topic? For each new visitor who comes in and accuses us of messianic groupthink, how far should I update in the direction of believing them? Am I going to burn in counterfactual hell for even asking?
If Lesswrong were good at brainwashing, I would expect many more people to have signed up for cryonics.
Spend time outside of Lesswrong and discuss with smart people. Don’t rely on a single community to give you your map of the world.
The first thing you should probably do is narrow down what specifically you feel like you may be brainwashed about. I posted some possible sample things below. Since you mention Messianic groupthink as a specific concern, some of these will relate to Yudkowsky, and some of them are Less Wrong versions of cult related control questions. (Things that are associated with cultishness in general, just rephrased to be Less Wrongish)
Do you/Have you:
1: Signed up for Cryonics.
2: Aggressively donated to MIRI.
3: Checked for updates on HPMOR more often than Yudkowsky said there would be, on the off chance he updated early.
4: Gone to meetups.
5: Gone out of your way to see Eliezer Yudkowsky in person.
6: Spent time thinking, when not on Less Wrong: “That reminds me of Less Wrong/Eliezer Yudkowsky.”
7: Played an AI Box experiment with money on the line.
8: Attempted to engage in a quantified self experiment.
9: Cut yourself off from friends because they seem irrational.
10: Stopped consulting other sources outside of Less Wrong.
11: Spent money on a product recommended by someone with high Karma (Example: Metamed)
12: Tried to recruit other people to Less Wrong and felt negatively if they declined.
13: Written rationalist fanfiction.
14: Decided to become polyamorous.
15: Feel as if you have sinned any time you receive even a single downvote.
16: Gone out of your way to adopt Less Wrong styled phrasing in dialogue with people that don’t even follow the site.
For instance, after reviewing that list, I increased my certainty I was not brainwashed by Less Wrong because there are a lot of those I haven’t done or don’t do, but I also know which questions are explicitly cult related, so I’m biased. Some of these I don’t even currently know anyone on the site who would say yes to them.
No
I’m a top 20 donor
Nope
Yes.
Not really? That was probably some motivation for going to a minicamp but not most of it.
Nope.
Nope.
A tiny amount? I’ve tracked weight throughout diet changes.
Not that I can think of. Certainly no one closer than a random facebook friend.
Nope.
I’ve spent money on Modafinil after it’s been recommended on here. I could count Melatonin but my dad told me about that years ago.
Yes.
Nope.
I was in an open relationship before I ever heard of Lesswrong.
HAHAHAHAHAH no.
This one is hard to analyze, I’ve talked about EM hell and so on outside of the context of Lesswrong. Dunno.
Seriously considering moving to the bay area.
I’m in the process of doing 1, have maybe done 2 depending on your definition of aggressively (made only a couple donations, but largest was ~$1000), and done 4.
Oh, and 11, I got Amazon Prime on Yvain’s recommendation, and started taking melatonin on gwern’s. Both excellent decisions, I think.
And 14, sort of. I once got talked into a “polyamorous relationship” by a woman I was sleeping with, no connection whatsoever to LessWrong. But mostly I just have casual sex and avoid relationships entirely.
… huh. I’m a former meetup organizer and I don’t even score higher than two on that list.
Cool. I think maybe we’re not a cult today.
Good. I score 5 out of 16 by interpreting each point in the broadest reasonable way.
Beware: you’ve created a lesswrong purity test.
In “The Inertia of Fear and the Scientific Worldview”, by the Russian computer scientist and Soviet-era dissident Valentin Turchin, in the chapter “The Ideological Hierarchy”, Soviet ideology was analyzed as having four levels: philosophical level (e.g. dialectical materialism), socioeconomic level (e.g. social class analysis), history of Soviet Communism (the Party, the Revolution, the Soviet state), and “current policies” (i.e. whatever was in Pravda op-eds that week).
According to Turchin, most people in the USSR regarded the day-to-day propaganda as empty and false, but a majority would still have agreed with the historical framework, for lack of any alternative view; and the number who explicitly questioned the philosophical and socioeconomic doctrines would be exceedingly small. (He appears to not be counting religious people here, who numbered in the tens of millions, and who he describes as a separate ideological minority.)
BaconServ writes that “LessWrong is the focus of LessWrong”, though perhaps the idea would be more clearly expressed as, LessWrong is the chief sacred value of LessWrong. You are allowed to doubt the content, you are allowed to disdain individual people, but you must consider LW itself to be an oasis of rationality in an irrational world.
I read that and thought, meh, this is just the sophomoric discovery that groupings formed for the sake of some value have to value themselves too; the Omohundro drive to self-protection, at work in a collective intelligence rather than in an AI. It also overlooks the existence of ideological minorities who think that LW is failing at rationality in some way, but who hang around for various reasons.
However, these layered perspectives—which distinguish between different levels of dissent—may be useful in evaluating the ways in which one has incorporated LW-think into oneself. Of course, Less Wrong is not the Soviet Union; it’s a reddit clone with meetups that recruits through fan fiction, not a territorial superpower with nukes and spies. Any search for analogies with Turchin’s account, should look for differences as well as similarities. But the general idea, that one may disagree with one level of content but agree with a higher level, is something to consider.
Or not.
Could people list philosophy-oriented internet forums with a high concentration of smart people and no significant memetic overlap, so that one could test this? I don’t know of any, and I think that’s dangerous.
I would love to see this as well
I’d suggest starting by reading up on “brainwashing” and developing a sense of what signs characterize it (and, indeed, if it’s even a thing at all).
Presumably this depends on how much new evidence they are providing relative to the last visitor accusing us of messianic groupthink, and whether you think you updated properly then. A dozen people repeating the same theory based on the same observations is not (necessarily) significantly more evidence in favor of that theory than five people repeating it; what you should be paying attention to is new evidence.
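The identity behind that (a sketch): if a new accusation E2 is just a rehash of the first one E1, in the sense that knowing E1 already fixes how likely E2 is regardless of the hypothesis H, then updating on E2 moves nothing:

```latex
P(E_2 \mid E_1, H) = P(E_2 \mid E_1, \lnot H)
\;\Longrightarrow\;
P(H \mid E_1, E_2)
  \;=\; \frac{P(E_2 \mid E_1, H) \, P(H \mid E_1)}{P(E_2 \mid E_1)}
  \;=\; P(H \mid E_1).
```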
Note that your suggestions are all within the framework of the “accepted LW wisdom”. The best you can hope for is to detect some internal inconsistencies in this framework. One’s best chance of “deconversion” is usually to seriously consider the arguments from outside the framework of beliefs, possibly after realizing that the framework in question is not self-consistent or leads to personally unacceptable conclusions (like having to prefer torture to specks). Something like that “worked” for palladias, apparently. Also, I once described an alternative to the LW epistemology (my personal brand of instrumentalism), but it did not go over very well.
Brainwashing (which is one thing drethlin asked about the probability of) is not an LW concept, particularly; I’m not sure how reading up on it is remaining inside the “accepted LW wisdom.”
If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I’m being brainwashed. And, yes, if I conclude that it’s likely that I’m being brainwashed, there are various deconversion techniques I can use to negate that.
Of course, seriously considering arguments from outside the framework of beliefs is a good idea regardless.
Being completely wrong, admittedly, (the other thing drethlin asked about the probability of) doesn’t lend itself to this approach so well… it’s hard to know where to even start, there.
Reading up on brainwashing can mean reading gwern’s essay, which concludes that brainwashing doesn’t really work. Of course, that’s exactly what someone who wants to brainwash you would tell you, isn’t it?
Sure. I’m not exactly sure why you’d choose to interpret “read up on brainwashing” in this context as meaning “read what a member of the group you’re concerned about being brainwashed by has to say about brainwashing,” but I certainly agree that it’s a legitimate example, and it has exactly the failure mode you imply.
For what it’s worth, gwern’s findings are consistent with mine (see this thread). I’d rather restrict “brainwashing” to coercive persuasion, i.e. indoctrinating prisoners of war or what have you, but Scientology, the Unification Church, and so forth also seem remarkably poor at long-term persuasion. It’s difficult to find comparable numbers for large, socially accepted religions, or for that matter nontheism—more of the conversion process plays out in the public sphere, making it harder to delineate, and ulterior motives (i.e. converting to a fiancee’s religion) are much more common—but if you read between the lines they seem to be higher.
Deprogramming techniques aren’t much better, incidentally—from everything I’ve read they range from the ineffective to the abusive, and often have quite a bit in common with brainwashing in the coercive sense. You couldn’t apply most of them to yourself, and wouldn’t want to in any case.
No argument there. What I alluded to is the second part, incremental “Bayesian” updating based on (independent) new evidence. This is more of an LW “inside” thing.
Ah! Yes, fair.
Sorry, I wasn’t trying to be nonresponsive, that reading just didn’t occur to me. (Coincidence? Or a troubling sign of epistemic closure?)
I will admit, the idea that I should update beliefs based on new evidence, but that repeatedly presenting me with the same evidence over and over should not significantly update my beliefs, seems to me nothing but common sense.
Of course, that’s just what I should expect it to feel like if I were trapped inside a self-reinforcing network of pernicious false beliefs.
So, all right… in the spirit of seriously considering arguments from outside the framework, and given that as a champion of an alternative epistemology you arguably count as “outside the framework”, what would you propose as an answer to drethlin’s question about how far they should update based on each new critic?
Hmm. My suspicion is that formulating the question in this way already puts you “inside the box”, since it uses Bayesian terms to begin with. Something like trying to detect problems in a religious moral framework after postulating objective morality. Maybe this is not a good example, but a better one eludes me at the moment. To honestly try to break out of the framework, one has to find a way to ask different questions. I suspect that am too much “inside” to figure out what they could be.
(nods) That’s fair.
And I can certainly see how, if we did not insist on framing the problem in terms of how to consistently update confidence levels based on evidence in the first place, other ways of approaching the “how can I tell if I’m being brainwashed?” question would present themselves. Some traditional examples that come to mind include praying for guidance on the subject and various schools of divination, for example. Of course, a huge number of less traditional possibilities that seem equally unjustified from a “Bayesian” framework (but otherwise share nothing in common with those) are also possible.
I’m not terribly concerned about it, though.
Then again, I wouldn’t be.
Wrong about what? Different subjects call for different probabilities.
The probability that Bayes’ theorem is wrong is vanishingly small. The probability that the UFAI risk is completely overblown is considerably higher.
LW “ideology” is an agglomeration in the sense that accepting (or not) a part of it does not imply acceptance (or rejection) of other parts. One can be a good Bayesian, not care about UFAI, and be signed up for cryonics—no logical inconsistencies here.
As long as the number is small, I wouldn’t update at all, because I already expect a slow trickle of those people on my current information, so seeing that expectation confirmed isn’t new evidence. If LW achieved a Scientology-like place in popular opinion, though, I’d be worried.
No.
Less Wrong has some material on this topic :)
Seriously though, I’d love to see some applied-rationality techniques put to use successfully doubting parts of the applied rationality worldview. I’ve seen some examples already, but more is good.
The biggest weakness, in my opinion, of purely (or almost purely) probabilistic reasoning is that it cannot ultimately do away with our reliance on a number of (ultimately faith/belief-based) choices as to how we understand our reality.
The existence of the past and future (and within most people’s reasoning systems, the understanding of these as linear) are both ultimately postulations that are generally accepted at face value, as well as the idea that consciousness/awareness arises from matter/quantum phenomena not vice versa.
In your opinion, is there some other form of reasoning that avoids this weakness?
That’s a very complicated question but I’ll try to do my best to answer.
In many ancient cultures, they used two words for the mind, or for thinking, and it is still used figuratively today. “In my heart I know...”
In my opinion, in terms of expected impact on the course of life for a given subject, generally, more important than their understanding of Bayesian reasoning, is what they ‘want’ … how they define themselves. Consciously, and unconsciously.
For “reasoning”, no, I doubt there is a better system. But since we must (or almost universally do) follow our instincts on a wide range of issues (are other people p-zombies? am I real? Is my chair conscious? Am I dreaming?), it is highly important, and often overlooked, that one’s “presumptive model” of reality and of themselves (both strictly intertwined psychologically) should be perfected with just as much effort (if not more) as we spend perfecting our probabilistic reasoning.
Probabilities can’t cover everything. Eventually you just have to make a choice as to which concept or view you believe more, and that choice changes your character, and your character changes your decisions, and your decisions are your life.
When one is confident, and subconsciously/instinctively aware that they are doing what they should be doing, thinking how they should be thinking, that their ‘foundation’ is solid (moral compass, goals, motivation, emotional baggage, openness to new ideas, etc.) they then can be a much more effective rationalist, and be more sure (albeit only instinctively) that they are doing the right thing when they act.
Those instinctive presumptions, and life-defining self image do have a strong quantifiable impact on the life of any human, and even a nominal understanding of rationality would allow one to realize that.
Maximise your own effectiveness. Perfect how your mind works, how you think of yourself and others (again, instinctive opinions, gut feelings, more than conscious thought, although conscious thought is extremely important). Then when you start teaching it and filling it with data you’ll make far fewer mistakes.
All right. Thanks for clarifying.
I think the discussion about the value of Bayesianism is good (this post and following).
You’d be crazy not to ask. The views of people on this site are suspiciously similar. We might agree because we’re more rational than most, but you’d be a fool to reject the alternative hypothesis out of hand. Especially since they’re not mutually exclusive.
Use culture to contrast with culture. Avoid being a man of a single book and get familiar with some past or present intellectual traditions that are distant from the LW cultural cluster. Try to get a feel for the wider cultural map and how LW fits into it.
I’d say you should assign a very high probability for your beliefs being aligned in the direction LessWrong’s are, even in cases where such beliefs are wrong. It’s just how the human brain and human society works; there’s no getting around it. However, how much of that alignment is due to self-selection bias (choosing to be a part of LessWrong because you are that type of person) or brainwashing is a more difficult question.
From Venter’s new book:
And later:
The Sequences probably contain more material than an undergraduate degree in philosophy, yet there is no easy way for a student to tell if they understood the material properly. Some posts contain an occasional question/koan/meditation which is sometimes answered in the course of a subsequent post, but these are pretty scarce. I wonder if anyone qualified would like to compile a problem set for each topic? Ideally with unambiguous answers.
I also think this is a worthwhile endeavor, and speculate that the process and results may be useful for development of a general rationality test, which I know CFAR has some interest in.
Is Our Final Invention available as any kind of e-book anywhere? I can find it in hardback, but not for Kindle or any kind of ePub. I’m not going to start carrying around a pile of paper in order to read it!
You can send it to http://1dollarscan.com/ and then read it on your Kindle.
What do you mean by “anywhere”? As Vincent Yu mentions, it is available in the US. It hasn’t been published in print or ebook in the UK. When you find it in hardback, it’s imports, right? If it is published in the UK, it will probably be available as an ebook, but I don’t know if that will happen before the US edition is pirated. If you are generally chomping at the bit to read American ebooks, it is worth investing the time to learn if any ebook sellers fails to check national boundaries. The publisher lists six for this book.
Probably not useful, but the US edition is available in France. (Rights to publish English-language books in countries that don’t speak English aren’t very valuable, so the monopolies to the US and UK usually include those rights. So if you’re in France, you can get the ebook first, regardless of whether it’s published in the US or UK. Unless they forget to make it available in France.)
I see a Kindle edition on Amazon.
That page only shows me a price for the hardcover version. I wonder if it’s because I have a UK IP address? How much is the Kindle version?
I can see it from Finland, lists the price as $16 for me.
I think it is more likely rejecting you based on being logged in than based on IP, since I can see UK and FR results. Google cache of that link, both at google.com and google.co.uk show me the kindle edition. ($11)
Am I the only person getting more and more annoyed by the cult thing? If the whole ‘lesswrong is a cult’ thing is not a meme that’s spreading just because people are jumping on the bandwagon, then I don’t know what is. Can you seriously not tell? Additionally, from my POV it seems like people starting ‘are we a cult’ threads/conversations do it mainly for signaling purposes.
Also, I bet new members wouldn’t usually even think about whether we are a cult or not if older members were not talking about it like it is a real possibility all the bloody time. (and yes I know, the claim is not made only by people who are part of the community)
/rant
It especially annoys me when people respond to evidence-based arguments that LessWrong is not a cult with, “Well where did you come to believe all that stuff about evidence, LessWrong?”
Before LessWrong, my epistemology was basically a clumsier version of what it is now. If you described my present self to my past self, and said “Is this guy a cult victim?” he would ask for evidence. He wouldn’t be thinking in terms of Bayes’s theorem, but he would be thinking with a bunch of verbally expressed heuristics and analogies that usually added up to the same thing. I used to say things like “Absence of evidence is actually evidence of absence, but only if you would expect to see the evidence if the thing was true and you’ve checked for the evidence,” which I was later delighted to see validated and formalized by probability theory.
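For reference, the formalization in question (a standard identity, nothing LW-specific): the prior is a weighted average of the two possible posteriors, so if observing E would raise the probability of H, failing to observe E must lower it:

```latex
P(H) \;=\; P(E) \, P(H \mid E) \;+\; P(\lnot E) \, P(H \mid \lnot E),
\qquad\text{hence}\qquad
P(H \mid E) > P(H) \;\Longleftrightarrow\; P(H \mid \lnot E) < P(H),
```

assuming 0 < P(E) < 1.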
You could of course say, “Well, that’s not actually your past self, that’s your present self (the cult victim)’s memories, which are distorted by mad thinking,” but then you’re getting into brain-in-a-vat territory. I have to think using some process. If that process is wrong but unable to detect its own wrongness, I’m screwed. Adding infinitely recursive meta-doubt to the process just creates a new one to which the same problem applies.
I’m not particularly worried that my epistemology is completely wrong, because the pieces of my epistemology, when evaluated by my epistemology, appear to do what they’re supposed to. I can see why they would do what they’re supposed to by simulating how they would work, and they have a track record of doing what they’re supposed to. There may be other epistemologies that would evaluate mine as wrong. But they are not my epistemology, so I don’t believe what they recommend me to believe.
This is what someone with a particular kind of corrupt epistemology (one that was internally consistent) would say. But it is also the best anyone with an optimal epistemology could say. So why Mestroyer::should my saying it be cause for concern? (this is an epistemic “Mestroyer::should”)
I can identify with this. Reading through the sequences wasn’t a magical journey of enlightenment, it was more “Hey, this is what I thought as well. I’m glad Eliezer wrote all this down so that I don’t have to.”
Something more than a salon and less than a movement, then… English has a big vocabulary, there must be a good word for it.
What’s wrong with ‘forum’?
Ah, just like 4Chan then? X-D
I think people also say things just to get conversation going. We need to look at making it easier to find useful ways of getting attention.
Can you expand on this?
I believe that one of the reasons people are boring and/or irritating is that they don’t know good ways of getting attention. However, being clever or reassuring or whatever might adequately repay attention isn’t necessarily easy. Could it be made easier?
(nods) Thank you.
I wonder how far a community interested in solving the “boring/irritating people” problem could get by creating a forum whose stated purpose was to respond in an engaged, attentive way to anything anyone posts there. It could be staffed by certified volunteers who were trained in techniques of nonviolent communication and committed to continuing to engage with anyone who posted there, for as long as they chose to keep doing so, and nobody but staff would be permitted to reply to posters.
Perhaps giving them easier-to-obtain attention will cause them to leave other forums where attention requires being clever or reassuring or similarly difficult valuable things.
I’m inclined to doubt it, though.
I am somewhat tangentially reminded of a “suicide hotline” (more generally, a “call us if you’re having trouble coping” hotline) where I went to college, which had come to the conclusion that they needed to make it more okay to call them, get people in the habit of doing so, so that people would use their service when they needed it. So they explicitly started the campaign of “you can call us for anything. Help on your problem sets. The Gross National Product of Kenya. The average mass of an egg. We might not know, but you can call us anyway.” (This was years before the Web, let alone Google, of course.)
You say “cult” like it’s a bad thing.
Seriously though, using a term with negative connotations is not a rational approach to begin with. Like asking “is this woman a slut?”. It presumes that a higher-than-average number of sexual partners is necessarily bad or immoral. Back to the cult thing: why does this term have a derogatory connotation? Says wikipedia:
Some of the above clearly does not apply (“kidnapping”), and some clearly does (“systematic programs of indoctrination, and perpetuation in middle-class communities”—CFAR workshops, Berkeley rationalists, meetups). Applicability of other descriptions is less clear. Do the Sequences count as brainwashing? Does the (banned) basilisk count as psychological abuse?
Matching of LW activities and behaviors to those of a cult (a New Religious Movement is a more neutral term) does not answer the original implicit accusation: that becoming affiliated, even informally, with LW/CFAR/MIRI is a bad thing, for some definition of “bad”. It is this definition of badness that is worth discussing first, when a cult accusation is hurled, and only then whether a certain LW pattern is harmful in this previously defined way.
*I say “cult” like it carries negative connotations for most people.
I expanded on what I meant in my reply. Sorry about the ninja edit.
Lesswrong is the Rocky Horror of atheist/skeptic groups!
Being a cult is a failure mode for a group like this. Discussing failure modes has some importance.
There is also the issue that cults are powerful, and speaking of Lesswrong as a cult implies that Lesswrong has a certain power.
A really unlikely failure mode. The cons of discussing whether we are a cult outweigh the pros in my book—especially when it is discussed all the time.
When will these rants go meta?
nope http://lesswrong.com/lw/atm/cult_impressions_of_less_wrongsingularity/60ub
I believe we are a cult. The best cult in the world. The one whose beliefs work. Otherwise, we’re the same: an unusual cause, a charismatic ideological leader, and, what distinguishes us from a school of philosophy or even a political party, an eschatology to worry about; an end-of-the-world scenario. Unlike other cults, though, we wish to prevent or at least minimize the damage of that scenario, while most of them are enthusiastic in hastening it. For a cult, we’re also extremely loose on rules to follow; we don’t ask people to cast off material possessions (though we encourage donations) or to cut ties with old family and friends (it can end up happening because of de-religion-ing, but that’s an unfortunate side effect, and it’s usually avoidable).
I could list off a few more traits, but the gist of it is this; we share a lot of traits with a cult, most of which are good or double-edged at worst, and we don’t share most of the common bad traits of cults. Regardless of whether one chooses to call us a cult or not, this does not change what we are.
You are using a very loose definition of a cult. Surely you know that ‘cult’ carries some different (negative) connotations for other people?
It might not change what we are but it has some negative consequences. People like you who call us a cult while using a different meaning of ‘cult’ turn new members away because they hear that LessWrong is a cult and they don’t hear your different meaning of the word (which excludes most of the negative traits of bloody cults).
Why a “bloody” cult? What image does “cult” summon in your mind? Cthulhu followers? Osho’s Cadillac collection? The Kool Aid?
I’m beginning to see where you’re going with this. Calling us a cult is like calling Martin Luther King a criminal. Technically correct, but misleading, because of the baggage the word carries.
We would do well, then, to list all the common traits and connotations of a cult, good and bad, and all the ways we are demonstrably different or better than that. That way, we’d have a ready-made response we could release calmly in the face of accusations of culthood, without going through the embarrassing and stammering “yeahbut” kind of argument that gives others the opportunity to act as inquisitors.
Nevertheless, I for one believe we shouldn’t reject the word, but reappropriate it, if only to throw our critics off-balance. “Less Wrong is a cult!” “Fracking right it is!” ”… Wait, you aren’t denying it?” “Nope. But, please, elaborate, what’s wrong with us being a cult, precisely?”
Conversely, we could skip the listing of traits and the construction of ready-made responses that we could release uniformly and the misleading self-descriptions, and move directly to “What would be wrong with that, if we were?”
Anyone who can answer that question has successfully tabooed “cult” and we can now move on to discussing their actual concerns, which might even be legitimate ones.
Engaging on the topic further with anyone who can’t answer that question seems unlikely to be productive.
The problem with cults is that they tend to assign the probability of 1 to their priors.
Of course it’s a general problem with religions, but quoting Ambrose Bierce’s The Devil’s Dictionary from memory...
Religion, n. -- a large successful cult.
Cult, n. -- a small unsuccessful religion.
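Sketching why a probability-1 prior is the pathological case: Bayes’ theorem can never move it, no matter what evidence arrives.

```latex
P(H) = 1
\;\Longrightarrow\;
P(H \mid E) \;=\; \frac{P(E \mid H) \, P(H)}{P(E)}
            \;=\; \frac{P(E \cap H)}{P(E)}
            \;=\; \frac{P(E)}{P(E)} \;=\; 1
\qquad\text{for any } E \text{ with } P(E) > 0.
```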
Well, one of LW’s dogmas seems to be that 0 and 1 are not probabilities, so...
Ah, but that’s just the sort of thing we’d want you to think we believe, to throw you off the scent!
Do we assign the probability of 1 to our priors?
LW regulars are a diverse bunch. Though I have no evidence I am pretty sure some assign the probability of 1 to some priors.
Yes, I agree.
When you said that the problem with religions is that they tend to assign the probability of 1 to their priors, did you mean to include having some members who assign probability 1 to some priors in the category you were identifying as problematic?
Insofar the religion encourages or at least accepts those “some” members and stands behind them, and insofar these priors are important, yes.
OK, cool. Thanks for clarifying.
You make a very good point.
And has this nominally very good point changed your beliefs about anything related to this topic?
It changed my preferred method of approach slightly; I skip the “Yeah we’re a cult” and go straight to the “So what?” It’s a simple method: answer with a question, dodge the idiocy.
Cool.
Substitute ‘fucking’ for ‘bloody’ to get the intended meaning.
Might help but not calling ourselves a cult will probably lead to better PR.
Somewhat amusingly, there’s another tangent somewhere in this discussion (about polyamory, recruiting sex partners, etc.) on which ‘fucking cult’ could also be over-literally interpreted.
I think there are too many superficial similarities for critics, opponents and trolls not to capitalize on them, leaving us in the awkward position of having to explain the difference to these self-styled inquisitors like we’re somehow ashamed of ourselves.
It’s not enough to agree not to call ourselves a cult (and not just because people will, willfully or unwittingly, break this agreement, probably frequently enough to make the policy useless for PR effects).
We need to have an actual plan to deal with it. I say proclaiming ourselves “the best cult in the world” functions as a refuge in audacity, causes people to stop their inquiry, listen up, think, because it breaks patterns.
Saying “we’re not a cult, really, stop calling us a cult, you meanies” comes off as a suspicious denial, prompting a “that’s what they all say” response, simply by pattern-matching to how a guilty party would usually behave. To make ourselves above suspicion, we need to behave differently than a guilty party would.
On a tangential note, I found it useful in raising the sanity waterline precisely among the sort of people who’d be suckers for cults, the sort that wouldn’t go to a doctor and would prefer to resort to homeopathy or acupuncture. By presenting EY as my “guru” and using his more mystical-style works (the “Twelve Virtues of a Rationalist”, for example), I managed to get them in contact with the values we preach rather than with the concrete, science-based notions (these guys are under the impression that Science Is Evil and humanity is Doomed to suffer Gaia’s Revenge etc. etc.). With this, I hope to progressively introduce them to an ideology that values stuff that’s actually proven to work, with an idealism and an optimism that is far apart from their Positivist-inspired view of Science as a cold and heartless exploitation machine.
I’m not very confident on that last bit, though, so I suppose I could easily be argued into dropping it.
That’s not what I am saying at all. I am not saying that we should stop people from calling us a cult. I am saying that WE (or maybe YOU) should stop starting threads and conversations about how we are a cult, whether we are a cult, and so on. As I said—if people on Lesswrong weren’t questioning themselves about whether they are in a cult (there is one such thread in this OT, for example), which is ridiculous, and weren’t bringing attention to it all the time, the external cult-calling wouldn’t be so strong either.
And again—to make it clearer: People, please stop bringing up the damn cult stuff all the time. Sure—respond to outsiders when they ask you whether we are a cult but don’t start such a conversation yourself. And don’t mention the cult bollocks when you are telling people about Lesswrong for the first time.
If the problem is the folks among us worrying about us being a cult, not talking about it will only make them worry more. Their concerns should be treated seriously (“Supposing we were a cult, what’s wrong with that?” is indeed a good approach), no matter how stupid they may turn out to be, and they should be reassured with proper arguments, rather than dismissed out of hand. Intimidating outsiders into feeling stupid is, I think, a valid short-time tactic, but when it comes to our folks, we owe each other to examine things clearly.
Since the problem seems to pop up spontaneously as well as propagate memetically, I would suggest making an FAQ with all the common concerns, addressing them in a fair and conclusive manner, that will leave their minds at peace. And not in a short-term, fuzzily-reassuring bullshit kind of peace, but a “problem solved, question dissolved, muthahubber” kind of peace.
I assume you haven’t read this and related posts and the countless other discussions on the topic? The topic has been overdiscussed already. My problem is that people keep bringing it up all the time and people (and search engines) start associating ‘lesswrong’ with ‘cult’.
Well then why don’t you just link people to this every time you see the problem pop up? I certainly will.
Sorry, I’m going to be a freaking pedant here, but this is a bit of a pet peeve of mine. That is a physical impossibility. Please refrain from this kind of hyperbole and use the appropriate adjective; in this case, many. Thank you.
I can’t count them = they are subjectively countless for me. Happy now?
Sure you could, you just have other stuff you’d rather do, which is totally okay :)
Nope, I don’t even have access to information regarding most of the discussions that have taken place ;)
Tangentially… while encouraging others to provide links to relevant past discussions when a subject comes up is a fine thing, it ought not substitute for encouraging in ourselves the habit of searching for relevant past discussions before bringing a subject up.
Actually, a huge problem I have with LW is the sheer amount of discussions-inside-discussions we have. Especially in the Sequences, there’s just too many comments to humanly read. If we could make summaries of the consensus on any specific topic, and keep them updated as discussions progress...
I’m not suggesting reading all the comments everywhere.
I agree that there’s a lot of them, and while I think your estimate of human capability here is low, I can certainly sympathize with the lack of desire to bother reading them all.
I am suggesting that Google is your friend.
Googling “site:lesswrong.com cult,” for example, is a place to start if you’re actually interested in what people have said on this topic in the past.
As far as publishing updated summaries of LW consensus by topic goes, sure, if someone wanted to do that work they’d be welcome to do so.
You might also find the LW wiki useful, if you decide you’re willing to do some looking around (the link is at the top of the site).
For example, someone has taken the time to maintain a jargon file there, in the hopes of making local jargon more accessible to people. I realize it’s not quite as useful to newcomers as someone explaining the jargon each time, or as everyone restricting themselves to mainstream language all the time, but it might be better than nothing.
On a site like this, how do we tell the difference?
Accumulated karma is usually a good metric. The jargon, and the ideological equipment and epistemological approach, are also important signs to look out for. So is the degree of mean-spiritedness. Subjective is not the same as meaningless.
Gotcha.
For my own part I endorse intimidating people who demonstrate mean-spirited behavior into silence (whether by making them feel stupid, if that works, or some other mechanism). Depending on what you mean by “ideological equipment and epistemological approach”, I might endorse the same tactic there as well.
Neither of those endorsements depends much on how long those people have been contributing, or how much karma they’ve accumulated, or what jargon they use.
Have to be careful about that—if you’re being trolled there is noticeable potential for an epic fail :-)
I endorse intimidating them into silence.
I don’t endorse doing something ineffectual or counterproductive with the intention of intimidating them into silence.
Do we?
Well, OK.
One place to start planning is by identifying desired outcomes, and then suggesting actions that might lead to those outcomes. So… what do we expect to have achieved, once we’ve dealt with it?
Another place to start, which is where you seem to be starting, is by arguing the merits of various proposed solutions.
That’s usually not an ideal place to start unless our actual goal is to champion a particular set of actions, and the problem is being identified primarily in order to justify those actions. But, OK, if we’re going down that path… you’ve identified two possible solutions:
Proclaiming ourselves “the best cult in the world”
Saying “we’re not a cult, really, stop calling us a cult, you meanies”
And you’ve argued, compellingly, that #2 is a bad plan, with which I agree completely.
I will toss another contender into the mix:
Asking “assuming we are, so what?” and going on about our business.
There of course exist other options.
Desired outcome:
Our folks stop worrying about whether they’re in a cult.
New folks stop worrying about joining a cult.
Outsiders stop worrying about our potential agendas as a cult.
And I don’t mean “relax”, I mean stop worrying, by virtue of knowing for a fact that their concerns are unfounded.
You do understand that these are the desired outcomes of a bona fide cult as well, right?
If you kidnap virgins to sacrifice in jungle hideouts while waiting for the UFOs to arrive, that’s what you’d want, too.
Yes, well, the desired outcome of both a criminal and an innocent when facing an investigation is to be found innocent. That they both share this trait is irrelevant to their guilt. An innocent certainly shouldn’t start worrying about maybe being guilty just because he doesn’t want to be found guilty, that’s just stupid.
OK. Given that desired outcome, I’d suggest your next steps would be:
figure out who worries about whether they’re in/joining a cult, and what causes them to worry about that
figure out which outsiders worry about our agendas, and what causes them to worry about that
address the concerns that cause those worries.
How best to go about step 3 will depend a lot on the results of steps 1 and 2.
Do you have any theories about 1 and 2?
Things I worry about:
Some members make large donations
There is secret knowledge that you pay for (ai-box)
Members do some kooky things (cryonics, polyamory)
Members claim “rationality” has helped them lose weight or sleep better—subjective things without controls—rather than something more measurable and where a mechanism is more obvious.
At least one thing is not supposed to be discussed in public (banned memetic hazard). LW members seem significantly kookier when talking about this (and in the original deleted thread) than on more public subjects.
Members have a lot of jargon. It can seem like they’re speaking their own language. More, there’s a bunch of literature embedded in the organization’s worldview; publicly this is treated as simple fiction, but internally it’s clearly taken more seriously.
Although there’s no explicit advice to sever outside ties, LW encourages (in practice if not in theory) members to act in ways that reduce their out-of-group friendships
The hierarchy is opaque; it feels like there is a clique of high-level users, but this is not public.
Wouldn’t you expect that if the cause actually made sense though? (and not only if this is a cult)
Fewer than 0.01% of users have played an ai-box game (to my knowledge), and even fewer have played it for money.
Again fairly small subset for the first thing, slightly larger for the second but I guess I will give you that one.
Probably a tiny subset of users claim that—I personally have never seen anyone claim that rationality helped them sleep better, and if you mean that evidence-based reasoning helped them find an intervention designed to increase sleep quality, you are grasping at straws.
We are not supposed to write out the actual basilisk (there is only one) on lesswrong.com. There is no problem with talking about it in public, and again this affects a tiny portion of users.
Giving you this one as well.
Bullshit.
There are just respected users and no clear-cut hierarchy—that’s what happens at most places. For a proxy of who is a high-level user, look at the ‘Top Contributors’.
This sort of point-by-point refutation is the same sort of thing that would happen in a church that was trying to defend against allegations of cultyness.
I don’t think lmm’s list of reasons was utterly compelling—good, but not utterly compelling—but I don’t think it would matter if it were a perfect list, because there will always be a defense for accusations of cultyness that satisfies the church/forum.
It is more interesting watching it happen here vs. the church IMO because LW is all about rationality, where the church can always push the “faith” button when they are backed into a logical corner.
At the end of the day, it is just an online forum. But it does sound to me (based on what I can gather from perusing) like there are a group of people here who take this stuff seriously enough so as to make cultyness possible.
I’m sure the “LW/cryonics/transhumanism/basilisk stuff is so similar to how religion works” bit got old a long time ago, but Dear Lord is it apparent and fascinating to me.
Ah, the perennial dilemma of how to respond to an accusation of cultiness. If one bothers to rebut it: that’s exactly what a cult would do! If one doesn’t rebut it: oh, the accusation must be unanswerable and hence true!
I completely understand. And I know mine is pretty cheap reasoning. But it just reminds me of what happens in a similar situation in the church. Feel free to ignore it. As I said, I’m confident it has probably been played out by now. I’m satisfied just to watch in awe.
Given how much LWers seem to care about effective charity, I’d expect more scrutiny, and a stronger insistence on measurable outcomes. I guess you’re right though; the money isn’t inherently a problem.
It seems like a defining characteristic; it’s one place where the site clearly differs from more “mainstream” AI research (though this may be a distorted perception since it was how I first heard of LW)
Shrug. It looks dodgy to me. It pattern-matches with e.g. the unverifiable stories people tell of their personal experience of Jesus.
That’s not at all clear. I’ve never seen any explicit rules. I’ve seen articles that carefully avoid saying the name.
Even on internet forums there’s usually an explicit distinction between mod and not, and often layers to it. (The one exception I know is HN, and even there people know who pg is, who’s part of YC and who’s not, and stories are presented differently if they’re coming from YC members). And it’s unusual and suspicious for the high-ups to all be on first name terms with each other. It raises questions over objectivity, oversight, conflict resolution.
It’s not. Your view is definitely distorted.
Look around then? Eliezer has even made a Reddit thread for things like that where the basilisk is freely discussed.
Yeah, and people here know who Eliezer Yudkowsky is and who is part of MIRI, which is LW’s parent organization.
I’m not active on reddit. Most forums have a link to the rules right next to the comment box; this one does not. There clearly is a chilling effect going on, because I’ve seen posts that make carefully oblique references to memetic hazards rather than just saying “don’t post the basilisk in the comments please”.
I have no idea who’s part of MIRI and which posts are or aren’t from MIRI, because we don’t do the equivalent of (YC 09) on stories here. (And HN was explicitly the worst other example I know; they could certainly stand to improve their transparency a lot).
By “first-name terms with each other”, do you mean something more than the literal meaning of “familiar with someone, such that one can address that person by his or her first name”? Because in my experience, treating other users on a first name basis is the default for all users on many Internet forums, LW included.
I meant “talk about each other as if they’re close personal friends”. (Myself I generally try to avoid using first names for people who aren’t such, but I appreciate that that’s probably a cultural difference).
I think this is more due to the number of people who have their real name as their LessWrong username than any sinister cabal.
Don’t forget that much of the inner circle actually draws a paycheck from the organizations members are encouraged to donate to, and supposedly a fairly large one at that, and that the discussion of how much to donate is framed in terms of averting the destruction of the human race.
That and the polyamory commune are the two sketchiest things IMO, since it shows that the inner circle is materially and directly benefiting from the “altruism” / “rationality” of lower ranking members.
This is a good website, mostly good people on it, but there’s also an impression that there are questionable dealings going on behind the scenes.
Would LW be improved if paid employees/consultants of those organizations were barred from membership? (I do realize there are other ways to address this concern, and I don’t mean to suggest otherwise… for example, we could re-institute the moratorium on discussing those organizations, or a limited moratorium on fundraising activities here, or various other things. I’m just curious about your opinion about that specific solution.)
I get a kick out of this, because my social circle is largely polyamorous but would mostly consider LW a charming bunch of complete nutjobs on other grounds. Polyamory really isn’t all that uncommon in communities anchored around high-tech development/elite tech schools, IME.
No, I suspect it would die. Actionable suggestion though: some kind of “badge” / note (along the lines of the “mod” or “author” tag you get in Disqus, for example). When I worked for a company with user forums it was policy that all staff had badges on their avatars and you should not post from a non-staff account while employed by the company.
That is very surprising, very different from my experiences. My social circle comes mainly from the University of Cambridge and the London tech scene; I am friends with several furries, more transsexuals, and have had dinner with the maintainer of the BDSM FAQ. Polyamory still seems fucking weird (not offensive or evil, but naive and pretentious, and correlated with what I can probably best summarize as the stoner stereotype). I’ve met a couple of openly poly people and they were perfectly friendly and had the best of intentions, but I wouldn’t trust them to organize my closet, let alone the survival of humanity.
I originally took Moss_Piglet to imply that the leadership were recruiting groupies from LW to have sex with. That I would find very disturbing and offputting. Assuming that’s not an issue, I think the weirdness of the polyamory would be amply countered by evidence of general competence / life success.
Wth? You seem very prejudiced. What makes people who have multiple relationships this much less trustworthy than ‘normal’ people to you (that you wouldn’t even trust them to organize your closet)?
This whole subject is about prejudice; we judge organizations for their cultlike characteristics without investigating them in detail.
I think I was unclear: the implication runs in the opposite direction. All the specific poly individuals I’ve met are people I wouldn’t trust to organize my closet, in terms of general life competence (e.g. ability to hold down a job, finish projects they start, keep promises to friends). As a result, when I find out that someone is poly, I consider this evidence (in the Bayesian sense) that they’re incompetent, the same way I would adjust my estimate of someone’s competence if I discovered that they had particular academic qualifications or used a particular drug. (Obviously all these factors are screened off if I have direct evidence about their actual competence level).
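For concreteness, here is a minimal sketch of the update lmm is describing, in odds form, with a purely illustrative likelihood ratio (lmm gave no numbers). Writing $C$ for “generally competent”:

\[
\frac{P(C \mid \text{poly})}{P(\neg C \mid \text{poly})}
= \frac{P(\text{poly} \mid C)}{P(\text{poly} \mid \neg C)} \cdot \frac{P(C)}{P(\neg C)}
\]

If one judged, say, $P(\text{poly} \mid \neg C) = 2\,P(\text{poly} \mid C)$, then learning that someone is poly would halve one’s odds on $C$. “Screened off” is the conditional-independence claim: given direct evidence $D$ about someone’s actual competence, $P(C \mid \text{poly}, D) = P(C \mid D)$, so the poly observation no longer moves the estimate.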
To throw in a possibly-relevant factor: age.
It seems to me that polyamory in early twenties is quite different from polyamory in late forties.
Well, I would certainly agree with this. The poly folk I know in their 40s are as a class more successful in their relationships than the poly folk I know in their 20s.
Of course, this is true of the mono folk I know, also.
Which is what I would expect if experience having relationships increased one’s ability to do so successfully.
It also agrees with the experience from my real-life social circles, which are dominated by university people from the Helsinki region. Poly seems to be treated as roughly the same as homosexuality: unusual but not particularly noteworthy one way or the other. Treatments in the popular press (articles about some particular poly triad who’s agreed to be interviewed, etc.) are also generally positive.
(nods) Fair enough. My experience is limited to the U.S. in this context.
I agree that “recruiting groupies to have sex with,” if I understand what you mean by the phrase, would be disturbing and offputting.
Moss_Piglet appears to believe that polyamory is in and of itself a bad sign, independent of whether any groupies are being recruited. Beyond that, I decline to speculate.
The ethics of conflicts of interest in the media are pretty well-trodden ground; you could adapt any journalistic ethics document into a comprehensive do’s and don’ts list. Do state your financial interests clearly and publicly, don’t mix advertising and information content, etc.
It’s not foolproof and bias is universal, but a conflict of interest on the scale of one’s career and sex life is something a rationalist ought to be at least a little concerned about.
That may well be true, but it makes little difference. The collapse of sexual mores is directly, IMO causally, linked with decreased stability and fertility; letting it take hold in the highest IQ sections of the population is a recipe for disaster. Engaging in that sort of behavior is strong evidence against a person’s character as a leader.
That’s a silly assertion. First because it’s obviously only true from the point of view of a specific morality. And second, because it ignores the empirical fact that, in full compliance with biology, powerful male leaders usually bed many women.
Actually, that sounds like a great idea. I would absolutely endorse a policy along those lines.
Ah, I see.
Sure, if I believed that failing to conform to previously established sexual mores led to decreased stability, I would no doubt agree that the poly aspects of LW and its various parent organizations were troubling.
Similar things are true of my college living group, the theater group I perform with, etc.
Come to that, I suppose similar things are true of my state government, which accepts my marriage to my husband as a family despite this being a clear violation of pre-existing sexual mores, and of large chunks of my country, which accepts marriages between people of different skin colors as families despite this being a clear violation of the sexual mores of a few decades ago.
I’d prefer not to completely derail this into a political/moral argument, since the issue at hand is a bit more relevant. Since you are, as you admit, fairly invested in your position there is little purpose to debate on this point except to distract from the main issue.
Sure; I have no interest in debating with you whether polyamory, homosexuality, miscegenation and other violations of sexual mores really are harmful. As you say, it would be a distraction.
My point was that if they were, it would make sense to object to an organization on the grounds that its leadership approves of/engages in harmful sexual-more-violating activities. What would not make sense is to differentially object to that organization on those grounds.
That is, accepting/endorsing/engaging in sexual-more-violating activities might make LW’s parent organizations dangerous, but it doesn’t make them any more dangerous than many far-more-mainstream organizations that do the same things (like my college living group, the theater group I perform with, etc).
If this one of the two sketchiest things about the organization, as you suggest, it sounds to me like the organization is pretty darned close to the mainstream, which seems rather opposed to the point lmm was trying to make, and which I originally inferred you were aligning with.
Don’t forget treating the writings of the charismatic founder as a sacred text and ritually quoting them :-)
Cool… this sort of thing is far more actionable than “seeming like a cult.”
So, next question. Taking you as representative of the group (which is of course not necessarily true, but we start from where we are)… what is your sense of where each of these falls on the spectrum between “this is legitimately worrying; in order to be less at risk for actual bad consequences LW should actually change so as not to do this” on the one hand, and “this is merely superficially worrying; there are probably no real risks here and LW should merely take steps to reassure worriers not to worry about it”?
I’m legitimately worried about the money and the incentives it creates. What would a self-interested agent (LW seems to use “agent” in exactly the opposite sense to what I’d expect it to mean, but I hope I’m clear) in the position of the LW leadership do? My cynical view is: write some papers about how the problems they need to solve are really hard; write enough papers each year to appear to be making progress, and live lives of luxury. So what’s stopping them? People in charities that provide far more fuzzies than LW have become disenchanted. People far dumber than Yudkowsky have found rationalizations to live well for themselves on the dime of the charity they run. Corrupt priests of every generation have professed as much faith that performing their actual mission would result in very high future utility, while in fact neglecting those duties for earthly pleasures.
Even if none of the leadership are blowing funds on crack and hookers, if they’re all just living ascetically and writing papers, that’s actually the same failure mode if they’re not being effective at preventing UFAI. When founding the first police force, one of Peel’s key principles was that the only way they could be evaluated was the prevalence of crime—not how much work the police were seen to be doing, not how good the public felt about their efforts. It’s very hard to find a similar standard with which to hold LW to account.
It occurs to me as I write that I have no idea what the LW funding structure is—whether the site is funded by CFAR, MIRI, SIAI, or something else. Even having all these distinct bodies with mostly the same members smells fishy, and seems more likely to be politics than practicalities.
The kookiness… if LW were really more rational than others, I’d expect them to do some weird-but-not-harmful-to-others things. So I suspect this is more a perception than reality thing (Though if there are good answers to “what’s the empirical distinction between real and fake cryonics” and “why do you expect polyamory to turn out better for you lot than it did for the ’60s hippie communes” it’d be nice to see them). IMO the prime counter would be visible effectiveness. A rich person with some weird habits is an eccentric genius; a poor person with weird habits is just a crank.
It would be really nice to have more verifiable results that say LW-style rationality is good for people (or to know that it isn’t and respond accordingly). The failure mode here is that we do a bunch of things that feel good and pat each other on the back and actually it’s all placebos. We actually see a fair few articles here claiming that reading LW is bad for you, or that rationality doesn’t make people happier. On thinking it through this would be the kind of cult that’s basically harmless, so I’m not too concerned. On the perception side, IMO discussing health is not worth the damage it does to the way the community is seen (the first weight-loss thread I saw caused a palpable drop in my confidence in the site). I’ve no idea how to practically move away from doing so though.
Secrets and bans rub me very strongly the wrong way, and seem likely to damage our efforts in nonobvious ways (to put it another way, secretive organizations tend to become ineffective at their original aims, and I’m worried about this failure mode). I certainly don’t think the ban on the basilisk is effective at its purported aim, given that it’s still talked about on the internet. And just having this kind of deception around immediately sets off a whole chain of other doubts—what if it’s banned for other reasons? What else is banned?
If there really is a need for these bans, there should be a clear set of rules and some kind of review. That would certainly address the perception, and hopefully the actuality too.
I think the use of fictional evidence is actually dangerous. Given the apparently high value of LW-memetic fiction in recruiting, I don’t know where the balance is. I think overuse of jargon is just a perceptual problem (though probably worth addressing).
I have… unusual views on diversity, so I don’t think setting people against their less-rational friends is an actual problem (in the sense of being damaging to the organization’s aims); I file this as a perceptual problem. The most obvious counter I can think of is more politeness about common popular misbeliefs, and less condescension when correcting each other. But I suspect these are problems inherent to internet fora (which doesn’t mean they’re not real; I would suggest that e.g. reddit has a (minor) cultish aspect to it, one that’s offputting to participation. But there may not be any counter).
The hierarchy: in the short term it’s merely annoying, but long-term I worry about committee politics. If some of the higher-ups fell out in private (and given that several of them appear to be dating each other that seems likely) and began sniping at each other in the course of their duties, and catching innocent users in the crossfire… I’ve seen that happen in similar organizations and be very damaging. Actual concern.
So in summary, actual concerns: where the money goes, any secrets the organization keeps, clarity of the leadership hierarchy, overuse of fiction. Superficial issues: overuse of jargon. The rest of my list is on reflection probably not worth worrying about.
MIRI and SIAI are the same organization: SIAI is MIRI’s old name, now no longer used because people kept confusing the Singularity Institute and Singularity University.
(AFAIK, LW has traditionally been funded by MIRI, but I’m not sure how the MIRI/CFAR split has affected this.)
One might as well ask “why do you expect monogamy to turn out better than it did for all the people who have gone through a series of dysfunctional relationships”. Being in any kind of relationship is difficult, and some relationships will always be unsuccessful. Furthermore, just as there are many kinds of monogamous relationships—from the straight lovers who have been together since high school, to the remarried gay couple with a 20-year age difference, to the arranged-marriage couple who’ve gradually grown to love each other and who practice BDSM—there are many kinds of polyamorous relationships, and the failure modes of one may or may not be relevant for another.
If you specify “the kinds of relationships practiced in the hippie communes of the sixties”, you’re not just specifying a polyamorous relationship style, you’re also specifying a large list of other cultural norms—just as saying “conventional marriages in the 1950s United States” singles out a much narrower set of relationship behaviors than just “monogamy”, and “conventional marriages among middle class white people in the 1950s United States” even more so.
And we haven’t even said anything about the personalities of the people in question—the kinds of people who end up moving to hippie communes are likely to represent a very particular set of personality types, each of which may make them more susceptible to some kinds of problems and more resistant to others. Other poly people may or may not represent the same personality types, so their relationships may or may not involve the same kinds of problems.
Answering your original question would require detailed knowledge about such communes, while most poly people are more focused on solving the kinds of relationship problems that pop up in their own lives.
You’re right, I overextended myself in what I wrote. What I meant was: I’m aware of long-term successful communities practicing monogamy, and long-term somewhat successful communities practicing limited polygyny—i.e. cases where we can reasonably conclude that the overall utility is positive. I’m not aware of long-term successful communities practicing other forms such as full polyamory (which may well be my own ignorance).
The fact that a small group of bay-area twentysomethings has been successfully practicing polyamory for a few years does not convince me that the overall utility of polyamory is positive. That’s because with ’60s hippie communes, my understanding is that a small group of bay-area twentysomethings were successfully practicing polyamory for a few years, but eventually various “black swan”-type events (by which I mean events analogous to stock market crashes, but for utility rather than economic value) occurred, and it turns out the overall utility of those communes was negative despite the positive early years. If today’s polyamorists want to convince me that “this time is different” they would have some work to do.
(I’m not an expert on the history. It’s entirely possible I’m simply wrong, in which case I’d appreciate pointers to well-written popular accounts that are more accurate than the ones I’m currently basing my judgement on).
It still sounds like you’re talking about poly as if it were a coherent whole, when it’s really lots and lots of different things, some with a longer history than others. Take a look at this map, for instance—and note that many of the non-monogamous behaviors may overlap with ostensibly monogamous practices. E.g. this article, written by a sexuality counselor, (EDIT: removed questionable prevalence figure) basically says that swinging works for some couples and doesn’t work for others. Similarly, for everything else in that map, you could find reports from different people (either contemporary people or historical figures) who’ve done something like it, with it having been a good idea for some, and a bad idea for others.
I guess the main thing that puzzles me about your comments is that you seem to be asking for some general argument for why (some specific form of) polyamory could be expected to work for everyone. Whereas my inclination would be to say “well, if you describe to me the people in question and the specific form of relationship arrangement they’re trying out, I guess I could try to hazard a guess of whether that arrangement might work for them”, without any claim of whether that statement would generalize for anyone else in any other situation. For example, in Is Polyamory a Choice?, Michael Carey writes:
I’ve also personally run into cases of “naturally poly” people, who couldn’t prevent themselves from falling in love with multiple people at once, and who were utterly miserable if they had to kill those emotions: if they wanted to stay monogamous, they would have been forced to practically stop having any close friendships with people of the sexes that they were attracted to. For those people, it seems obvious that some kind of non-monogamous arrangement is the only kind of relationship in which they can ever feel really happy. (I don’t need to find an example of a visible community that has successfully practiced large-scale polyamory in order to realize that this kind of person would be miserable in a monogamous arrangement.) At the same time, I also know of people who are not only utterly incapable of loving more than one person, but also quite jealous: for those people, it seems obvious that a monogamous relationship is the right one.
Then there are people who are neither clearly poly nor clearly mono (I currently count myself as belonging to this category). For them the best choice requires some experimentation and probably also depends on the circumstances, e.g. if they fall in love with a clearly poly person, then a poly relationship might work best, but so might a mono relationship if they fell in love with someone who was very monogamous.
Then there are people who don’t necessarily experience romantic attraction to others, but also don’t experience much sexual jealousy and feel like having sex with others would just spice up the relationship they have with their primary partner: they might want to try out e.g. swinging. And so on.
I don’t believe that for a second, and you should apply a little more critical thought to these numbers. What experts? What are they basing this on? Searching for this, I find nothing but echo chambers of media articles—“experts say”, “some experts think”, etc. Is 15 million remotely plausible? There are ~232m adults in the US, half are married, so 15m swingers would imply that 13% of marriages are open (the arithmetic is spelled out below).
Slightly better are ‘estimates’ (or was it ‘a study’?) attributed to the Kinsey Institute of 2-4% of married couples being swingers, but that’s also quoted as ‘2-4m’ (a bit different) and one commenter even quotes it as 2-4% being ‘the BDSM and swing communities’, which reduces the size even more. All irrelevant, since I am unable to track down any study or official statement from Kinsey so I can’t even look at the methodology or assumptions they made to get those supposed numbers.
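For reference, a worked version of the plausibility check above, assuming (as the 15m figure seems to require) that swingers are married and swing as couples:

\[
\frac{232\text{m adults}}{2} = 116\text{m married adults} \;\Rightarrow\; 58\text{m marriages}, \qquad
\frac{15\text{m}/2 \text{ swinging couples}}{58\text{m marriages}} \approx 13\%.
\]

By the same arithmetic, the Kinsey-attributed 2-4% of married couples would be roughly 1.2-2.3m couples, or 2.3-4.6m individuals, which matches the quoted “2-4m” only if that figure counts individuals rather than couples.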
Fair point—the exact number wasn’t very important for my argument (I believe it would still carry even with the 2-4% or 2-4m figure), so I just grabbed the first figure I found. It passed my initial sanity check because I interpreted the “couples” to include “non-married couples”, and misremembered the US population to be 500 million rather than 300 million. (~4% of the adult population being swingers didn’t sound too unreasonable.)
I think the argument goes through the same way though. I understand your position to be: a culture in which a variety of different relationships are accepted and respected (among both leaders and ordinary citizens), and LW-style polyamory is one of those many varieties, can be stable, productive, and generally successful and high-utility. The question remains: why don’t we see historical examples of such societies? While you’re right that even nominally monogamous societies usually tolerate some greater or lesser degree of non-monogamous behavior, the kind of polyamory practiced by some prominent LW members is, I think, without such precedent, and would be condemned in all historically successful societies (including, I think, contemporary American society; while we don’t imprison people for it, I don’t think we’d elect a leader who openly engaged in such relationships, for example).
Yes, there’s a small self-selecting bay area community which operates in the way you describe. But I don’t think that community has (yet) demonstrated itself to be successful; other communities have achieved the same level of success that they currently enjoy, and then undergone dramatic collapse shortly after.
Well, one possibility would be that a polyamorous inclination is simply rare. In that case, we wouldn’t expect any society to adopt large-scale polyamorous practices for the same reason why we wouldn’t expect any society to adopt e.g. large-scale asexual or homosexual practices.
But then there’s also the issue that most societies have traditionally been patriarchal, with strict restrictions on women’s sexuality in general (partially due to early contraception being unreliable and pregnancies dangerous). If you assumed that polyamory could work, but that most societies in history wouldn’t want to give women the same kind of sexual freedom as men, then that would suggest that we could expect to see lots of polygamous societies… which does seem to be the case.
What counts as success, anyway? Does a relationship have to last for life in order to be successful? I wouldn’t count e.g. a happy relationship of five years to be a failure, if it produces five years of happiness for everyone involved.
In general agreed, but a quibble about:
I don’t think I would necessarily expect that.
Of course, it depends a lot on what we mean by “large scale”. A society with a 5% Buddhist minority can still have large-scale Buddhist practice (e.g., millions of people practicing Buddhism in a public and generally accepted way). I would say some U.S. states are visibly adopting homosexual practices (e.g. same-sex marriage) on a large scale, despite it being very much a minority option, for example.
In the absence of any other social forces pushing people towards nominally exclusive monogamy/monoandry (and heterosexuality, come to that) I would expect something like that kind of heterogeneity for natural inclinations.
Of course, in the real world those social forces exist and are powerful, so my expectations change accordingly.
I think you’re putting the cart before the horse there. If patriarchy is a near-human-universal, doesn’t that suggest there’s a good reason for it?
My impression is that the downsides of breakup dominate the overall utility compared to the marginal increase from having a better relationship. Particularly in the presence of children.
My impression is the reverse. Breakups tend to be sharply painful, but the wounds heal in a matter of months or at most a few years. But if you’re unwilling to consider breakups, being in a miserable relationship is for the rest of your life.
Sure—it was probably a natural adaptation to the level of contraception, healthcare, and overall wealth available at the time. Doesn’t mean it would be a good idea anymore.
And if you wish to reinstate patriarchy, then singling out polyamory as a suspicious modern practice seems rather arbitrary. There’s a lot of bigger stuff that you’d want to consider changing, like whether women are allowed to vote… or, if we wish to stay on the personal level, you’d want to question any relationships in which both sexes were considered equal in the first place.
That sounds unlikely in the general case (though there are definitely some spectacularly messy break-ups where that is true), but of course it depends on your utility function.
I think that happens; it’s hard to imagine e.g. a president with anything other than a traditional family (were/are the Clintons equals? More so than those before them, but in public at least Hillary conformed to the traditional “supportive wife” role (in a way that I think contrasts with Bill’s position during the 2008 primaries)). To a certain extent LW is always going to seem cultish if our leaders’ relationships are at odds with the traditional forms for such. And I don’t think that’s irrational: in cases where failures are rare but highly damaging, it makes sense to accord more weight to tradition than we normally do.
(on the voting analogy: I’d be very cautious about adopting any change to our political system that had no historical precedent and seemed like it might increase our odds of going to war, even if it had been tried and shown to be better in a few years of day-to-day use. I don’t think that’s an argument against women having the vote (they’re stereotypically less warlike—although it has been argued that the Falklands War happened because Thatcher felt the need to prove herself and wouldn’t’ve occurred under a male PM), but it is certainly an argument for not extending the vote to non-landowners and under-21s. In as much as war has declined since the vote was extended to non-landowners and under-21s—which is actually, now that I think about it, really quite surprising—I guess that’s evidence against this position)
There are many relationships where the “marginal” increase from not being in that relationship anymore far outweighs the downsides of the breakup.
Sure there are good reasons. Physical strength is one. Not being in a semi-permanent state of either pregnant or breast-feeding is another.
Sure, I think these are legitimate.
Actually, I think this periodic “cult” angst distracts attention from a far more interesting question: “do contributions to MIRI actually accomplish anything worthwhile?” (Of which “Is MIRI a scam?” is a narrow subset, and not the most interesting subset at that, though still far more interesting than “Is LW a cult?”.)
Admittedly, I have the same problem with most non-profit organizations, but I agree that financial and organizational transparency is important across the board. (That said, I have no idea how MIRI’s transparency compares to, say, the National Stroke Foundation’s, or the Democratic National Committee’s.)
Hm.
So, just to clarify our terms a little… there’s a difference between fictional evidence on the one hand (that is, treating depictions of fictional events as though they were actual observations of the depicted events), and creating fictional examples to clearly and compellingly articulate a theory or call attention to an idea on the other.
The latter, in and of itself, I think is harmless. Do you disagree?
The former is dangerous, in that it can lead to false beliefs, and I suppose I agree that any use of fiction (including reading it for fun) increases my chances of unintentionally using it as evidence.
So I guess the question is, are we using fiction as evidence? Or are we using it as a shortcut to convey complicated ideas? And if it’s a bit of both, as I expect, then it’s (as you say) a question of where the balance is.
So, OK. You clearly believe the balance here swings too far to “fiction as evidence,” which might be true, and is an important problem if true.
What observations convinced you of this?
More tangentially:
Well, ultimately my answer to this is that a lot of my friends are in poly relationships, and it seems to be working for them all right. This is also why I expect same-sex marriages to turn out OK, why I expect marriages between different races to turn out OK, why I expect people remaining single to turn out OK, and so forth.
Am I ignoring the example of 60s hippy communes? Well, maybe. I don’t know that much about them, really. The vague impression I get is that they were a hell of a lot more coercive than I would endorse.
If MIRI folks are going around getting into coercive relationships (which is pretty common for people generally), I object to that, whether those relationships are poly or mono or kinky or vanilla or whatever. If MIRI folks are differentially getting into coercive relationships (that is, significantly more so than people generally), that’s a serious concern.
Are they?
Mostly, my rule of thumb about such things is that it’s important for a relationship to support individuals in that relationship in what they individually want and don’t want. Monogamous people in poly relationships suffer. Polygamous people in mono relationships suffer. Etc.
Also that it’s important for a community to support individuals in their relationship choices (which includes the choice to get out of a relationship).
All the other examples I can think of of what I might call political fiction—using fiction to convey serious ideas—come from organizations/movements I have negative views of. One thinks of Ayn Rand, those people who take Gor seriously (assuming they’re not just an internet joke), The Pilgrim’s Progress. Trying to cast my net wider, I felt the part of The Republic where Plato tells a story about a city to be weak, and I found Brave New World interesting as fiction but actively unhelpful as a point about the merits of utilitarianism. Monopoly is an entertaining board game (well, not a very entertaining one, to be honest) but I don’t think it teaches us anything about capitalism.
I’ve been highly frustrated by the use of “parables” here like the dragon of death or the capricious king of poverty; it seems like the writers use them as a rhetorical trick, allowing them to pretend they’ve made a point that they haven’t in fact made, simply because it’s true in their story.
I am genuinely struggling to think of any positive-for-the-political-side examples of this kind of fiction.
I guess the main thing here is that if poly relationships worked (beyond limited polygyny in patriarchal societies, which evidently does “work” on some level) I’d expect to see an established tradition of polyamory somewhere in the world, and I don’t (maybe I’m not looking hard enough). Being single clearly works for many people. Gay people have shown an ability to form long-term, stable relationships when they weren’t allowed to marry, so it seems like marriage is likely to work. Widespread interracial marriage simply wasn’t possible before the development of modern transport technology, so there’s no mystery about the lack of historical successes. Is there some technological innovation that makes polyamory more practical now than it was in the past? I guess widespread contraception and cheap antibiotics might be such a thing… hmm, that’s actually a good answer. I shall reconsider.
So, when marriage traditionalists argue that the absence of an established tradition of same-sex marriages is evidence that same-sex marriage isn’t likely to work (since if it did, they’d expect to see an established tradition of same-sex marriage), you don’t find that convincing… your position there is that:
a) marriage is just a special case of long-term, stable relationship, and
b) we do observe an established (if unofficial, and often actively derided) tradition of long-term, stable same-sex relationships among the small fraction of the population who enjoy such relationships, so
c) we should expect same-sex marriages to work among that fraction of the population.
Yes?
But by contrast, on your account we don’t observe an established (if unofficial) tradition of long-term, stable multi-adult relationships, so a similar argument does not suffice to justify expecting multi-adult marriages to work among the fraction of the population who enjoy such relationships.
Yes? Have I understood you correctly?
You’ve accurately summarized what I said. I think on reflection point a) is very dubious, so allow me to instead bite your bullet: I don’t yet have a strong level of confidence that same-sex marriages are likely to work. (Which I don’t see as a reason to make them illegal, but might be a reason to e.g. weight them less strongly when considering a couple’s suitability to adopt).
I think we may need to taboo “work” here; if we’re talking about suitability as an organizational leader here then that’s a higher standard than just enjoying one’s own sex life. I would supplement b) with the observation that we observe historical instances of gay people making major contributions to wider society—Turing, Wilde, Britten (British/Irish examples because I’m British/Irish).
But yeah, that’s basically my position. Interested to see where you’re going with this—is there such an “established (if unofficial) tradition of long-term, stable multi-adult relationships” that I’m just ignorant of?
Um, polygamy? Concubinage? Both have long histories, and show up in cultures that are clearly functional.
Both of them are less gender egalitarian than modern polyamory, and it’s not clear to me that there’s ample real-world evidence of, say, Heinlein’s idea of line marriages working out.
Clearly functional… but as functional? http://www.gwern.net/docs/2012-heinrich.pdf
I think it would be useful here to distinguish between what is/was/should be/might be the average and what is the acceptable range of deviation from that average.
A society where most men have one wife but some men have several is different from a society where most men have one wife and having several is illegal and socially unacceptable.
Probably not; I buy the arguments that the incentives generated by monogamy are better than the ones generated by polygamy, across society as a whole. (I am not yet convinced that serial monogamy enabled by permissive divorce laws is better than polygyny, but haven’t investigated the issue seriously.) I meant more to exclude the idea that polygamy is only seen in, say, undeveloped societies.
Where I’m going with this is trying to understand your position, which I think I now do.
My own position, as I stated a while back, is that I base my opinions about the viability of certain kinds of relationships on observing people in such relationships. The historical presence or absence of traditions of those sorts of relationships is also useful data, but not definitively so.
EDIT: I suppose I should add to this that I would be very surprised if there weren’t just as much of a tradition of married couples one or both of whom were nonmonogamous with the knowledge and consent of their spouse as there was a tradition of people having active gay sex lives, and very surprised if some of those people weren’t making “major contributions to wider society” just as some gay people were. But I don’t have examples to point out.
Notoriously, Lady Hamilton and Horatio Nelson had a public affair around 1800, without any objection from her husband, Sir William Hamilton.
Off the top of my head, Erwin Schrödinger.
Given this fact, I am now surprised by not having previously observed a seemingly endless series of jokes about it playing on the supposed indeterminacy of poly relationships.
I detect a horrible gap in the fabric of the universe! We need to create some ASAP!!
SMBC is on to it.
Right. As I’ve said, I think relationships tend to have big negative spikes analogous to stock market crashes, so am cautious about judging from samples of a few years.
Sure, absolutely.
Even watching twenty-year-old poly relationships, as I sometimes do, isn’t definitive… maybe it takes a few generations to really see the problems. Ditto for same-sex marriages, or couples of different colors, or of different religious traditions… sure, these have longer pedigrees, but the problems may simply not have manifested yet, instead building up momentum while people like me ignore the signs.
I mention my own position not because I expect it to convince you, but because you were asking me where I was going in a way that suggested to me that you thought I was trying to covertly lead the conversation along to a point where I could demonstrate weaknesses in your position relative to my own, and in fact the questions I was asking you were largely orthogonal to my own position.
Just to pick a somewhat arbitrary example I was thinking about recently… have you ever read The Ones Who Walk Away from Omelas?
Does it qualify as what you’re calling “political fiction”?
Do you associate it with any particular organizations/movements?
By happy coincidence I also read it recently. I’ve not seen it used to make political/philosophical arguments, so I don’t class it as such. To my mind the ending and indeed the whole story is more ambiguous than the examples I’ve been thinking of; if the intent was to push a particular view then it failed, at least in my case. (By contrast Brave New World probably did influence my view of utilitarianism, despite my best efforts to remain unmoved).
If I saw someone using it to argue for a position I’d probably think less of it or them, and on a purely aesthetic level I found it disappointing.
I guess maybe Permutation City is a positive example; it provides a useful explicit example of some things we want to make philosophical arguments about. Maybe because I felt it wasn’t making a value judgement—it was more like, well, scientific fiction.
Thinking about this some more, and rereading this thread, I’m realizing I’m more confused than I’d thought I was.
Initially, when you introduced the term “political fiction,” you glossed it as “using fiction to convey serious ideas.” Which is similar enough to what I had in mind that I was happy to use that phrase.
But then you say that you don’t class Omelas in this category, because it doesn’t successfully push a particular view. Which suggests that the category you have in mind isn’t just about conveying ideas, but rather about pushing a particular philosophical/political perspective—yes?
But then you say that you do class Permutation City in this category (and I would agree), and you approve of it. This actually surprises me… I would have thought, given what you’d said earlier, that you would object to PC on the grounds that it simply asserts a fictional universe in which various things are true, and could be misused as evidence that those things are in fact true in the real world. I’m glad you aren’t making that argument, as it suggests our views are actually closer than I’d originally thought (I would agree that it’s possible to misuse PC that way, as it is possible to misuse other fictional ideas, but that’s a separate issue).
And you further explain that PC isn’t making a value judgment… but that you still consider it an example of political fiction… which is consistent with your original definition of the term… but I don’t know how to reconcile it with your description of Omelas.
So… I’m genuinely confused.
I’m trying to explore this as we go along; it’s very possible I’ve been incoherent.
I don’t class Omelas in any of these categories because I’ve never seen it used in this kind of discussion at all.
Permutation City was an example I only thought of in the last post. Fundamentally I feel like it belongs in a different cluster from these other examples (including the LW ones); I’m trying to understand why, and I was suggesting that the value judgements might be the difference, but that’s little more than a guess.
Well… OK. Let’s change that.
OK… now that you’ve seen Omelas used in a discussion about utilitarian moral philosophy, does your judgment about the story change?
Hmm. I was not fond of the story in any case, so this use would need to be particularly bad to diminish my opinion of it.
The fundamental lack of realism in the story now seems more important. Where before I was happy to suspend disbelief on the implausibility of a town that worked that way, if we’re going to apply actual philosophy to it I find myself wanting a more rigorous explanation of how things work—why do people believe that comforting the child would damage the city?
Do I think using the story has made the discussion worse? Maybe; it’s hard to compare to a control in the specific instance. But I think in the general, average case philosophical discussions that use fictional examples turn out less well than those that don’t.
If I thought the use of the story had damaged the discussion, would that make me think less of the story or author? I think my somewhat weaselly (and nonutilitarian) answer is that intent matters here. If I discovered that a story I liked was actually intended allegorically, I think I’d think less of it (and to a certain extent this happened with Animal Farm, which I read naively at an early age and less naively a few years later, and thought less of it the second time). But if someone just happens to use a story I like in a philosophical argument, without there being anything inherent to the story or author that invited this kind of use, I don’t think that would change my opinion.
OK, fair enough. Thanks for clarifying your position.
Interesting.
From my perspective, Omelas does a fine job of doing what I’m claiming fiction is useful for in these sorts of discussions… it makes it easy to refer to a scenario that illustrates a point that would otherwise be far more complicated to even define.
For example, I can, in a conversation about total-utilitarianism, say “Well, so what if anything is wrong with Omelas?” to you, and you can tell me whether you think there’s anything wrong with Omelas and if so what, and we’ve communicated a lot more efficiently than if we hadn’t both read the story.
Similarly, around LW I can refer to something as an Invisible Dragon in the Garage and that clarifies what might otherwise be a hopelessly muddled conversation.
Now, you’re certainly right that while having identified a position is a necessary first step to defending that position, those are two different things, and that sometimes people use fiction to do the former and then act as though they’d done the latter when they haven’t.
This is a mistake.
(Also, since you bring it up, I consider Brave New World far too complicated a story to serve very well in this role… there’s too much going on. Which is a good thing for fiction, and I endorse it utterly, since the primary function of fiction for me is not to provide useful shortcuts in discussions. However, some fiction performs this function and I think that’s a fine thing too.)
I think there is a good deal to be said for the interpretation according to which many of the features of the “ideal” city Socrates describes in Republic are intended to work on the biases of Glaucon and Adeimantus (and those like them) and not intended to actually represent an ideal city. Admittedly, Laws does seem to be intended to describe how Plato thinks a city should be run, so it seems that Plato had some pretty terrible political ideas (at least at the end; Laws is his last work, and I prefer to think his mind was starting to go), but nonetheless it’s not at all safe to assume that all the questionable ideas raised by Socrates in Republic are seriously endorsed by Plato.
I completely agree that Brave New World seems unhelpful in evaluating utilitarianism.
Which is I think the fundamental problem with this kind of political fiction. It allows people to present ideas and implications without committing to them or providing evidence (or alternately, making it clear that this is an opposing view that they are not endorsing). But then at a later stage they go on to treat the things that happened in their fiction as things they’d proven.
Brave New World is intended as a critique of utilitarianism: the Fordian society’s willingness to treat people as specialized components may not be as immediately terrifying as 1984, but it’s still intentionally dystopian. My apologies if I’m misreading your statement, but many folk don’t get that from reading the book in a classroom environment.
Some subcultures have different expectations of exclusivity, which may be meaningful here even if not true polyamory.
Communication availability and different economic situations, as well. The mainstream entrance of women into the work force as self-sustaining individuals is a fairly new thing, and the availability of instant always-on communication even more recent.
EDIT: I agree that there are structural concerns if a sufficient portion of the leadership are both poly and in a connected relationship, but this has to do more with network effects than polyamory. The availability and expenditures of money are likely to trigger the same network effect issues regardless of poly stuff.
Yes. I thought it would be clear that I knew that, since I don’t think my statement makes any sense otherwise?
I found Brave New World, in so far as it is taken as an illustration of a philosophical point, actively unhelpful; that is, its existence is detrimental to the quality of philosophical discussions about the merits of utilitarianism (basically for all the usual reasons that fictional evidence is bad; it leads people to assume that a utilitarian society would behave in a certain way, or that certain behaviors would have certain outcomes, simply because that was what happened in Brave New World).
(Independently, I found it interesting and enjoyable as a work of fiction).
I think this is fine if the papers are good. It’s routine in academic research that somebody says “I’m working on curing cancer”, and then it turns out that they’re really studying one little gene that’s related to some set of cancers. In general, it’s utterly normal in the academy that somebody announces dramatic goal A, and then really works on subproblem D that might ultimately help achieve C, and then B, an important special case of A.
The standard I would use is “are there a significant number of people who find the papers interesting and useful?” And that’s a standard that I think MIRI is improving on significantly. A large fraction of academics with tenure in top-50 computer science departments aren’t doing work that’s better.
Notice that I wouldn’t use “avoid UFAI danger” as a metric. If the MIRI people are motivated to answer interesting questions about decision theory and coordination between agents-who-can-read-source-code, I think they’re doing worthwhile work.
Worthwhile? Maybe. But it seems dishonest to collect donations that are purportedly for avoiding UFAI danger if they don’t actually result in avoiding UFAI danger.
Which one was it?
I think the word for a thing that started calling itself the best cult in the world is ‘religion’.
When people ask me what religion I hail from (as far as I’m concerned, religion or religation is nothing more or less than RED Team VS BLU Team style affiliation, with, in the absence of exterior threats, a tendency to splinter and call heresy on each other), I tell them “secular humanist”. As far as I’m concerned, LW is just a particularly interesting denomination of that faith. “We’re the only religion whose beliefs are wholly grounded in empirical experience, and which, instead of praying for things to get better, goes out and makes them so”.
Are there in fact no (“other”) religions which endorse making things better?
I am aware of religious denominations which advocate doing good works as a route to personal salvation, but I honestly can’t think of any religious branch I’m aware of which advocates good works on the basis of “For goodness’ sake, look at this place, it’s seriously in need of fixing up.”
So, just to make sure I understand the category you’re describing here… if, for example, an organization like the Unitarian Universalist Association of Congregations asserts as one of its guiding principles “The goal of world community with peace, liberty, and justice for all;” and does not make a statement one way or the other about the salvatory nature of those principles, is that an example of the category?
I guess I’d say that it counts if you’re willing to treat Unitarian Universalism as an actual religious denomination. Whether it counts or not would probably depend on how you identify such things, since it’s missing qualities which one might consider important, such as formal doctrines.
In my experience Unitarian Universalism, at least in its modern form, is mainly a conglomeration of liberal progressive ideals used as an umbrella to unite people with religious beliefs ranging from moralistic therapeutic deism to outright atheism.
All of the Unitarian Universalists I’ve known well enough to ask have also identified themselves as secular humanists, so I certainly wouldn’t regard it as an alternative to secular humanism which carries that value.
Atheists tend to identify religions as creeds: your religion is about what you believe. By this way of thinking, a Catholic is any person who believes thus-and-so; a Buddhist is any person who believes this-and-that; a Muslim is any person who believes the-other-thing; and so on. Having correct belief (orthodoxy) is indeed significant to many religious people, but that’s not the same as it being what religion is about.
Protestant Christianity asserts sola fide, the principle that believers are justified (their sins removed) by dint of their faith in Jesus Christ. The first pillar of Islam is the shahadah, a declaration of belief. One of the first questions people ask about any new-to-them religion is, “What do you believe in?”, and discussions of religions often involve clarifying points of belief, such as “Buddhists don’t believe the Buddha is a god.” Examples such as these may lead many atheists to think that religion is about belief.
Another way of looking at religion, though, is that religion is about practice: what you do when you think you’re doing religion. The significant thing that makes your religion is not your religious beliefs, but your religious habits or practices. You can assert the Catechism all day long … but if you don’t go to Mass, pray, take communion, and confess your sins to a priest, you’re not a central example of a Catholic. And the other four pillars of Islam aren’t about belief, but about practice.
And something else to consider is that religion is also about who you do it with — a community. Religion is usually done with some reference to a local community (a church, coven, what-have-you) and a larger community that includes other local groups as well as teachers, leaders, recognized figures. It is this community that teaches and propagates the beliefs. People are considered to be members of the religion by dint of their membership in this community (possibly formally recognized, as by baptism) even if they do not yet know the beliefs they are “supposed to” have. Most Christians have never read the whole Bible, after all.
Well, that’s why I’m asking the question.
UU is one group I happen to know well enough to expect that its members do in fact advocate good works on the basis of good works being a good thing, so if it’s a counterexample, that’s easy. But if it isn’t because it isn’t actually a religion, OK.
I think something similar is true of Congregationalists, from my limited experience of them… but then, all the Congregationalists I’ve known well enough to ask have identified themselves as agnostics and atheists, so perhaps they don’t count either.
But I have a clearer notion of what your category is now; thank you for the clarification.
Does it seem to you that you and Ritalin (in the comment I replied to) mean the same thing by “religion”?
I can’t speak for Ritalin, but I’d speculate that we’d both identify it as requiring certain patterns of belief as well as affiliation.
As far as I know, they’re all about giving up your ego in one way or another and happily waiting for death or the endtimes. The most proactive they get is trying to spread this attitude around (but not too much; they still need other people to actually pay for their contemplative lifestyle). Making things better, improving the standing of humankind, cancelling the apocalypse? A futile, arrogant, doomed effort.
OK.
In British English “bloody” is a general-purpose intensifier, e.g. “That’s just bloody lovely!” :-)
I know, I just thought maybe he meant the more literal kind of bloody, given the context (we’re talking about cults, this site is dominated by US Citizens), and wanted him to clarify.
Yes, I get definite cultist vibes from some members. A cult is basically an organization of a small number of members who hold that their beliefs make them superior (in one or more ways) to others, with an added implication of social tightness, shared activities, and internal slang that is difficult for outsiders to understand. Many LW people appear to behave like this.
You too are using an even looser definition of a cult. Surely you know that ‘cult’ carries some different (negative) connotations for other people?
I never stated LW is a cult. It clearly isn’t. It does however have at least several, possibly many, members who appear to think about LW in the way many cult members think of their cult.
Observe the progression:
...
...
At this point, are you saying anything at all?
I assume an educated reader will infer the massive negative social connotations of any movement or organization that has a reputation, no matter how small at this point, as being ‘cultish’—such a reputation inevitably makes achieving goals, recruiting members, etc., more difficult.
Thus being careful not to create that image is very important (or should be) to the membership of the site.
From saying almost nothing you have switched to overblown hyperbole. (BTW, I believe you mean “be aware of”, not “infer”.) It appears that LW does have, in some circles (like RationalWiki, ha) a cultish reputation, but I do not see “massive” consequences arising from that.
In a highly chaotic system like our society, small differences (e.g. a reputation in some circles as cultish) can dramatically decrease the odds of something gaining influence or acceptance.
People spend their whole lives researching sales, and any time someone is spreading an idea, sales comes into it. If you think any marketing department of a major company would accept the idea that some website likely visited by many, many potential members discusses their organization in such a negative light, you are very mistaken. When even the regular members are openly discussing whether we are getting a reputation as a cult, that is a terrible ‘branding’ failure.
For LW to achieve the potential most of its members (I would assume) hope it will… yes, there are consequences.
Any time a large group of potential members or future ‘rationalists’ (not to confuse LW with rationalism) is skeptical of or disinclined toward LW because they heard it had some sort of ‘cultish’ reputation, that is a massive potential loss of people who could contribute and learn for the betterment of themselves and society as a whole.
Don’t underestimate the impact of small differences when you are dealing with something as complex, and unpredictable as society and the spread of ideas.
Which members?
So, I link to Amazon fairly frequently here, and when I do I use the referral link “ref=nosim?tag=vglnk-c319-20” to kick some money back to MIRI / whoever’s paying for LW.
First, is that the right link? Second, what would it take to add that to the “Show help” box so that I don’t have to dig it up whenever I want to use it, and others are more likely to use it?
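(For illustration, and assuming the usual Amazon URL layout, the string above is appended after a product’s ASIN, with “ref=nosim” as a path segment and the tag as the query string; the ASIN below is a placeholder.)

    http://www.amazon.com/dp/B000EXAMPLE/ref=nosim?tag=vglnk-c319-20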
This is done automatically in a somewhat different way. So my advice is not to worry about it. But, yes, it shouldn’t hurt and it should help in the situation that VigLink doesn’t fire. In those comments, Wei Dai agrees that this is the referral code.
Part of the reason why I’m asking is because that info might be old. Apparently “ref=nosim” was obsolete two years ago, and I don’t know if that’s still the right VigLink account, etc.
I thought LW automatically added affiliate links using VigLink already.
I was under that impression also. [Edit] I prefer doing it by hand because I almost always open links only in new tabs, which I predict others will do as well, and VigLink does not handle links opened that way.
[Edit2]The original text before the edit was:
When I asked why I did it by hand, I got back the answer “I do it by hand because I do it by hand,” which seemed silly. But it turns out there was a good reason which I had forgotten; score one for blind tradition.
Maybe it might even make sense to let the forum automatically format links to Amazon that way.
I believe this is supposed to happen already, but have not tested it.
That would be a bad thing.
Why?
I should have made more explicit that it’s my opinion: “a bad thing from my point of view”.
It’s a quirk of mine—I dislike marketing schemes injected into non-marketing contexts, especially if that is not very explicit. It is a mild dislike, not like I’m going to quit a site because of it or even write a rant against Amazon links rewriting.
Yes, I understand that Internet runs on such things. No, it does not make me like it more.
It isn’t a marketing scheme. It’s a monetization scheme.
(Marketing is presenting products for sale. Monetization is finding ways to extract revenue from previously non-revenue-generating activity.)
(And no, the Internet doesn’t run on marketing or monetization. A few of your favorite Internet services probably do, though; but probably not all.)
I accept the correction.
I have another quirk: I dislike being monetized.
LW is mostly pure-text with no images except for occasional graphs. Why is that so? Are the reasons technical (due to reddit code), cultural (it’s better without images), or historical (it’s always been so)?
I think most people are unaware that they can include images in comments.
A state of affairs which I hope continues.
Ah, a vote for “it’s better this way”. Why do you prefer pure text? Is it because of the danger of being overrun with cat pictures and blinking gif smileys?
Let’s take that particular image. It covers a huge block that could have been filled by text otherwise, and conveys relatively little information accurately. It disrupts my reading completely for a little while, and getting back to the nice flow takes cognitive effort.
This moment I’m reading on my phone and the image fills the whole screen.
It is because text can be copy-pasted and composed easily, since browsers mostly allow selecting any text (this is more difficult in native Windows apps).
Images, by contrast, cannot be copy-pasted as simply (mostly you have to find the URL and copy-paste that), and images cannot be composed easily at all (you at least need some picture editor, which often doesn’t allow simple copy-paste).
This is the old problem that there is no graphical language, a problem that has evaded GUI designers since the beginning.
Um. In Firefox, right-click on the image, select Copy Image. Looks pretty simple to me. Pretty sure it works the same way in Chrome as well.
I think you’re missing the point of images. Their advantage is precisely that they are holistic, a gestalt—you’re supposed to take them in whole and not decompose them into elements.
Sure, if you want to construct a sequential narrative out of symbols, images are the wrong medium.
And how do you insert it into a comment?
That may be true of some images but not all.
I’d go with laziness and lack of overt demand. I know that people love graphs and images, but I don’t especially feel the need when writing something, and it’s additional work (one has to make the image somehow, name it, upload it somewhere, create special image syntax, make sure it’s not so big that it’ll spill out of the narrow column allotted to articles, etc.). I can barely bring myself to include images for my own little statistical essays, though I’ve noticed that my more popular essays seem to include more images.
I haven’t tried authoring an article myself, but a quick look now seems to indicate that you can’t upload images, only link to them. This means images must be hosted on third parties, meaning you have to upload it there and if not directly under your control, it’s vulnerable to link rot. It seems like this would be inconvenient.
You can upload images to the LessWrong wiki, and then link them from comments or posts. It’s a bit roundabout, but the feature is there. The question is then, should it be made easier?
I haven’t tried it, but just knowing that it requires logging in to the wiki, I know that it’s way too hard and I’ll probably use imgur instead.
That’s very common in online forums (for server-load reasons) but doesn’t seem to stop some forums from being fairly image-heavy. It’s not like there is a shortage of free image-hosting sites.
Yes, I understand the inconvenience argument, but the lack of images at LW is pretty stark.
Do you think more people should include graphics in their posts?
Do you think more people should include graphics in their comments?
Do you think the image-heavy forums you mention get some benefit from being image-heavy that we would do well to pursue?
I am hesitant to put forward a recommendation. I don’t know yet, and approach this as a Chesterton’s Fence.
That’s fair.
I’ll observe that I read your comments on this thread as implicitly recommending more images.
This is of course just my reading, but I figured I’d mention it anyway, in case you are hesitant to make a recommendation for fear of tearing that fence down in ignorance, on the off chance that I’m not entirely unique here.
I understand where you are coming from (asking why this house is not blue is often perceived as implying that this house should be blue) -- but do you think there’s any way to at least tone down this implication without putting in an explicit disclaimer?
Well, if that were my goal, one thing I would try to avoid is getting into a dynamic where I ask people why they avoid X, and then when they provide some reasons I reply with counterarguments.
Another thing I would try to avoid is not questioning comments which seem to support doing X, for example by pointing out that it’s easy to do, but questioning comments which seem to challenge those comments.
Also, when articulating possible reasons for avoiding X, I would take some care with the emotional connotations of my wording. This is of course difficult, but one easy way to better approximate it is to describe both the pro-X and anti-X positions using the same kind of language, rather than describing just one and leaving the other unmarked.
More generally, asymmetry in how I handle the pro-X and anti-X cases will tend to get read as suggesting partiality; if I want to express impartiality, I would cultivate symmetry.
That said, it’s probably easier to just express my preferences as preferences.
I think it’s fine. Reasons that people provide might be strong or might be weak—it’s OK to tap on them to see if they would fall down. I would do the same thing to comments which (potentially) said “Yay images, we need more of them!”.
In general, I would prefer not to anchor the expectations of the thread participants, but not at the price of interfering with figuring out what the territory actually looks like.
I didn’t (and still don’t) have a position to describe. Summarizing arguments pro and con seemed premature. This really was just a simple open question without a hidden agenda.
All right.
You could put a “light” disclaimer, like “I’m curious” or “(not that I’m complaining)”.
Edit (post downvote): (not that I’m saying you should have) :D
I read them this way too.
There’s a good chance this is not a “fence”, deliberately designed by some agent with us in mind, but a fallen tree that ended up there by accident/laziness.
There’s a design choice on the part of LessWrong against avatar images. Text is supposed to speak for itself and not be judged by its author. Avatar images would increase author recognition.
I think I agree with that. I do read author names, but I read them after I read the text usually. I frequently find myself mildly surprised that I’ve just upvoted someone I usually downvote, or vice versa.
And yet names are visually quite distinct. I find authorship much more obvious here than on HN.
Most people are much better at remembering faces than at remembering names. Hacker News also has a lot more people and therefore you will interact with the same person less often.
Why shouldn’t it be?
I am not implying that it should, but to answer your question, because limits on accepted forms of expression are not necessarily a good thing. Not necessarily a bad thing, either.
People already mentioned some pros (e.g. graphs and such help cross the inferential distance) and cons (e.g. images break the mental flow of some people).
It doesn’t feel like a limit to me, just something that very seldom occurs to me to do because I very seldom have any use for it.
(I sometimes link to images—maybe next time I’ll consider including them directly in the comment.)
I’d note that the short help for comments does not list the Markdown syntax for embedding images in comments, and even the “more comment formatting help” page is not especially clear. That LessWrong culture encourages folk to write comments before writing Main or Discussion articles makes that fairly relevant.
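(For reference, the standard Markdown image syntax, which I believe the comment parser accepts, is an exclamation mark in front of an ordinary link; the URL here is a placeholder.)

    ![alt text](http://example.com/graph.png)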
Some people embed graphics in their articles, and this is seen by many as a good thing. I suspect it’s just individuals choosing not to bother with images.
Reading this comment… I suddenly feel very odd about the fact that I failed to include images in my Neuroscience basics for LessWrongians post, in spite of in a couple places saying “an image might be useful here.” Though the lack of images was partly due to me having trouble finding good ones, so I won’t change it at the moment.
I find it harder to engage in System 2 when there are images around. Heck, even math glyphs usually trip me up. That’s not to say graphics can’t do more good than harm (for example, charts and diagrams can help cross inferential distance quickly, and may serve as useful intuition pumps) but I imagine that more images would mean more reliance on intuition and less on logic, hence less capacity for taking things to analytical extremes. So it could be harmful (given the nature of the site) to introduce more images.
I like my flow. I don’t have anything against images if they are arranged in a way that doesn’t disrupt reading. I’m not sure if the LW platform allows for that.
Personally I find the absence of images mostly positive, and apparently helpful for staying in System 2 mode. This is a place where we analyze counter-intuitive propositions, so the absence of images may be critical to the optimal functioning of the community.
That’s not to say images don’t have cognitive advantages (the information content can be processed more quickly than text, e.g.) but they can be distracting, and might actually tend to lead one to be less analytical and concise in the long run. Notice how an image-based meme may seem credible or hard to shoot down even when it represents a strawman argument (or even no argument at all). There’s a reason Chick tracts are a thing.
Several months ago I set up a blog for writing intelligent, thought-provoking stuff. I’ve made two posts to it, and one of those is a photo of a page in Strategy of Conflict, because it hilariously featured the word “retarded”. Something has clearly gone wrong somewhere.
I’m pretty sure there are other would-be bloggers on here who experience similar update-discipline issues. Would any of them like to form some loose cabal of blogging spotters, who can egg each other on, suggest topics, provide editorial and stylistic feedback, etc.?
EDIT: ITT: I’m a bit of a dick! Sorry, everyone!
Are you sure the error is that you’re posting too little to the blog, rather than that you’re trying to have a blog in the first place?
Is this intended as snark, or an actual helpful comment?
Assuming the latter, I have what I consider to be sound motives for maintaining a blog. Unfortunately, I don’t have sound habits for maintaining a blog, coupled with a bit of a cold-start problem. I doubt I am the only person in this position, and believe social commitment mechanisms may be a possible avenue for improvement.
I was going for actual helpful comment. I personally don’t have a blog because several attempts to have a blog failed. Afterwards, I was fairly sure that the reason why my blogs failed was because I like conversations too much and monologuing too little. I found that forums both had a reliable stream of content to react to, as well as a somewhat reliable stream of content to build off of. The incentive structure seemed a lot nicer in a number of ways.
More broadly, I think a good habit when plans fail is to ask the question “What information does this failure give me?”, rather than the limited question “why did this plan fail me?”. Sometimes you should revise the plan to avoid that failure mode; other times you should revise the plan to have entirely different goals.
My immediate practical suggestion is to create a LW draft editing circle. This won’t give you the benefits of a blog distinct from LW, but eliminates most of the cold-start problem. It also adds to the potential interest base people who have ideas for posts but who don’t have the confidence in their ability to write a post that is socially acceptable to LW (i.e. doesn’t break some hidden protocol).
If you have any old material, you could consider posting those to get initial readership, even if you don’t consider them especially high quality.
I’d interpret Vaniver’s comment more generally to mean that parts of your brain might disagree with this assessment, and you experience this as procrastination.
Count me in if anything comes out of it.
Yes.
(My current excuse for not even having made one post is that I started to experience wrist pain, and didn’t want to make it worse by doing significant typing at home. It seems to be getting better now.)
Maybe you should consider joining an existing blogging community—livejournal or tumblr or medium? They’re good at giving you social prompts to write something.
In retrospect, my previous response to this does seem pretty unwarranted. This was a perfectly reasonable and relevant comment that caught me at a bad time. I’d like to apologise.
OK, I’m not trying to be antagonistic, but I really want to understand where the communication process goes wrong here. What was it about my original comment that seemed like a request for advice?
EDIT: Yeah, this is a pretty stupid comment.
Your comment sounded to me like a request for help, for which lmm gave a reasonable response.
Anytime you present all the options ABC as you see them, it is possible D is the best option and you just didn’t see it.
(Sorry if that came across as more advice...I mean you no harm...)
It is clear at this point that while I don’t think my original comment (or similar previous comments) was asking for advice, plenty of other people do indeed interpret them as such. If this were simply a choice of words that had a common meaning I was unfamiliar with, I’d happily accept this and move on, but in this case I think these other people are fundamentally doing something incorrectly with language and dialogue.
My immediate case for this is threefold:
1) The comment is literally not asking for advice. It does not execute the speech act of asking for advice.
2) If someone were to infer a request for advice from the comment, they would notice the comment does not contain sufficient information for them to provide good advice. Even if it has the superficial appearance of implicitly asking for advice, it is not well-suited to this task.
3) If someone were to go about the process of asking me for the salient background information to offer me good advice (rather than just shooting in the dark and generating irrelevant discussion that doesn’t serve anyone’s purposes), they would notice that it wasn’t a request for advice. This casts doubt on their motives for engagement with the dialogue, not to mention their ability to give appropriate advice.
It’s not a request for advice. It’s just not, and it’s pragmatically unsound (not to mention kind of rude and really annoying) to interpret it as such.
So yes, I think I’m right, and everyone else is wrong. I should point out that although I find the unsolicited advice incredibly annoying, I find the underlying discourse phenomenon really interesting.
Here’s some advice: when you think you’re right about the interpretation of what you said and everyone else is wrong, you’re probably wrong. The fact that you have to go ON and ON about how incredibly obviously right you are and how everyone should have seen it is you rationalizing the fact that you are wrong.
1) “I’ve been having a problem lately with x” Does not explicitly ask for advice. It implicitly does so.
2) Vague advice is STILL more useful than no advice when someone is asking for advice. Giving someone advice that has worked for them in a situation that isn’t exactly the same is still useful or at least leads to further possibly useful conversation. Example: “I’ve been having a lot of trouble sleeping lately.” “Have you tried Melatonin?” “Obviously, I’m not an idiot! I can’t sleep because of the construction! Clearly you didn’t have enough information to tell me about soundproofing options so why even talk!?”
3) This isn’t a private conversation. Responses to you are not just FOR you. If someone replies with general advice for a similar setting they’re trying to have a conversation about that even if it’s not exactly what you wanted to hear. I understand you didn’t want a conversation about blogging productivity, you just wanted some yes or no answers to your question. Why should that prevent other people saying things they think are relevant to the conversation in general?
In the end, you could have just ignored people giving you unsolicited advice, but instead you chose to go off on an assholish ranting streak at everyone who was simply trying to be helpful. You’ve wasted far more of your own time with these complaints than reading anyone’s advice has cost you. Nobody here was being rude except for you (and now me).
I actually totally appreciate this comment, and largely agree with it. I maintain my general point about the pragmatics of interpreting things as implicit requests for advice, but yeah, I’ve certainly not handled this particular thread gracefully.
Thank you.
You should be pragmatic about pragmatics.
The comment was an attempt to affect other people. If you produce the wrong effects, your language is wrong.
If everyone agrees that it’s a question, it’s a question. If things that weren’t questions a year ago are questions now, then the language changed. But it doesn’t take a lot of people wrongly interpreting something as a question to produce unwanted answers, so maybe they are wrong. And language fragmentation is worth fighting.
Having slept on it, I think I can offer a more fine-grained explanation for what I think is going on.
There are implicit and explicit speech acts. You can implicitly or explicitly threaten someone, or compliment someone, or express romantic interest in someone. There are some speech acts which you cannot, or as a matter of policy should not, carry out implicitly. As extreme examples, you cannot implicitly extend someone power of attorney, and you should not interpret someone’s implicit expressions of interest in being erotically asphyxiated as an invitation to go ahead and do so.
I believe implicit requests for advice basically shouldn’t exist. I would expect social decorum to drive people’s interpretations away from this possibility. Out in the big wide world, my experience is that people are considerably more careful about how they do and do not offer advice. My consternation is that on LW there appear to be forces driving people’s interpretations towards the possibility of implicit requests for advice, which runs counter to my expectations.
Not what-the-fuck-are-you-doing counter to my expectations, I should point out. I might, for example, occasionally expect relative strangers at work to wordlessly take a pen from my desk, use it to scribble a note and then put it back. This is probably the most mildly-invasive act I can think of, but if I was disrupted every five minutes by someone leaning in and pinching one, stopping a wordless pen-borrower in their tracks and saying “seriously, what is it with the pens?” seems like a reasonable line of inquiry.
I’m not precious about my pens (nor do I think I’m especially hostile to receiving unsolicited advice), but there are good reasons to have social norms that drive people away from this sort of behaviour. When those social norms cease to exist, those good reasons don’t go away.
That’s not very pragmatic. Worry about whether they do exist. You say they don’t exist in other contexts, but this statement makes me distrust your observations.
Also, I suggest you consider more contexts. Are you familiar with other venues intermediate between LW and your baseline? nerds? other online fora?
Really? It seems like a reasonable way of stopping it. It does not seem to me like a way of learning. And since not that many people go by your desk, it might scale to actually stopping it.
I’m not saying they don’t exist in other contexts, but that they’re a less probable interpretation in other contexts. In those contexts, I wouldn’t expect my original comment to be interpreted as a request for advice as readily as it is here. I wouldn’t necessarily expect it to be interpreted any better, but I wouldn’t expect a small deluge of advice.
I am fairly sure this discussion isn’t really recoverable into something productive without me painting myself as some sort of neurotic pen-obsessive snapcase. Yes, I only have myself to blame.
Why do you believe this? That is, is this an aesthetic preference about the kind of society you want to live in, or do you believe they have negative consequences, or do you adhere to some deontological model with which they are inconsistent, or… ?
I believe there are negative consequences, some of which I’ve already elaborated upon, and some of which I haven’t.
Illustratively, there do exist social norms against patronising other people, asking personal questions, publicly speculating about other people’s personal circumstances, infringing privacy, etc., which are significant risks when offering people unsolicited advice. Since offering people unsolicited advice is itself a risk when inferring requests for advice from ambiguous statements, it seems reasonable (to me) to expect people to be less inclined to draw this inference.
Also, offering advice (or general assistance, or simply establishing a dialogue with someone) isn’t a socially-neutral act, especially in a public setting. A suitable analogy here might be walking into a bar and saying “after a day like today, I’m ready for a drink”. This isn’t an invitation for any nearby kind-hearted stranger to buy you a drink without first asking if you wanted them to. The act of buying a drink for someone has all sorts of social/hospitality/reciprocity connotations.
After making a right royal mess of this particular thread, I’m keen to disentangle myself from it, so while I’m happy to continue the exchange, I’d appreciate it if it didn’t continue any longer than was useful to you.
Um… well, OK.
I have to admit, I don’t quite understand, either from this post or this thread, what you think the negative consequences are which these social norms protect against… the consequences you imply all seem like consequences of violating a social norm, not consequences of the social norm not existing.
Perhaps I’m being dense.
Regardless, I’m only idly interested, so if you’d rather disentangle I’m happy to drop it.
Well, unwarranted advice can result in making someone feel patronised, or like their privacy or personal boundaries are being violated, or like their personal circumstances are subject to public speculation, and these are all unpleasant and negative experiences, and you should try and avoid subjecting people to them.
It can also, out of nowhere, create a whole raft of dubious questioning or accidental insinuation that the recipient of the advice may feel obliged, or even compelled, to put straight. It has a general capacity to generate discussion that is a lot more effort for the advisee to engage with than the advisor. It’s very easy to give people advice, but as I have found, it’s surprisingly hard to say “no, stop, I don’t want this advice!” (I have said it very vehemently in this thread, with the consequence of looking like an objectionable arse, but I’m not sure that saying it less vehemently would have actually stopped people from offering it.) These are also unpleasant and negative experiences, and you should try and avoid subjecting people to them as well.
Mm. OK, I think I understand. Thanks for clarifying.
Advice, unwanted or not, usually follows a description of the situation or relevant circumstances.
Someone who published—posted online—an account of his situation or “personal circumstances” cannot complain later that his privacy was violated or that these personal circumstances became “subject to public speculation”.
To put it bluntly, posting things on the ’net makes them not private any more.
Part of my point in this thread is that advice often comes even in the absence of a description of relevant circumstances. Hence they become subject to public speculation.
If you haven’t disclosed private information then I don’t see how advice or speculation invades your privacy.
You may consider it to be something like baseless rumors, but baseless rumors are not invasion of your privacy either.
You’re conflating invasion of privacy and public speculation of circumstances. I never equated the two.
Your complaint included “their privacy or personal boundaries are being violated”. And when you complained about speculation, you complained about “their personal circumstances are subject to public speculation”.
Presumably these personal circumstances were voluntarily published online, were they not?
If you do not post your personal circumstances online there is nothing to speculate about.
You seem to want to have a power of veto on people talking about you. That… is not going to happen.
Also, FYI, it’s not me who’s downvoting you.
If I talk, in the abstract, about how I imagine that it’s hard to organise bestiality orgies, and someone misinterprets that as a request for advice about organising bestiality orgies, that’s some pretty flammable speculation about my personal circumstances. I then have the option of either denying that I have interest in bestiality orgies, or ignoring them and leaving the speculation open.
Does that make sense? Please let it make sense. I want to leave this thread.
No, it is not unless you’re actually organizing bestiality orgies.
If you actually do not, then it’s neither an invasion of privacy nor a discussion of your personal circumstances because your personal circumstance don’t happen to involve bestiality orgies.
It might be a simple misunderstanding or it might be a malicious attack, but it has nothing to do with your private life (again, unless it has in which case you probably shouldn’t have mentioned it in the first place).
And leaving this thread is a simple as stepping away from the keyboard.
For my own part, if someone goes around saying “Dave likes to polish goats in his garage”, it seems entirely reasonable for me to describe that as talking about my private life, regardless of whether or not I polish goats, whether or not I like polishing goats, or whether or not I have a garage.
To claim that they aren’t actually talking about my private life at all is in some technical sense true, I suppose, but the relevance of that technical sense to anything I might actually be expected to care about is so vanishingly small that I have trouble taking the claim seriously.
You’re conflating privacy and public speculation again. I didn’t do that.
If I say “I think Lumifer likes to ride polar bears in his free time”, then I am speculating about your personal circumstances. I just am. That’s what I’m doing. It’s an incontrovertible linguistic fact. I am putting forth the speculation that you like to ride polar bears in your free time, which is a circumstance that pertains to you. I am speculating about your personal circumstances. Whether the statement is true or not is irrelevant. I’m still doing it.
And I am actually going to go away now. Reply however you like, or not.
Not quite. The words which are missing here are “imaginary” and “real”.
I have real personal circumstances. If someone were to find out what they really are and start discussing them, I would be justified in claiming invasion of privacy and speculation about my personal circumstances.
However in this example, me riding polar bears is not real personal circumstances. What’s happening is that you *associate* me with some imaginary circumstances. Because they are imaginary they do not affect my actual privacy or my real personal circumstances. They are not MY personal circumstances.
In legal terms, publicly claiming that Lumifer likes to ride polar bears and participate in unmentionable activities with them might be defamation but it is NOT invasion of privacy.
To repeat, you want to prevent or control people talking about you and that doesn’t sound to me like a reasonable request.
You are just using different definitions of privacy.
Recommended reading: Daniel Solove, “A Taxonomy of Privacy”.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=667622
Most likely this part:
I don’t think it’s that much of a stretch to infer, “and if you can suggest what, or how I might remedy it, that would be cool!”
That’s also my suspicion in this case, but does it really seem plausible that I completely abandoned my analysis of the situation at that point? Especially since I go on to explicitly identify it as an update-discipline issue, and make a specific request to address it?
I’ve been cheerfully posting to LW with moderate frequency for four years now, but over the past few months I’ve noticed an increased tendency in respondents to offer largely unsolicited advice. I’m fairly sure this is an actual shift in how people respond. It seems unlikely that my style of inquiry has changed, and I don’t think I’ve simply become more sensitive to an existing phenomenon.
Maybe the “advice” (or instrumental rationality?) style of post has become more common and this approach to discussion has bled over into the comments? I don’t know, I find lmm’s comment to read as a perfectly natural response to yours, so perhaps I’m not best placed to analyse the trend you seem to be experiencing.
One possible explanation is that you are just getting more responses (and thus more advice-based responses) because the Open Threads (and maybe Discussion in general) have more active users. Or maybe the users are more keen to participate in discussions and giving advice is the easiest way to do so.
It might help if you start.… (just kidding, I’m making a mental note not to give you advice unless you specifically ask for it from now on)
Retracted, because you haven’t asked for an opinion on why you are getting advice either.
In this context, the discussion is about receiving unnecessary advice, so I think speculating on why this is happening is entirely reasonable.
To illustrate why it’s annoying, it may help to provide the most extreme example to date. A couple of months ago I made a post on the open thread about how having esoteric study pursuits can be quite isolating, and how maintaining hobbies and interests that are more accessible to other people can help offset this. I asked for other people’s experience with this. Other people’s experiences was specifically what I asked for.
Several people read this as “I’m an emotionally-stunted hermit! Please help me!” and proceeded to offer incredibly banal advice on how I, specifically, should try to form connections with other people. When I pointed out that I wasn’t looking for advice, one respondent saw fit to tell me that my social retardation was clearly so bad that I didn’t realise I needed the advice.
To my mind, asking for advice has a recognisable format in which the asker provides details for the situation they want advice on. If you have to infer those details, the advice you give is probably going to be generic and of limited use. What I find staggering is why so many people skip the process of thinking “well, I can’t offer you any good advice unless you give us more deta-...oh, wait, you weren’t asking for advice”, and just go ahead and offer it up anyway.
People will leap at any opportunity to give advice, because giving advice a) is extraordinarily cheap b) feels like charity and most importantly c) places the adviser above the advised. It’s the same impulse which drives us to pity; we can feel superior in both moral and absolute terms by patronizing others, and unlike charity there is only a negligible cost involved.
I, for example, have just erased a sentence giving you useless advice on how not to get useless advice in a comment to a post talking about how annoying unsolicited useless advice is. That is the level of mind-bending stupidity we’re dealing with here.
http://www.youtube.com/watch?v=-4EDhdAHrOg
Can you please use actual words to explain the underlying salience of this video? I see what you’re getting at, but I’m pretty sure if you said it explicitly, it would be kind of obnoxious. I would rather you said the obnoxious thing, which I could respond to, than passively post a video with snarky implicit undertones, which I can’t.
I think this isn’t entirely fair. You asked what people do to keep themselves relatable to other people. That’s not the same as asking for help relating to other people, but it is closer to that than you implied.
Not to say that I think the responses you got were justified, but I don’t find them surprising.
I’m going to stick to my guns on this one. I think my account is as close as makes no difference.
I’m happy to concede that other people may commonly interpret my inquiry as being closer to a request for advice, but I contend that this interpretation is not a reasonable one.
When you say you find this staggering, do you mean you don’t understand why many people do this?
I can speculate as to why people do this, but given my inability to escape the behaviour, I clearly don’t understand it very well.
To a certain extent, I’m also surprised that it happens on Less Wrong, which I would credit with above-average reading comprehension skills. Answering the question you want to answer, rather than the question that was asked, is something I’d expect less of here.
That’s fair.
There’s a pattern of “I have a problem with X, the solution seems to be Y, I need help implementing Y”.
Sometimes people ask this without considering other solutions; then it can be helpful to point out other solutions. Sometimes people ask this after considering and rejecting lots of other solutions; then it can be annoying to point out other solutions. Unfortunately it’s not always easy for someone answering to tell which is which.
Edit because concrete examples are good: I just came across this SO post, which doesn’t answer the question asked or the question I searched for, but it was my preferred solution to the problem I actually had.
Maybe that’s a description of the other responses, but lmm is not suggesting an alternative to Y, but an alternate path to Y. I think sixes and sevens’s response is ridiculous.
Consider your incentives. Actual (non-imaginary) incentives in your current life.
What are the incentives for maintaining a blog? What do you get (again, actually, not supposedly) when you make a post? What are the disincentives? (e.g. will a negative comment spoil your day?) Is there a specific goal you’re trying to reach? Is posting to your blog a step on the path to the goal?
Are you requesting answers for my specific case, or just providing me with advice?
(As an observation, which isn’t meant to be a hostile response to your comment, people seem very keen to offer advice on LW, even when none has been requested.)
Advice, I guess, in the sense that I think these are the questions you’d be interested in knowing the answers to (for yourself, not for posting here).
If I wanted to update a blog regularly, I would consider it imperative to put “update my blog” as a repeating item in my to-do list. For me, relying on memory is an atrocious way to ensure that something gets done; having a to-do list is enormously more effective.
I tried translating the Sequences. Gave up on the third post.
The main problem in learning a new skill is maintaining the required motivation and discipline, especially in the early stages. Gamification deals with this problem better than any of the other approaches I’m familiar with. Over the past few months, I’ve managed to study maths, languages, coding, Chinese characters, and more on a daily basis, with barely any interruptions. I accomplished this by simply taking advantage of the many gamified learning resources available online for free. Here are the sites I have tried and can recommend:
Codecademy. For learning computer languages (Ruby, Python, PHP, and others).
Duolingo. For learning the major Indo-European languages (English, German, French, Italian, Portuguese and Spanish).
Khan Academy. For learning maths. They also teach several other disciplines, but they offer mostly videos with only a few exercises.
Memrise. For memorizing stuff, especially vocabulary. The courses vary in quality; the ones on Mandarin Chinese are excellent.
Vocabulary.com. For memorizing English vocabulary.
Are you familiar with other good resources not listed above? If so, please mention them in the comments.
(Crossposted to my blog.)
I’ve been using Anki daily these past two or three months, and regularly-but-not-quite-daily for maybe a year before that. I use it for a fair number of different things (code, psychology, languages, …). I recommend it, though it’s not really “gamified”.
Please add the open_thread tag (with the underscore) to the post.
Fixed.
Not sure where this goes: how can I submit an article to discussion? I’ve written it and saved it as a draft, but I haven’t figured out a way to post it.
You don’t have enough karma to post yet. Consider making some quality comments first.
Thank you! One more—how much karma do I need? I was under the impression one needed 2 to post to discussion (20 to main), but presumably this is not the case. Is there an up to date list?
I think the requirement is currently 5 karma to post to discussion.
If you are asking this question, you are not ready to make a quality post. Consider the Open Thread instead.
Eliezer posted to Facebook:
My stab at it. I’m probably going to post it to FIMFiction in a day or so, but it’s basically a first draft at this point and could doubtless use editing / criticism.
I formerly thought I had a politically-motivated stalker who was going through all my old comments to downvote them.
Now I wonder if I have a stalker who is trying to keep me at ~6000 total, ~200 30-day karma.
A med student colleague of mine, a devout Christian, is going to give a lecture on psychosexual development for our small group in a couple of days. She’s probably going to sneak in an unknown amount of propaganda. With delicious improbability, there happen to be two transgender med students in our group she probably isn’t aware of. To this day, relations in our group have been very friendly.
Any tips on how to avoid the apocalypse? Pre-emptive maneuvers are out of the question, I want to see what happens.
ETA: Nothing happened. Caused a significant update.
This sounds like a situation in which some people present may consider some other people’s beliefs to be an individual-level existential threat — whether to their identity, to their lives, or to their immortal souls. In other words, the problem is not just that these folks disagree with each other, but that they may feel threatened by one another, and by the propagation of one another’s beliefs.
Consider:
”If you convince people of your belief, people are more likely to try to kill me.”
″If you convince people of your belief, I am more likely to become corrupted.”
We are surprised when a local NAACP leader has a calm meeting with a KKK leader. (But possibly not as surprised as the national NAACP leadership were.)
One framework for dealing with situations like this is called liberalism. In liberalism, we imagine moral boundaries called “rights” around individuals, and we agree that no matter what other beliefs we may arrive at, that it would be wrong to transgress these boundaries. (We imagine individuals, not groups or ideas, as having rights; and that every individual has the same rights, regardless of properties such as their race, sex, sexuality, or religion.)
Agreeing on rights allows us to put boundaries around the effects of certain moral disagreements, which makes them less scary and more peaceful. If your Christian colleague will agree, for instance, that it is wrong to kidnap and torture someone in an effort to change that person’s sexual identity, they may be less threatening to the others.
What would constitute an apocalypse? When you say “I want to see what happens” do you mean you want to let the situation develop organically but set certain boundaries, a cap on damages, so to say?
That’s exactly what I mean. I’m not directing the situation, but will be participating.
I’d like to confront, and see people confront, her religious bias, without the result being excessive flame or her being backed into a corner without a chance to even marginally nudge her mind in the right direction. She’s smart, will not make explicit religious statements, and will back her claims with cherry-picked research. Naturally the level of mindkill will depend on the other participants too, and I will treat it as some sort of a rationality test whether they manage to keep their calm. If they lose it, I guess it’s understandable.
I guess I’ll be making heavy use of some version of “agree denotationally, disagree connotationally”.
Are the participants Finnish? I am tempted to start remembering jokes about the volatile and emotional character of Finns… :-)
Finnish indeed, and even with our completely watered-down, belief-in-belief version of Christianity there’s always a religious nut or two to ruin your day.
Volatile and emotional? Most LWers per capita, was it? Is it because we’re in the greatest need of rationality?
Is she an in-your-face Christian or a live-and-let-live one?
Heavily leaning towards in-your-face on the spectrum. Has been very vocal on abortion issues for example. Thinks that homosexuality is a sin. Other than her religiousness, is a perfectly nice human being.
I’ve been out of things for a while; how goes Eliezer’s book?
The rationality book?
this is the last I’ve seen
http://lesswrong.com/lw/i3a/miris_2013_summer_matching_challenge/9gth
http://hpmor.com/notes/progress-13-10-0/
Eliezer says he’ll do a progress report on 11/1. I haven’t heard any news otherwise.
I don’t think that ‘Eliezer’s book’ refers to HPMOR. I think it is more likely that he is asking about the book based on the Sequences (for which this is probably the most recent thread).
Why isn’t there a pill that makes a broken heart go away?
Ask Gwern, he probably knows something that’s good enough.
There might be eventually.
Something like that was discussed previously. Kevin recommended antidepressants in the comments.
Just time is both necessary and sufficient.
It will get better over time.
They are on it: http://www.academia.edu/2764401/If_I_could_just_stop_loving_you_Anti-love_biotechnology_and_the_ethics_of_a_chemical_breakup
Arguably MDMA in the proper context. The chain of reasoning is as follows: one of the quickest traditional cures for sadness about a broken-off relationship is a new relationship ---> the cure for a broken heart is to meet and fall in love with someone else as fast as possible ---> MDMA lowers inhibition and raises affection and touch (which are crucial for relationship formation) and theoretically should make the process of getting to know someone faster and happier ---> so if you take MDMA around the right people in the right place you can quickly fall into a new love.
You don’t need a pill. It’s in your head. Mind over matter. Meditate. Learn complete emotional control. It’s not hard.
And when you can choose to be happy whenever you want, life is very good ;)
Um. Complete emotional control is in fact that hard, and not necessarily desirable either. It is certainly not a predictable result of meditation.
Complete is a strong word that I should have qualified. Mastery is a better word. Control over it. Where your emotions bend to the will of your rational mind, not vice versa.
Don’t limit yourself without reason. As humans we are agents of change in an incredibly complex, chaotic system (society). Mastering emotional control allows us to become much more effective agents. Someone half as smart but with twice the self control in every area can easily beat the more intelligent opponent. Not every time, but it is a massive advantage. http://scienceblogs.com/cognitivedaily/2005/12/14/high-iq-not-as-good-for-you-as/
I didn’t say that it’s predictable, or that it is super easy, but it’s not particularly difficult and only takes a few months of commitment of a few hours a week, to bring a lifetime of reward.
I’m surprised that as a “rationalist” you suggest mastery of the emotions may not be desirable. Awareness of one’s emotions, sure. But letting them dictate your actions in any way, why? Be rational.
And without mastery of one’s emotional state (that is, subject to the experiential drag of depression, the impulsive actions of rage, the hurtful actions of uncontrolled lust, etc.), one is at a disadvantage in almost any situation.
Complete emotional control is not hard? o.O
How do I learn it, then?
Ultimately it’s just a matter of choosing to feel a certain way. Find things you really like (for me, sex and cigarettes), then just do EVERYTHING to get that in every situation you’re in. Most people chase money. Break the mould. Chase something else. Realize that you can get everything you want if you know how to play the situation right. If you’d like to PM me we can discuss a ‘training’ programme that matches your lifestyle perfectly. (Context: I used to be a NLP trainer/dating coach)
That’s an example of one method of switching your mindset completely. Ultimately, many mindsets can be imagined and then enjoyed by the individual if so chosen. It’s primarily a matter of self-will.
Since I’m not sure whether this advice would be welcome in a recent discussion, I’m just going to start cold by describing something which has worked for me.
In an initial post, I explain what kind of advice I’m looking for, and I’m specific about preferring advice from people who’ve gotten improvement in [specific situation]. I normally say other advice is welcome, but you’d be amazed how little of it I get.
I believe it’s important to head off unwanted advice early. I can’t remember whether I normally put my limiting request at the beginning or end of a post, but I think it helps if you can keep your commenters from becoming a mutually reinforcing advice-giving crowd.
I suggest that starting by being specific about what you do and don’t want is (among other things) an assertion of status, and this has some effects on the advice-giving dynamic.
I normally do want advice from people who’ve had appropriate experience. Has anyone tried being clear at the beginning that they don’t want advice?
In my social circle, explicitly tagging posts as “I’m not looking for advice” seems to work pretty well at discouraging advice. I don’t do it often myself though.
And you’re right, of course, that it is among other things an assertion of status, though of course it’s also a useful piece of explicit information.
Does anyone have any book recommendations on the topic of evidence-based negotiation tactics? I have read Influence (Cialdini), Thinking, Fast and Slow (Kahneman), and The Art of Strategy (Dixit and Nalebuff). These are great books to read, but I am looking for something with a more narrow focus; there are lots of books on Amazon that get good reviews, but I am unsure which one would suit me best.
Getting to Yes is a standard negotiation book; Difficult Conversations seems useful as a supplement for negotiation in non-business contexts (but, as a general communication book, has obvious business applications as well).
I picked these up per your suggestion. Thanks.
There is a sequel to Getting to Yes, “Getting past No”, which is also good.
Thanks. I actually picked up the amazon suggested bundle, which included the sequel that you are speaking of.
Hope it helps you. I know that my negotiating skills improved considerably. I used to be afraid to dicker with car salesmen.
How many dealerships did you go to?
I am really starting to play with the idea that if you aren’t getting rejected enough then you probably aren’t negotiating hard enough. The next time I go buy a car I will make sure to negotiate hard enough to get rejected from more than one dealership. If they don’t let you walk out, then you probably haven’t found their low point yet.
Last time? 3 or 4.
Rejection is not quite the term. In my experience the sales guy eventually offers you his “best deal”, you thank him (why is it there are virtually no women in these jobs?) for his time and stand up to leave. That’s when he likely calls his manager who sweetens the deal a bit. YMMV. Once it seemed fair enough that I took it, rejecting “documentation fee”, upsell and all other junk along the way. (Free service, extended warranty etc. should already be negotiated in at this point: this stuff is cheap for them, expensive for you.) Another time I still left, making it clear that I was ready to finish the deal, if only… then got a call later with some more concessions. Every time the “final” contract had unexpected charges added which had to be removed before signing. Admittedly, I never push it to the point where the salesperson hates me, so they clearly get a fair shake out of it, just not a lucrative one.
One tactic that is nearly always useful (be it cars, electronics, appliances or anything else) is anchoring: showing the price of a comparable item, the book value, lease details posted on the manufacturer’s site, anything that pushes the seller toward a version of “price match”.
Steve Sailer on the Trolley Problem: [1] and [2]. Basically: to what degree does the unwillingness of people in the thought experiment to push the fat man reflect the realization that pushing the fat man is an inherently riskier prospect than pulling a lever?
Noah Millman also comments:
Hofstadter and AI—trying to understand how people actually think rather than producing brute-force simulations for specific problems.
Is it rude to buy a treadmill if you live on the second floor of an apartment building?
Buying it? No. Using it while your downstairs-neighbor is home? Yes. A repetitive thumping can make trying to study hellishly difficult (for people sufficiently similar to me).
To the extent that you believe the preferences of the person below you mirror your own, would it annoy you if the person above you started using a treadmill in their apt?
I don’t know what a treadmill upstairs sounds like.
Possibly; an elliptical machine may be more considerate, as it’s less likely to produce noise or impact which will be noticed downstairs.
Is there any research suggesting that simulated out-of-body experiences (OBEs) (like this) can be used for self-improvement? For example, potential areas of benefit include triggering OBEs to help patients suffering from incorrect body identities, which is exciting.
For some time now, I have had this very strange fascination with OBEs and using them to overcome akrasia. Of course I have no scientific evidence for it, yet I have this strong intuition that makes me believe so. I’ll do my best to explain my rationale. Often I get this idea that I can trick myself into doing what I want if I pretend that I am not me but just someone observing me. This disconnects my body from my identity, so that the real me can control the body me. This gives me motivation to do things for the body me. I am not studying, my body me is studying to level up. I’m not hitting the gym, the body me is hitting the gym to level up. An even more powerful effect is present for social anxiety. Things like public speaking and rejection therapy are terrifying, but by disconnecting my identity from my body, I would find that rejection is not as personal, just directed at this avatar that I control. Negative self-conscious thoughts and embarrassment seem to have a lessened impact.
Am I off my rocker?
A few potentially relevant observations:
The kind of dissociation you talk about here, where I experience my “self” as unrelated to my body, is commonly reported as spontaneously occurring during various kinds of emotional stress. I’ve had it happen to me many times.
It would not be surprising if the same mechanism that leads to spontaneous dissociation in some cases can also lead to the strong intuition that dissociation would be a really good idea.
Just because there’s a mechanism that leads me to strongly intuit that something would be a really good idea doesn’t necessarily mean that it actually would be.
All of that said: after my stroke, I experienced a lot of limb-dissociation… my arm didn’t really feel like part of me, etc. This did have the advantage you described, where I could tell my arm to keep doing some PT exercise and it would, and yes, my arm hurt, and I sort of felt bad for it, but it’s not like it was me hurting, and I knew I’d be better off for doing the exercise. It is indeed a useful trick.
I suspect there are healthier ways to get the same effect.
Do you have experience with OBEs? I personally have limited experience. I’m no expert but I know a bit.
In my experience, the kind of people who have the skills for engaging in out-of-body experiences usually don’t get a lot done. It tends to increase akrasia rather than decrease it. If you want to decrease akrasia, associating more with your body is a better strategy than getting outside of it.
That effect is really there. You are making a trade. You lose empathy. Ceasing to care about other people means that you can’t have genuine relationships.
On the other hand rejections don’t hurt as much and you can more easily put yourself into such a situation.
I don’t think you’re off your rocker, though dissociating at the gym might increase the risk of injury.
I tentatively suggest that you explore becoming comfortable enough in your life that you don’t need the hack, but I’m not sure that the hack is necessarily a bad strategy at present.
Does anyone know of a good online source for reading about general programming concepts? In particular, I’m interested in learning a bit more about pointers and content-addressability, and the Wikipedia material doesn’t seem very good. I don’t care about the language—ideally I’m looking for a source more general than that.
Try the r/learnprogramming resource pages: free books, online stuff.
Can’t actually name a good general article on pointers. They’re the big sticking point for anyone trying to learn C for the first time, but they end up just being this sort of ubiquitous background knowledge everyone takes for granted pretty fast. I did stumble into Learn C the Hard Way, which does get around to pointers.
The C2 wiki is an old site for general programming knowledge. It’s old, the navigation is weird, and the pages sometimes devolve into weird arguments where you have no idea who’s saying what. But there’s interesting opinionated content to find there, where sometimes the opinionators even have some idea what they’re talking about. Here’s one page on what they have to say about pointers.
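Since pointers keep tripping people up, the core idea does fit in a few lines of C. This is a toy sketch of my own (not taken from any of the resources above), but it’s the whole mental model:

```c
#include <stdio.h>

int main(void) {
    int x = 42;
    int *p = &x;   /* p stores the address of x, not the value 42 */

    printf("x  = %d\n", x);    /* prints 42 */
    printf("*p = %d\n", *p);   /* dereferencing follows the address: 42 */

    *p = 7;                    /* writing through the pointer changes x itself */
    printf("x  = %d\n", x);    /* prints 7 */

    printf("p  = %p\n", (void *)p);  /* the raw address, e.g. 0x7ffc... */
    return 0;
}
```

Content-addressability is roughly the inverse trick: instead of asking “what lives at this address?”, you derive the address from the content itself (usually by hashing), which is how hash tables and git’s object store work.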
Also I’m just going to link this article about soft skills involved in programming, because it’s neat.
Has anyone read Daniel Goleman’s new book? Opinions?
Could anyone provide me with some rigorous mathematical references on statistical hypothesis testing and Bayesian decision theory? I am not an expert in this area and am not aware of the standard texts. So far I have found:
Statistical Decision Theory and Bayesian Analysis—Berger
Bayesian and Frequentist Regression Methods—Wakefield
Currently, I am leaning towards purchasing Berger’s book. I am looking for texts similar in style and content to those of Springer’s GTM series. It looks like the Springer Series in Statistics may be sufficient.
Berger is highly technical, not much of an introduction.
On Bayesian statistics, Bayesian Data Analysis is a classic.
“Bayesian decision theory” usually just means “normal decision theory,” so you could start with my FAQ. Though when decision theory is taught from a statistics book rather than an economics book, they use slightly different terminology, e.g. they set things up with a loss function rather than a utility function. For an intro to decision theory from the Bayesian statistics angle, Introduction to Statistical Decision Theory is pretty thorough, and more accessible than Berger.
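Concretely, the statistics-style setup looks like this (the generic textbook formulation, not a quote from either book):

```latex
% Given data x, a loss function L(theta, a), and a posterior p(theta | x),
% the Bayes rule chooses the action minimizing posterior expected loss:
\delta^*(x) = \arg\min_{a} \int L(\theta, a)\, p(\theta \mid x)\, d\theta
% Setting u(theta, a) = -L(theta, a) turns this into maximizing expected
% utility, which is the convention in economics-flavored decision theory.
```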
Great, thank you very much for the references. I am now reading your FAQ before moving onto the texts, I’ll post any comments I have there.
The main problem in learning a new skill is maintaining the required motivation and discipline, especially in the early stages. Gamification deals with this problem better than any of the other approaches I’m familiar with. Over the past few months, I’ve managed to study maths, languages, coding, Chinese characters, and more on a daily basis, with barely any interruptions. I accomplished this by simply taking advantage of the many gamified learning resources available online for free. Here are the sites I have tried and can recommend:
* [Codecademy][1]. For learning computer languages (Ruby, Python, PHP, and others).
* [Duolingo][2]. For learning the major Indo-European languages (English, German, French, Italian, Portuguese and Spanish).
* [Khan Academy][3]. For learning maths. They also teach several other disciplines, but they offer mostly videos with only a few exercises.
* [Memrise][4]. For memorizing stuff, especially vocabulary. The courses vary in quality; the ones on Mandarin Chinese are excellent.
* [Vocabulary.com][5]. For memorizing English vocabulary.

Am I missing anything? Please leave your suggestions in the comments section.
MWI gives an interesting edge to an old quote:
″… there are an infinite number of alternate dimensions out there. And somewhere out there you can find anything you might imagine. What I imagine is out there is a bunch of evil characters bent on destroying our time stream!”—Lord Simultaneous
… does the fact that there’s been no obvious contact suggest that the answer to the transdimensional variant of the Fermi paradox is that once you’ve gone down one leg of the Trousers of Time, there’s no way to affect any other leg, no matter how much you try to cheat?
The Fermi paradox includes us knowing a lot about the density of stuff in the visible universe. You’d expect expansionistic life to populate most of a galaxy in short order, since there are only three dimensions to expand in. The Everett multiverse is a bit bigger. Would you still get a similar expansion model for a difficult-to-discover cheat, or could we end up with effects only observable in a minuscule fraction of all branches even if a cheat was possible, but was difficult enough to discover?
It may also suggest that there are a bunch of good characters stopping the bad characters, and that they’re doing a good job. Or that the bad guys don’t do anything by half measures and any contact is overwhelmingly likely to result in total destruction; anthropic effects would explain why we haven’t seen them. (This reasoning also applies to the original Fermi paradox, by the way.) Or maybe affecting other universes just happens to be too costly w.r.t. any value system that could potentially produce technology.
We know so little about trans-universe physics that it’s of very little use to speculate.
We understand it quite well. It’s just Schroedinger’s equation.
Does the Schrodinger equation tell us how to increase the relative probability of interacting with an almost completely orthogonal Everett Branch?
“Almost completely orthogonal” here bears qualifying: In classical thermodynamics, the concept of entropy is sometimes taught by appealing to the probability of all of the gas in a room happening to end up in a configuration where one half of the room is vacuum, and the other half of the room contains gas. After some calculation, we see that the probability of this happening ends up being (effectively) on the order of 10^(-10^23), give or take several orders of magnitude (not like it matters at that point).
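To spell out the arithmetic (assuming, just for illustration, $N \approx 10^{23}$ molecules, each independently equally likely to be in either half of the room):

```latex
p = 2^{-N}, \qquad
\log_{10} p = -N \log_{10} 2 \approx -3 \times 10^{22}
% i.e. p is about 10^{-3 * 10^{22}}: the same breed of number as
% 10^{-10^{23}}, given the stated slop in the exponent.
```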
Now, that said, how confident are you that different Everettian earths are even at the same point of spacetime we are, given a branching, say, 10 seconds ago? Pick an atom before the split and pick its two copies after. Are they still within a Bohr radius of each other after even a nanosecond? Their phases are already scrambled all to hell, so that’s a fun unitary transformation to figure out.
Sure, you can prepare highly quantum mechanical sources and demonstrate interference effects, but “interuniversal travel”, in any meaningful sense of the phrase, is about as hard as simply transforming the universe itself, subatomically, atom for atom, controllably into a different reality.
So in that sense, Schrodinger’s equation tells us as much about trans-universe physics as the second law of thermodynamics tells us about building large scale Maxwell’s Demons.
The second law of thermodynamics tells us everything there is to know about building large scale Maxwell’s Demons. You can’t. What else is there to it?
Schroedinger’s equation isn’t quite as good. It’s not quite impossible. But it is enough to tell us that there’s no way we’ll ever be able to do it.
The Schrodinger equation is not even at the right level of the relevant physics. It applies to non-relativistic QM. My guess is that DanielLC simply read the QM sequence and memorized the teacher’s password. World splitting, if some day confirmed experimentally, requires at least QFT or deeper, maybe some version of the Wheeler-DeWitt equation.
The Schroedinger equation is sufficient for world splitting. It’s just entanglement at a massive scale.
QFT is a special case of the general form of the Schrodinger equation.
Link?
General form of the Schrödinger equation: $\frac{d\Psi}{dt} = -\frac{i}{\hbar} H \Psi$
Quantum field theories are not usually presented in this form because it’s intrinsically nonrelativistic, but if you pick a reference frame, you can dump the time derivative on the left and everything else on the right as part of H, and there you go.
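For something concrete, here’s the template filled in with a free scalar field (schematic, natural units with $\hbar = 1$; any interacting theory just adds more terms to H):

```latex
i \frac{d}{dt} \lvert \Psi \rangle = H \lvert \Psi \rangle,
\qquad
H = \int d^3x \left[ \tfrac{1}{2}\pi^2 + \tfrac{1}{2}(\nabla\phi)^2 + \tfrac{1}{2}m^2\phi^2 \right]
% All the field content sits inside H; the time evolution is still
% Schrodinger-form once a frame is fixed.
```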
So it’s equivalent. As calef says, there are good reasons not to actually do anything with it in that form.
This is a little chicken-or-the-egg in terms of “what’s more fundamental?”, but nonrelativistic QFT really is just the Schrodinger equation with some sparkles.
For example, the language electronic structure theorists use to talk about electronic excitations in insert-your-favorite-solid-state-system-here really is quantum field theoretic—excited electronic states are just quantized excitations about some vacuum (usually, the many-body ground state wavefunction).
Another example: http://en.wikipedia.org/wiki/Kondo_model
You could switch to a purely Schrodinger-Equation-motivated way of writing everything out, but you would quickly find that it’s extremely cumbersome, and it’s not terribly straightforward how to treat creation and annihilation of particles by hand.
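For contrast, here is why the field-theoretic notation wins (schematic, the same free field as above, now in momentum space): creation and annihilation are built into the operators rather than tracked by hand:

```latex
H = \int \frac{d^3k}{(2\pi)^3}\, \omega_k\, a_k^\dagger a_k,
\qquad \omega_k = \sqrt{k^2 + m^2}
% a_k^dagger adds a quantum of momentum k to the state; a_k removes one.
% Writing the same physics as explicit N-particle Schrodinger wavefunctions
% means maintaining symmetrized products over a varying N by hand.
```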