Why do you think EY uses conspiracies in his fictional writing? He seems to use them in a positive, or at least not clearly negative, light, which is not how I think of conspiracies at all. I notice that I am confused, so I’m trying to gather some other opinions.
The anecdote in this post, about Fermi, Rabi and Szilard considering keeping the possibility of practical nuclear fission a secret, may shed some light on the subject. He thinks that some knowledge is dangerous enough that people who know it may reasonably want to keep it secret.
(much more recently, there has been some controversy about the publication of a way of obtaining a particularly infectious strain of a certain virus, but I can’t find any references for that right now)
Groups led by Ron Fouchier of the Erasmus Medical Center in Rotterdam, the Netherlands, and Yoshihiro Kawaoka of the University of Wisconsin–Madison created a storm in late 2011 when they artificially engineered potentially pandemic forms of the H5N1 avian flu virus. In January last year, researchers ended a voluntary 12-month moratorium on such gain-of-function flu research, which can increase the host range, transmissibility or virulence of viruses (see Nature 493, 460; 2013), and work resumed.
This month, Kawaoka’s group reported that it had engineered a de novo flu virus from wild-avian-flu-strain genes that coded for proteins similar to those in the 1918 pandemic virus (T. Watanabe et al. Cell Host Microbe 15, 692–705; 2014). The researchers were able to make a virulent version that could transmit between ferrets, and they concluded that a 1918-like virus could therefore emerge from wild avian flu viruses.
Fellow flu researcher Professor Wendy Barclay of Imperial College, however, said there was nothing wrong with doing the research in a BSL-2 lab: “In nature there is no containment. He’s only doing what happens in nature every day.” Which is true of Ebola too.
I think I remember reading an even better example in HPMOR, about publishing scientific results that might have furthered the Nazis’ ability to produce a nuclear weapon, though I can’t recall where it was exactly. I found that example persuasive, but I considered it a distasteful necessity, not a desirable state of affairs. Hence my confusion at Brennan’s world, which, being set in the future of our world, I thought was perhaps post-Singularity, and therefore the epitome of human flourishing. Another commenter asked me if I wouldn’t enjoy the thought of being a super-villain, and I thought, um, no, that would be terrible, so maybe there are some Mind Projection issues going on in both directions. I don’t know the distribution of people who would gain positive utility from a world of conspiracies, but I’m sure there would be a great deal of disutility for some proportion of current people with current minds. I can see where that world might provide challenge and interest for its inhabitants, but I remain highly skeptical that it’s a utilitarian optimum. Using my current brain and assuming stable values, it actually seems pretty dystopian to me, but I’ll admit that’s a limited way to look at things.
I think I remember reading an even better example in HPMOR, about publishing scientific results that might have furthered the Nazis’ ability to produce a nuclear weapon, though I can’t recall where it was exactly.
Graphite as a neutron moderator, I believe. Ch. 85:
During World War II, there had been a project to sabotage the Nazi nuclear weapons program. Years earlier, Leo Szilard, the first person to realize the possibility of a fission chain reaction, had convinced Fermi not to publish the discovery that purified graphite was a cheap and effective neutron moderator. Fermi had wanted to publish, for the sake of the great international project of science, which was above nationalism. But Szilard had persuaded Rabi, and Fermi had abided by the majority vote of their tiny three-person conspiracy. And so, years later, the only neutron moderator the Nazis had known about was deuterium.
I think it stems from the Brennan’s World weirdtopia, and the idea that making knowledge freely available makes it feel worthless, while restricting it to members of a secretive group makes it feel as valuable and powerful as it actually is.
If something is valuable and powerful, and (big if) it’s not harmful, and on top of that it’s extremely cheap to reproduce, I see no reason not to distribute it freely. My confusion was that Brennan’s world seems set in the future, and I got the sense that EY may have been in favor of it in some ways (perhaps that’s mistaken). Since it seemed to be set in the future of our world, I got the sense that the Singularity had already happened. Maybe I just need to get to the Fun Theory sequence, but that particular future really made me uneasy.
Perhaps it’s only powerful in the hands of the chosen few. If it’s in the open and it looks powerful, then other people try it and see less than amazing success, and it looks less and less cool until it stops growing. But by then it’s harder for the special few to recognize its value (or perhaps they don’t want to associate themselves with it), and potential is wasted.
If instead the details are kept secret but the powers known publicly, then the masters of the craft are taken seriously and can suck up all the promising individuals.
I don’t know how he feels about it currently, but in the past he did endorse Brennan’s world as a better way to organize society post-Singularity. It started as a thought experiment about how to fix the problem that most people take science for granted and don’t understand how important and powerful it is, and grew into a utopia he found extremely compelling. (To the point where he specifically did not explain the rest of the details because it is too inefficient to risk diverting effort towards. This was probably an overreaction.) He talks about this in
The linked article ends with this; I think this part of the context is necessary. Emphasis mine:
Right now, we’ve got the worst of both worlds. Science isn’t really free, because the courses are expensive and the textbooks are expensive. But the public thinks that anyone is allowed to know, so it must not be important. Ideally, you would want to arrange things the other way around.
As I understand it, the Conspiracy world is a thought experiment with different advantages and disadvantages. And a tool used to illustrate some other concepts in a storytelling format (because this is what humans pay more attention to), such as resisting social pressure, actually updating on a difficult topic, and fictional evidence that by thinking more rationally we could be more awesome.
But it’s not an optimal (according to Eliezer, as I understand the part I quoted) world. That would be a world where the science is open (and financially available, etc.) to everyone and yet, somehow, people respect it. (The question is, how to achieve that, given human psychology.)
HJPEV is a drama queen and likes acting as if he’s badass (ignore for the moment whether he is) and sinister and evil: Look at what he calls his army and how he acts around them. Hence calling his thing with Draco the Bayesian Conspiracy. Not everything that takes place in an author’s fiction is indicative of something they support.
Conspiracy is the default mode of a group of people getting anything done. Every business is a conspiracy. They plot and scheme within their “offices”, anonymous buildings with nothing but their name on the front door. They tell no-one what they’re doing, beyond legal necessity, and aim to conquer the world by, well, usually the evil plan is to make stuff that people will want to buy.
No organisation conducts all its business in public, whatever its aims. Even if you find one that seems to, dollars to cents you’re not looking at its real processes. There needn’t be anything sinister in this, although of course sometimes there is.
EY makes complicated arguments. He’s not the kind of person to argue simply that X is good and Y is bad. Fiction is about playing with ideas.
As far as I can find, the first instance of the term “Bayesian Conspiracy” appears in a 2003 nonfiction article by Eliezer:
Fun Fact!
Q. What is the Bayesian Conspiracy?
A. The Bayesian Conspiracy is a multinational, interdisciplinary, and shadowy group of scientists that controls publication, grants, tenure, and the illicit traffic in grad students. The best way to be accepted into the Bayesian Conspiracy is to join the Campus Crusade for Bayes in high school or college, and gradually work your way up to the inner circles. It is rumored that at the upper levels of the Bayesian Conspiracy exist nine silent figures known only as the Bayes Council.
At the time it seemed like a fun joke to make, and it stayed. There are also a variety of other arguments to be made that it’s sometimes not useful to share all information with outsiders.
I’m guessing it’s cultural influence from Discordianism, Shea and Wilson’s Illuminatus!, or the like. Conspiracies, cults, and initiatory orders are all pretty common themes in Discordian-influenced works. Some are destructive, some are constructive, some are both, and some run around in circles.
For the same reason EY supports the censoring of posts on topics he has decided are dangerous for the world to see. He generalizes that if he is willing to hide facts that work against his interests, then others similarly situated to him, but with different interests, will also be willing to work surreptitiously.
I’m relatively new to the site and I wasn’t aware of any censorship. I suppose I can imagine that it might be useful and even necessary to censor things, but I have an intuitive aversion to the whole business. Plus I’m not sure how practical it is, since after you posted that I googled lesswrong censorship and found out what was being censored. I have to say, if they’re willing to censor stuff that causes nightmares, then they ought to censor talk of conspiracies, as I can personally attest that that has caused supreme discomfort. They are a very harmful meme, and positing a conspiracy can warp your sense of reality. I have bipolar disorder, and I was taking a medicine that increases the level of dopamine in my brain to help with some of the symptoms of depression. Dopamine (I recently rediscovered) increases your brain’s tendency to see patterns, and I had to stop taking a very helpful medication after reading this site. Maybe it would have happened anyway, but the world of conspiracy theories is very dark and my journey there was triggered by his writings. I guess most of the content on this site is disorienting though, but perhaps some clarification about what he thinks the benefits of conspiracies are, and what their extent should be, would help.
Also, the content on this site is pretty hard-hitting in a lot of ways. I find it inconsistent to censor things to protect sensitive people who think about AI, but not people who are sensitive to all the other things discussed here. I think it’s emblematic of a broader problem with the community: there’s a strong ingroup/outgroup barrier, which is a problem when you’re trying to subsist on philanthropy and the ingroup is fairly tiny.
Maybe it would have happened anyway, but the world of conspiracy theories is very dark and my journey there was triggered by his writings.
Many websites about conspiracy theories don’t care much about the truth. They don’t go through the work of checking whether what they are saying is true.
On the other hand, organisations such as P2 exist or existed. The Mafia exists. To the extent that we care about truth, we can’t claim that there aren’t groups of people who coordinate together in secret for the benefit of their members.
Italy is a pretty good country to think about when you want to think about conspiracies, because there is a lot of publicly available information.
It’s actually pretty easy to see flaws in the argument of someone who claims that the US government brought down the twin towers on 9/11 via explosives if you are actually searching for flaws and not only searching for evidence that the claim might be true. The same goes for lizard overlords.
I guess most of the content on this site is disorienting though, but perhaps some clarification about what he thinks the benefits of conspiracies are and their extent should be would help.
Learn to live with not knowing things. Learn to live with uncertainty. Living with uncertainty is one of the core skills of a rationalist. If you don’t know, then you don’t know, no matter how much you want to know. We live in a very complex world that we don’t fully understand.
Plus I’m not sure how practical it is, since after you posted that I googled lesswrong censorship and found out what was being censored.
You found out what was censored in a way that doesn’t give you an in-depth understanding of the debate that was censored, and you took no emotional harm.
Learning to live with not knowing things is good advice if you are trying to choose between “I explain this by saying that people are hiding things” and “I don’t have an explanation”.
Learning to live with not knowing things is poor advice in a context where people are actually hiding things from you and what is not known is what the people are hiding rather than whether the people are hiding something. It is especially poor advice where there is a conflict of interest involved—that is, when the same people telling you you’d be better off not knowing also stand to lose from you knowing.
Needless to say, 9/11 and lizard conspiracy theories fall in the first category and the material that has been censored from lesswrong falls in the second category.
Learning to live with not knowing things is poor advice in a context where people are actually hiding things from you and what is not known is what the people are hiding rather than whether the people are hiding something.
No. If you can’t stand thinking that you don’t know how things work, you are pretty easy to convince of a lie. You take the first lie that makes a bit of sense in your view of the world. The lie feels like you understand the world. It feels better than uncertainty. Any decent organisation that operates in secret puts out lies to distract people who want to know the truth.
Andy Müller-Maguhn stood in front of the Chaos Communication Congress in Germany and managed to give a good description of how the NSA surveils the internet and how the German government lets them spy on German soil. At the time you could have called it a conspiracy theory. Those political Chaos Computer Club people are very aware of what they know and where they are uncertain. That’s required if you want to reason clearly about hidden information.
Needless to say, 9/11 and lizard conspiracy theories fall in the first category and the material that has been censored from lesswrong falls in the second category.
When it comes to 9/11, the government does hide things. 9/11 is not an event where all information is readily available. It’s pretty clear that the names of some Saudis are hidden. Bin Laden comes from a rich Saudi family and the US wants to keep a good relationship with the Saudi government. I think it’s pretty clear that there is some information the US didn’t want to have in the 9/11 report, because the US doesn’t want to damage the relationship with the Saudis.
Various parts of the NSA and CIA do not want to share all the information about what they are doing with congressional inquiries. As a result they hid information from the 9/11 commission. The NSA wants to keep a lot of stuff out of the public eye that could be found out if a congressional commission were to dig around and get full cooperation. The chief of the NSA lied under oath to Congress about the US spying program. A congressional commission investigating 9/11 fully would want to look at all the evidence the NSA had gathered at that point, and that’s not what the NSA wants, even if the NSA didn’t do anything to make 9/11 happen.
If someone finds evidence of the NSA withholding information from a congressional commission, that shouldn’t surprise you at all, nor should it increase your belief that the NSA orchestrated 9/11; they are always hiding stuff.
Information about Al Qaeda support for the Muslim fighters that NATO helped in the fight for Kosovo’s independence isn’t clear.
The extent to which Chechen Muslim freedom fighters are financed by Saudi or Western sources isn’t clear. The same goes for the Uyghurs.
General information about the identities of people who did short selling before 9/11 was hidden, because the US government just doesn’t publicly release all information about all short selling.
The problem with 9/11 is that people go to school and learn that the government is supposed to tell them the truth and not hide things. Then they grow up a bit and are faced with a world where the government constantly hides information and lies. Then those people take the evidence that the government hides information in a case like 9/11 as evidence that the US government caused the twin towers to be destroyed with dynamite.
Politically, the question of whether to take 9/11 as a lesson to cut the money flow to Muslim ‘freedom fighters’ in Chechnya does matter, and it’s something where relevant information gets withheld.
I think you are misunderstanding me. The point is that there are two scenarios:
1) Someone doesn’t really know anything about some subject. But they find a conspiracy scenario appealing because they would rather “know” an explanation with little evidence behind it, rather than admit that they don’t know.
2) Information definitely is being hidden from someone, and they say “I want to know that information.”
Both of these involve someone wanting to know, but “wanting to know” is being used in very different ways. If you say that people should “learn to live without knowing things”, that’s a good point in the first scenario but not so good in the second scenario. And the second scenario is what’s taking place for the information that has been censored from lesswrong. (Considering that your reply was pretty much all about 9/11, do you even know what is being referred to by information that has been censored from lesswrong?)
“Learning to live without knowing things” doesn’t mean that you don’t value information. It means that when you can’t or don’t know, you’re not in constant suffering. It means that you don’t get all freaked out and desperate for anything that looks like an answer (e.g. a false conspiracy theory).
It’s the difference between experiencing crippling performance anxiety and just wanting to give a good performance. The difference between “panic mode” and “optimizing mode”. Once you can live with the worst case, fear doesn’t control you any more—but that doesn’t mean you’re not motivated to avoid the worst case!
2) Information definitely is being hidden from someone, and they say “I want to know that information.”
In the case of 9/11 there is definitely information that’s hidden. Anybody who roughly understands how the US government works should expect that’s true. Anybody who studies the issue in detail will find out that’s true.
do you even know what is being referred to by information that has been censored from lesswrong
Yes, I’m aware of three different instances in which information got censored on Lesswrong. There are additional instances where authors deleted their own posts which you could also call censorship.
I don’t think that the value of discovering the information in any of those three cases of censorship is very high to anyone.
In the case of 9/11 there is definitely information that’s hidden.
The two senses of “wanting to know” can both be applied to 9/11.
Someone who “wants to know” in the sense of ignoring evidence to be able to “know” that 9/11 was caused by a conspiracy is better off not wanting to know.
Someone who wants to know information about 9/11 that is hidden but actually exists is not better off not wanting to know. Wanting to know in this sense is generally a good thing. (Except for privacy and security concerns, but politicians doing things is not privacy, and a politician who says something should be hidden for national security is probably lying).
I don’t think that the value of discovering the information in any of those three cases of censorship is very high to anyone
I was referring to the basilisk. Telling people what the basilisk is is very valuable as criticism of LW, and has high “negative value” to LW itself because of how embarrassing it is to LW.
You think that wanting to know the truth means you get to decide what the information you don’t have will turn out to say. That isn’t true.
To the extent that there is an interest in weakening Russia and China geopolitically by funding separatist movements within their borders, there is obviously an interest in staying silent about how those movements get funded and which individuals do the funding.
US Senator Bob Graham made statements about how crucial information on the potential role of Saudi funding of the 9/11 attacks got censored out of the report (see Wikipedia: http://en.wikipedia.org/wiki/9/11_Commission_Report).
Whether or not you call that a conspiracy is irrelevant. Calling it a conspiracy is just a label.
How many Saudis would have to have what specific ties with Al Qaeda and parts of the US government before it counts as a Conspiracy™? This is far from a black-and-white affair. Obsessing over the label makes you ignore the real issues that are at stake. The US government might very well be hiding information about people who likely paid for 9/11.
Once you understand that fact, you might want to know the information. Unfortunately there is no easy way to know, especially as an individual. If you want a quick fix, then you will believe in a lie. You actually have to be okay with knowing that you don’t know if you don’t want to believe in lies.
I was referring to the basilisk. Telling people what the basilisk is is very valuable as criticism of LW, and has high “negative value” to LW itself because of how embarrassing it is to LW.
Explaining to someone the whole story of what TDT is, in a way that makes the basilisk debate make sense to them, is not an easy task. You are basically telling outsiders a strawman if you try to summarize the basilisk debate. In a lot of fields there are complex arguments that seem strange and silly to outsiders; the existence of those cases is no argument against those fields.
Another thing I learned while debating is that you focus on refuting your opponent’s strong arguments, not their weak ones. Good criticism isn’t criticism that focuses on obvious mistakes that someone makes. Good criticism focuses on issues where there are actually strong arguments, and shows that there are better arguments against the position.
Steelmanning is better than arguing against a strawman when you want to be a valuable critic. If a strawman argument about the basilisk is the best you can do to criticize LW, LW is a pretty awesome place.
You are basically telling outsiders a strawman if you try to summarize the basilisk debate. In a lot of fields there are complex arguments that seem strange and silly to outsiders; the existence of those cases is no argument against those fields.
-- A whole lot of arguments on LW seem silly to outsiders. I just got finished arguing that it’s okay to kill people to take their organs (or rather, that it’s okay to do so in a hypothetical situation that may not really be possible). Should that also be deleted from the site?
-- LW has a conflict of interest when deciding that some information is so easy to take out of context that it must be suppressed, but when suppressing the information also benefits LW for other reasons. Conflicts of interest should generally be avoided because of the possibility that they taint one’s judgment—even if it’s not possible to prove that the conflict of interest does so.
-- I am not convinced that “they’re crazy enough to fall for the basilisk” is strawmanning LW. Crazy-sounding ideas are more likely to be false than non-crazy-sounding ideas (even if you don’t have the expertise to tell whether an idea is really crazy or just crazy-sounding). Ideas which have not been reviewed by the scientific community are more likely to be false than ideas which have. You can do a legitimate Bayesian update based on the basilisk sounding crazy.
-- Furthermore, LW doesn’t officially believe in the Basilisk. So it’s not “the Basilisk sounds crazy to outsiders because they don’t understand it”, it’s “even insiders concede that the Basilisk is crazy, it just sounds more crazy to outsiders because they don’t understand it”, which is a much weaker reason to suppress it than the former one.
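The kind of Bayesian update described in the bullets above can be sketched with a toy calculation. This is a minimal illustration with entirely made-up numbers; it is not a claim about the actual probability of the basilisk, or any other idea, being true.

```python
# Toy Bayes-rule update: P(idea is true | idea sounds crazy).
# All probabilities here are invented for illustration only.

def posterior_true(prior_true, p_crazy_given_true, p_crazy_given_false):
    """Posterior probability that the idea is true, given that it sounds crazy."""
    # Total probability of the "sounds crazy" observation.
    p_crazy = (p_crazy_given_true * prior_true
               + p_crazy_given_false * (1.0 - prior_true))
    # Bayes' rule.
    return p_crazy_given_true * prior_true / p_crazy

# Suppose 10% of ideas you encounter are true, true ideas sound
# crazy 20% of the time, and false ideas sound crazy 60% of the time.
print(round(posterior_true(0.10, 0.20, 0.60), 3))  # → 0.036
```

Under these invented numbers, the "sounds crazy" observation legitimately moves the posterior below the prior, even before anyone evaluates the idea's content.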
A whole lot of arguments on LW seem silly to outsiders. I just got finished arguing that it’s okay to kill people to take their organs (or rather, that it’s okay to do so in a hypothetical situation that may not really be possible).
That debate is shared with academic ethics as, IIRC, a standard scenario given as criticism of some forms of utilitarian ethics, is it not? I think that’s a mitigating factor. It may sound funny to discuss ‘quarks’ (quark quark quark! funny sound, isn’t it?) or ‘gluons’ but that also is borrowed from an academic field.
-- A whole lot of arguments on LW seem silly to outsiders. I just got finished arguing that it’s okay to kill people to take their organs (or rather, that it’s okay to do so in a hypothetical situation that may not really be possible). Should that also be deleted from the site?
It’s not deleted because it’s silly to outsiders. You said it was important criticism. It’s not.
LW has a conflict of interest when deciding that some information is so easy to take out of context that it must be suppressed, but when suppressing the information also benefits LW for other reasons.
Discussions like the one we are having here aren’t suppressed on LW. If the basilisk censorship were about that, this discussion would be off-limits, which it isn’t.
The problem with updating on the basilisk is that you don’t have access to the reasoning based on which the basilisk got censored. If you want to update on whether someone makes rational decisions, it makes a lot of sense to focus on instances where the person is actually fully open about why he does what he does.
It’s also a case where there was time pressure to make a decision, while a lot of LW discussions aren’t of that nature, and intellectual positions get developed over months and years. A case where a decision was made within a day is not representative of the way opinions get formed on LW.
Discussions like the one we are having here aren’t suppressed
But outsiders wouldn’t have any idea what we’re talking about (unless they googled “Roko’s Basilisk”).
The problem with updating on the basilisk is that you don’t have access to the reasoning based on which the basilisk got censored. If you want to update on whether someone makes rational decisions, it makes a lot of sense to focus on instances where the person is actually fully open about why he does what he does.
Just because you don’t have all information doesn’t mean that the information you do have isn’t useful. Of course updating on “the Basilisk sounds like a crazy idea” isn’t as good as doing so based on completely comprehending it, but that doesn’t mean it’s useless or irrational. Besides, LW (officially) agrees that it’s a crazy idea, so it’s not as if comprehending it would lead to a vastly different conclusion.
And again, LW has a conflict of interest in deciding that reading the Basilisk won’t provide outsiders with useful information. The whole reason we point out conflicts of interest in the first place is that we think certain parties shouldn’t make certain decisions. So arguing “LW should decide not to release the information because X” is inherently wrong—LW shouldn’t be deciding this at all.
It’s also a case where there was time pressure to make a decision, while a lot of LW discussions aren’t of that nature, and intellectual positions get developed over months and years.
There was time pressure when the Basilisk was initially censored. There’s no time pressure now.
Explaining to someone the whole story of what TDT is in a way that the basilisk debate makes sense to them is not an easy task. You are basically telling outsiders a strawman if you try to summarize the basilisk debate. In a lot of fields there are complex argument that seem strange and silly to outsiders, the existence of those cases is no argument against those fields.
What does it mean to “make sense” of the basilisk debate? I am curious whether you are suggesting that it makes sense to worry about any part or interpretation of it.
No matter what you think about RationalWiki in general, I believe it does a good job at explaining it. But if that is not the case, you are very welcome to visit the talk page there and provide a better account.
I find it inconsistent to censor things to protect sensitive people who think about AI but not people who are sensitive to all the other things that are discussed here.
To the extent there is censorship of dangerous information on LW, the danger is to the future of mankind rather than to the (very real, and I don’t mean to minimize this) feelings of readers.
One could make the argument that anything that harms the mission of lesswrong’s sponsoring organizations is to the detriment of mankind. I’m not opposed to that argument, but googling censorship of lesswrong did not turn up anything I considered to be particularly dangerous. Maybe that just means the censorship is more effective than I would have predicted, or it’s indicative of a lack of imagination on my part.
I’d say that “censorship” (things that could be classified or pattern-matched to this word) happens less than once in a year. That could actually contribute to why people speak so much about it; if it happened every day, it would be boring.
From my memory, this is “censored”:
1. inventing scenarios about Pascal’s mugging by an AI
2. debating, even hypothetically, harm towards a specific person or organization
3. replying to a downvoted post (automatically penalized by −5 karma)
Options 2 and 3 are just common sense and could happen on any website. Thus, most talk about “censorship” on LW focuses on option 1.
(By the way, if you learned about the “basilisk” on RationalWiki, here is a little thing I just noticed today: The RW article has a screenshot of dozens of deleted comments, which you will obviously associate with the incident. Please note that the “basilisk” incident happened in 2010, and the screenshot is from 2012. So this is not the censorship of the original debate. It is probably a censorship of some “why did you remove this comment two years ago? let’s talk about it forever and ever” meta-threads that were quite frequent and IMHO quite annoying at some time.)
Also, when a comment or article is removed, at least the message about the removal stays there. There is no meta-censorship (trying to hide the fact that censorship happened). If you don’t see messages about removed comments at some place, it means no comments were removed there.
By meta-censorship I meant things like removing the content from the website without a trace, so unless you look at the google cache, you have no idea that anything happened, and unless someone quickly makes a backup, you have no proof that it happened.
Leaving the notices “this comment was removed” on the page is precisely what allowed RW to make a nice screenshot about LW censorship. LW itself provided evidence that some comments were deleted. Providing a hyperlink instead of screenshot would probably give the same information.
Also, I am mentioning the basilisk now, and I have above 95% confidence that this comment will not be deleted. (One of the reasons is that it doesn't get into details; it doesn't try to restart the whole debate. Another reason is that I don't start a new thread.)
There’s not a lot of actual censorship of dangerous information “for the future of mankind”. Or at least, I rate that as fairly unlikely, given that when the scientific groundwork for a breakthrough has been laid, multiple people usually invent it in parallel, close to each other in time. Which means that unless you can get everyone who researches dangerous-level AI onto LW, censoring on LW won’t really help; it will just ensure that someone less scrupulous publishes first.
“Three may keep a secret, if two of them are dead.”
Conspiracy is hard. If you don’t have actual legal force backing you up, it’s nearly impossible to keep information from spreading out of control—and even legal force is by no means a sure thing. The existence of the Groom Lake air station, for example, was suspected for decades before publicly available satellite images made it pointless to keep up even the pretense of secrecy.
For an extragovernmental example, consider mystery religions. These aren’t too uncommon: they’re not as popular as they once were, but new or unusual religions still often try to elide the deepest teachings of their faiths, either for cultural/spiritual reasons (e.g. Gardnerian Wicca) or because they sound as crazy as six generations of wolverines raised on horse tranquilizers and back issues of Weird Tales (e.g. Scientology).
Now, where’s it gotten them? Well, Gardnerian Wiccans will still tell you they’re drinking from a vast and unplumbed well of secret truths, but it’s trivially easy to find dozens of different Books of Shadows (some from less restrictive breakaway lineages, some from people who just broke their oaths) that agree on the broad strokes and many of the details of the Gardnerian mysteries. (Also many others that bear almost no resemblance beyond the name and some version of the Lesser Banishing Ritual of the Pentagram, but never mind that.) As to Scientology, Operation Clambake (xenu.net) had blown that wide open years before South Park popularized the basic outline of what’s charmingly known as “space opera”; these days it takes about ten minutes to fire up a browser and pull down a more-or-less complete set of doctrinal PDFs by way of your favorite nautical euphemism. Less if it’s well seeded.
“But these are just weird minority religions,” you say? “Knowing this stuff doesn’t actually harm my spiritual well-being, because I only care about the fivefold kisses when my SO’s involved and there’s no such thing as body thetans”? Sure, but the whole point of a mystery religion is selecting for conviction. Typically they’re gated by an initiation period measured in years and thousands of dollars, not to mention some truly hair-raising oaths; I don’t find it plausible that science broadly defined can do much better.
You are clearly right that conspiracy is hard. And yet, it is not impossible. Plenty of major events are caused by conspiracies, from the assassination of Julius Caesar to the recent coup in Thailand. In addition, to truly prevent a conspiracy, it is often necessary to do more than merely reveal it; if the conspirators have plausible deniability, then revealing (but not thwarting) the conspiracy can actually strengthen the plotters’ hands, as they can now coordinate more easily with outside supporters.
Successful conspiracies, like any other social organization, need incentive compatibility. Yes, it’s easy to find out the secrets of the Scientology cult. Not so easy to find out the secret recipe for Coca Cola, though.
I find it inconsistent to censor things to protect sensitive people who think about AI but not people who are sensitive to all the other things that are discussed here.
To the extent there is censorship of dangerous information on LW, the danger is to the future of mankind rather than to the (very real and I don’t mean to minimize this) feelings of readers.
Have you asked the people who are able to censor information on LW, or do you just assume this to be the case?
Do the people in charge of LW censor information that is neither dangerous nor spam?
I infer it’s the case from being a regular reader of LW. I don’t know if LW censors other types of information in part because spam is not a well defined category.
dangerous information on LW, the danger is to the future of mankind
I think that would be far overstating the importance of this forum. If Eliezer/MIRI have some dark secrets (or whatever they consider to be dangerous knowledge), they surely didn’t make it to LW.
I would assume the main explanation to be just “conspiracies are cool”, the same reason why they pop up in all kinds of other fiction ranging from The X-Files to Babylon 5 to Deus Ex to the Illuminati card game to whatever.
Oh come on. You’ve never steepled your fingers and pretended to be a Bond villain? Or, let’s say it, to be Gendo Ikari? Being an evil conspirator is fun.
Why do you think EY uses conspiracy in his fictional writing? He seems to use them in positive or at least not clearly negative light, which is not how I think of conspiracies at all. I notice that I am confused, so I’m trying to gather some other opinions.
The anecdote in this post, about Fermi, Rabi and Szilard considering keeping the possibility of practical nuclear fission a secret, may shed some light on the subject. He thinks that some knowledge is dangerous enough that people who know it may reasonably want to keep it secret.
(much more recently, there has been some controversy about the publication of a way of obtaining a particularily infectious strain of a certain virus, but I can’t find any references for that right now)
This is a perennial issue, occurring in various forms relating to the preservation of viruses like smallpox, the sequencing of their genomes, and increasing their virulence. Looking in Google News for ‘virus research increase virulence’, it seems the most recent such research would be http://www.nature.com/news/biosafety-in-the-balance-1.15447 / http://www.independent.co.uk/news/science/american-scientists-controversially-recreate-deadly-spanish-flu-virus-9529707.html :
EDIT: Sandberg provides an amazing quote on the topic: http://www.aleph.se/andart/archives/2014/07/if_nature_doesnt_do_containment_why_should_i.html
I think that I remember reading an even better example in HPMOR, about publishing scientific results that might have furthered the Nazis’ ability to produce a nuclear weapon, though I can’t recall where it was exactly. I found that example persuasive, but I considered it a distasteful necessity, not a desirable state of affairs. Hence my confusion at Brennan’s world, which, being set in the future of our world, I thought was perhaps post-Singularity, and therefore the epitome of human flourishing. Another commenter asked me if I wouldn’t enjoy the thought of being a super-villain, and I thought, um, no, that would be terrible, so maybe there are some Mind Projection issues going on in both directions. I don’t know the distribution of people who would gain positive utility from a world of conspiracies, but I’m sure there would be a great deal of disutility for some proportion of current people with current minds. I can see where that world might provide challenge and interest for its inhabitants, but I remain highly skeptical that it’s a utilitarian optimum. Using my current brain and assuming stable values, it actually seems pretty dystopian to me, but I’ll admit that’s a limited way to look at things.
Graphite as a neutron moderator, I believe. Ch. 85:
I think it stems from the Brennan’s World weirdtopia, and the idea that making knowledge freely available makes it feel worthless, while making it restricted to members of a secretive group makes it feel as valuable and powerful as it actually is.
If something is valuable and powerful, and (big if) it’s not harmful, plus it’s extremely cheap to reproduce, I see no reason not to distribute it freely. My confusion was that Brennan’s world seems set in the future, and I got the sense that EY may have been in favor of it in some ways (perhaps that’s mistaken). Since it seemed to be set in the future of our world, I got the sense that the Singularity had already happened. Maybe I just need to get to the Fun Theory sequence, but that particular future really made me uneasy.
Perhaps it’s only powerful in the hands of the chosen few. If it’s in the open and it looks powerful, then other people try it and see less than amazing success, and it looks less and less cool until it stops growing. But by then it’s harder for the special few to recognize its value—or perhaps they don’t want to associate themselves with it—and potential is wasted.
If instead the details are kept secret but the powers known publicly, then the masters of the craft are taken seriously and can suck up all the promising individuals.
I don’t know how he feels about it currently, but in the past he did endorse Brennan’s world as a better way to organize society post-Singularity. It started as a thought experiment about how to fix the problem that most people take science for granted and don’t understand how important and powerful it is, and grew into a utopia he found extremely compelling. (To the point where he specifically did not explain the rest of the details because it is too inefficient to risk diverting effort towards. This was probably an overreaction.) He talks about this in
Eutopia is Scary
The linked article ends with this; I think this part of the context is necessary. Emphasis mine:
As I understand it, the Conspiracy world is a mental experiment with different advantages and disadvantages. And a tool used to illustrate some other concepts in a storytelling format (because this is what humans pay more attention to), such as resisting social pressure, actually updating on a difficult topic, and fictional evidence that by more rational thinking we could be more awesome.
But it’s not an optimal (according to Eliezer, as I understand the part I quoted) world. That would be a world where the science is open (and financially available, etc.) to everyone and yet, somehow, people respect it. (The question is, how to achieve that, given human psychology.)
HJPEV is a drama queen and likes acting as if he’s badass (ignore for the moment whether he is) and sinister and evil: Look at what he calls his army and how he acts around them. Hence calling his thing with Draco the Bayesian Conspiracy. Not everything that takes place in an author’s fiction is indicative of something they support.
This, however, is a recurring theme in Eliezer’s work. I don’t think I fully grok the motivations (though I could hazard a guess or two), but it’s definitely not just HJPEV’s supervillain fetish talking.
Agreed, it’s also Eliezer’s super-villain fetish thing.
Conspiracy is the default mode of a group of people getting anything done. Every business is a conspiracy. They plot and scheme within their “offices”, anonymous buildings with nothing but their name on the front door. They tell no-one what they’re doing, beyond legal necessity, and aim to conquer the world by, well, usually the evil plan is to make stuff that people will want to buy.
No organisation conducts all its business in public, whatever its aims. Even if you find one that seems to, dollars to cents you’re not looking at its real processes. There needn’t be anything sinister in this, although of course sometimes there is.
Every one of us is a conspiracy of one.
“Conspiracy” doesn’t mean “people working where you can’t tell what they are doing”.
It means “people working where you can’t tell what they are doing and you worry that you wouldn’t like it”.
EY makes complicated arguments. He’s not the kind of person to argue simply that X is good and Y is bad. Fiction is about playing with ideas.
As far as I can find, the first instance of the term Bayesian Conspiracy appears in a 2003 nonfiction article by Eliezer:
At the time it seemed like a fun joke to make, and it stayed. There are also a variety of other arguments to be made that it’s sometimes not useful to share all information with outsiders.
I’m guessing it’s cultural influence from Discordianism, Shea and Wilson’s Illuminatus!, or the like. Conspiracies, cults, and initiatory orders are all pretty common themes in Discordian-influenced works. Some are destructive, some are constructive, some are both, and some run around in circles.
For the same reason EY supports the censoring of posts on topics he has decided are dangerous for the world to see. He generalizes that if he is willing to hide facts that work against his interests, then others similarly situated to him, but with different interests, will also be willing to work surreptitiously.
I’m relatively new to the site and I wasn’t aware of any censorship. I suppose I can imagine that it might be useful and even necessary to censor things, but I have an intuitive aversion to the whole business. Plus I’m not sure how practical it is, since after you posted that I googled lesswrong censorship and found out what was being censored. I have to say, if they’re willing to censor stuff that causes nightmares then they ought to censor talk of conspiracies, as I can personally attest that it has caused supreme discomfort. Conspiracies are a very harmful meme, and positing a conspiracy can warp your sense of reality. I have bipolar disorder, and I was taking a medicine that increases the level of dopamine in my brain to help with some of the symptoms of depression. Dopamine (I recently rediscovered) increases your brain’s tendency to see patterns, and I had to stop taking a very helpful medication after reading this site. Maybe it would have happened anyway, but the world of conspiracy theories is very dark and my journey there was triggered by his writings. I guess most of the content on this site is disorienting though, but perhaps some clarification about what he thinks the benefits of conspiracies are and what their extent should be would help.
Also, the content on this site is pretty hard hitting in a lot of ways, I find it inconsistent to censor things to protect sensitive people who think about AI but not people who are sensitive to all the other things that are discussed here. I think it’s emblematic of a broader problem with the community, which is that there’s a strong ingroup outgroup barrier, which is a problem when you’re trying to subsist on philanthropy and the ingroup is fairly tiny.
Many websites about conspiracy theories don’t care much about the truth. They don’t go through the work of checking whether what they are saying is true.
On the other hand, organisations such as P2 exist or existed. The Mafia exists. To the extent that we care about truth, we can’t claim that there aren’t groups of people who coordinate together in secret for the benefit of their members. Italy is a pretty good country to think about when you want to think about conspiracies, because there is a lot of publicly available information.
It’s actually pretty easy to see flaws in the argument of someone who claims that the US government brought down the twin towers on 9/11 via explosives if you are actually searching for flaws and not only searching for evidence that the claim might be true. The same goes for lizard overlords.
Learn to live with not knowing things. Learn to live with uncertainty. Living with uncertainty is one of the core skills of a rationalist. If you don’t know, then you don’t know, and wanting to know doesn’t change that. We live in a very complex world that we don’t fully understand.
You found out what was censored in a way where you don’t understand the censored debate in depth, and you took no emotional harm.
Learning to live with not knowing things is good advice if you are trying to choose between “I explain this by saying that people are hiding things” and “I don’t have an explanation”.
Learning to live with not knowing things is poor advice in a context where people are actually hiding things from you and what is not known is what the people are hiding rather than whether the people are hiding something. It is especially poor advice where there is a conflict of interest involved—that is, when the same people telling you you’d be better off not knowing also stand to lose from you knowing.
Needless to say, 9/11 and lizard conspiracy theories fall in the first category and the material that has been censored from lesswrong falls in the second category.
No, if you can’t stand thinking that you don’t know how things work, you are pretty easy to convince of a lie. You take the first lie that makes a bit of sense in your view of the world. The lie makes it feel like you understand the world. It feels better than uncertainty. Any decent organisation that operates in secret puts out lies to distract people who want to know the truth.
Andy Müller-Maguhn stood in front of the Chaos Computer Congress in Germany and managed to give a good description of how the NSA surveils the internet and how the German government lets them spy on German soil. At the time you could have called it a conspiracy theory. Those political Chaos Computer Club people are very aware of what they know and where they are uncertain. That’s required if you want to reason clearly about hidden information.
When it comes to 9/11 the government does hide things. 9/11 is not an event where all information is readily available. It’s pretty clear that the names of some Saudis are hidden. Bin Laden comes from a rich Saudi family and the US wants to keep a good relationship with the Saudi government. I think it’s pretty clear that there is some information that the US didn’t want to have in the 9/11 report because the US doesn’t want to damage the relationship with the Saudis.
Various parts of the NSA and CIA do not want to share all their information about what they are doing with congressional inquiries. As a result they hid information from the 9/11 commission. The NSA wants to keep a lot of stuff out of the public eye that could be found out if a congressional commission dug around and got full cooperation. The chief of the NSA lied under oath to Congress about the US spying program. A congressional commission that investigated 9/11 fully would want to look at all the evidence that the NSA had gathered at that point, and that’s not what the NSA wants, even if the NSA didn’t do anything to make 9/11 happen.
If someone finds evidence of the NSA withholding information from a congressional commission, that shouldn’t surprise you at all, nor should it increase your belief that the NSA orchestrated 9/11; they are always hiding stuff.
Information about Al Qaeda support for the Muslim fighters that NATO helped fight for the independence of Kosovo isn’t clear.
The extent to which Chechen Muslim freedom fighters are financed by Saudi or Western sources isn’t clear. The same goes for the Uyghurs.
General information about the identities of people who did short selling before 9/11 was hidden because the US government just doesn’t release all information about all short selling publicly.
The problem with 9/11 is that people go to school and learn that the government is supposed to tell them the truth and not hide things. Then they grow up a bit and are faced with a world where government constantly hides information and lies. Then those people take the evidence that the government hides information in a case like 9/11 as evidence that the US government caused the twin towers to be destroyed with dynamite.
Politically, the question whether to take 9/11 as a lesson to cut the money flow to Muslim ‘freedom fighters’ in Chechnya does matter, and it’s something where relevant information gets withheld.
I think you are misunderstanding me. The point is that there are two scenarios:
1) Someone doesn’t really know anything about some subject. But they find a conspiracy scenario appealing because they would rather “know” an explanation with little evidence behind it, rather than admit that they don’t know.
2) Information definitely is being hidden from someone, and they say “I want to know that information.”
Both of these involve someone wanting to know, but “wanting to know” is being used in very different ways. If you say that people should “learn to live without knowing things”, that’s a good point in the first scenario but not so good in the second scenario. And the second scenario is what’s taking place for the information that has been censored from lesswrong. (Considering that your reply was pretty much all about 9/11, do you even know what is being referred to by information that has been censored from lesswrong?)
“learning to live without knowing things” doesn’t mean that you don’t value information. It means that when you can’t/don’t know, you’re not in constant suffering. It means that you don’t get all freaked out and desperate for anything that looks like an answer (e.g. a false conspiracy theory)
It’s the difference between experiencing crippling performance anxiety and just wanting to give a good performance. The difference between “panic mode” and “optimizing mode”. Once you can live with the worst case, fear doesn’t control you any more—but that doesn’t mean you’re not motivated to avoid the worst case!
In the case of 9/11 there is definitely information that’s hidden. Anybody who roughly understands how the US government works should expect that’s true. Anybody who studies the issue in detail will find out that’s true.
Yes, I’m aware of three different instances in which information got censored on Lesswrong. There are additional instances where authors deleted their own posts which you could also call censorship.
I don’t think that the value of discovering the information in any of those three cases of censorship is very high to anyone.
The two senses of “wanting to know” can both be applied to 9/11.
Someone who “wants to know” in the sense of ignoring evidence to be able to “know” that 9/11 was caused by a conspiracy is better off not wanting to know.
Someone who wants to know information about 9/11 that is hidden but actually exists is not better off not wanting to know. Wanting to know in this sense is generally a good thing. (Except for privacy and security concerns, but politicians doing things is not privacy, and a politician who says something should be hidden for national security is probably lying).
I was referring to the basilisk. Telling people what the basilisk is is very valuable as criticism of LW, and has high “negative value” to LW itself because of how embarrassing it is to LW.
You seem to think that wanting to know the truth means you can decide what the information you don’t have says. That isn’t true.
To the extent that there is an interest in weakening Russia and China geopolitically by funding separatist movements within their borders, there is obviously an interest in staying silent about how those movements get funded and which individuals do the funding.
US senator Bob Graham made statements about how crucial information on the potential role of Saudi funding of the 9/11 attack got censored out of the report. (see Wikipedia: http://en.wikipedia.org/wiki/9/11_Commission_Report) Whether or not you call that a conspiracy is irrelevant. Calling it a conspiracy is just a label.
How many Saudis would have to have what specific ties with Al Qaeda and parts of the US government before it counts as a conspiracy™? This is far from a black and white affair. Obsessing over the label makes you ignore the real issues that are at stake. The US government might very well be hiding information about people who likely paid for 9/11.
Once you understand that fact you might want to know the information. Unfortunately there is no easy way to know, especially as an individual. If you want a quick fix, then you will believe a lie. You actually have to be okay with knowing that you don’t know if you don’t want to believe in lies.
Explaining to someone the whole story of what TDT is in a way that the basilisk debate makes sense to them is not an easy task. You are basically telling outsiders a strawman if you try to summarize the basilisk debate. In a lot of fields there are complex argument that seem strange and silly to outsiders, the existence of those cases is no argument against those fields.
Another thing that I learned while doing debating is that you focus on refuting the strong arguments of your opponent, not the weak ones. Good criticism isn’t criticism that focuses on obvious mistakes that someone makes. Good criticism focuses on issues where there actually are strong arguments, and it shows that there are better arguments against the position.
Steelmanning is better than arguing against a strawman when you want to be a valuable critic. If a strawman argument about the basilisk is the best you can do to criticize LW, LW is a pretty awesome place.
-- A whole lot of arguments on LW seem silly to outsiders. I just got finished arguing that it’s okay to kill people to take their organs (or rather, that it’s okay to do so in a hypothetical situation that may not really be possible). Should that also be deleted from the site?
-- LW has a conflict of interest when deciding that some information is so easy to take out of context that it must be suppressed, but when suppressing the information also benefits LW for other reasons. Conflicts of interest should generally be avoided because of the possibility that they taint one’s judgment—even if it’s not possible to prove that the conflict of interest does so.
-- I am not convinced that “they’re crazy enough to fall for the basilisk” is strawmanning LW. Crazy-sounding ideas are more likely to be false than non-crazy-sounding ideas (even if you don’t have the expertise to tell whether it’s really crazy or just crazy-sounding). Ideas which have not been reviewed by the scientific community are more likely to be false than ideas which have. You can do a legitimate Bayesian update based on the Basilisk sounding crazy.
-- Furthermore, LW doesn’t officially believe in the Basilisk. So it’s not “the Basilisk sounds crazy to outsiders because they don’t understand it”, it’s “even insiders concede that the Basilisk is crazy, it just sounds more crazy to outsiders because they don’t understand it”, which is a much weaker reason to suppress it than the former one.
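The “legitimate Bayesian update” described above can be put into numbers. This is only an illustration; the prior and likelihoods below are invented for the sake of the example, not taken from anyone in the thread:

```python
# Illustrative Bayes update: how much should "this idea sounds crazy"
# raise your credence that the idea is false? All numbers are made up.

def posterior(prior_false, p_crazy_if_false, p_crazy_if_true):
    """P(idea is false | idea sounds crazy), by Bayes' theorem."""
    p_crazy = (p_crazy_if_false * prior_false
               + p_crazy_if_true * (1 - prior_false))
    return p_crazy_if_false * prior_false / p_crazy

# Hypothetical numbers: 70% of ideas in this reference class are false;
# false ideas sound crazy 60% of the time, true ones only 20%.
print(round(posterior(0.70, 0.60, 0.20), 3))  # → 0.875
```

With these made-up numbers, observing that an idea sounds crazy moves a 70% prior that it is false up to about 87.5%, which is exactly the direction of update the comment is defending.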
That debate is shared with academic ethics as, IIRC, a standard scenario given as criticism of some forms of utilitarian ethics, is it not? I think that’s a mitigating factor. It may sound funny to discuss ‘quarks’ (quark quark quark! funny sound, isn’t it?) or ‘gluons’ but that also is borrowed from an academic field.
It’s not deleted because it’s silly to outsiders. You said it was important criticism. It’s not.
Discussions like the one we are having here aren’t suppressed on LW. If basilisk censoring were about that, this discussion would be outside the limits, which it isn’t.
The problem with updating on the basilisk is that you don’t have access to the reasoning based on which the basilisk got censored. If you want to update on whether someone makes rational decisions, it makes a lot of sense to focus on instances where the person is actually fully open about why he does what he does.
It’s also a case where there was time pressure to make a decision, while a lot of LW discussions aren’t of that nature and intellectual positions get developed over months and years. A case where a decision was made within a day is not representative of the way opinions get formed on LW.
But outsiders wouldn’t have any idea what we’re talking about (unless they googled “Roko’s Basilisk”).
Just because you don’t have all information doesn’t mean that the information you do have isn’t useful. Of course updating on “the Basilisk sounds like a crazy idea” isn’t as good as doing so based on completely comprehending it, but that doesn’t mean it’s useless or irrational. Besides, LW (officially) agrees that it’s a crazy idea, so it’s not as if comprehending it would lead to a vastly different conclusion.
And again, LW has a conflict of interest in deciding that reading the Basilisk won’t provide outsiders with useful information. The whole reason we point out conflicts of interest in the first place is that we think certain parties shouldn’t make certain decisions. So arguing “LW should decide not to release the information because X” is inherently wrong—LW shouldn’t be deciding this at all.
There was time pressure when the Basilisk was initially censored. There’s no time pressure now.
You underrate the intelligence of the folks who read LW. If someone wants to know he googles it.
Sure?
What does it mean “to make sense” of “the basilisk” debate? I am curious if you are suggesting that it makes sense to worry about any part or interpretation of it.
No matter what you think about RationalWiki in general, I believe it does a good job at explaining it. But if that is not the case, you are very welcome to visit the talk page there and provide a better account.
To the extent there is censorship of dangerous information on LW, the danger is to the future of mankind rather than to the (very real and I don’t mean to minimize this) feelings of readers.
One could make the argument that anything that harms the mission of lesswrong’s sponsoring organizations is to the detriment of mankind. I’m not opposed to that argument, but googling censorship of lesswrong did not turn up anything I considered to be particularly dangerous. Maybe that just means that the censorship is more effective than I would have predicted, or is indicative of a lack of imagination on my part.
And yet earlier in your post you’re talking about some posts in 2012 about censorship in 2010 being deleted. Smells like meta-censorship to me.
By meta-censorship I meant things like removing the content from the website without a trace, so unless you look at the google cache, you have no idea that anything happened, and unless someone quickly makes a backup, you have no proof that it happened.
Leaving the notices “this comment was removed” on the page is precisely what allowed RW to make a nice screenshot about LW censorship. LW itself provided evidence that some comments were deleted. Providing a hyperlink instead of screenshot would probably give the same information.
Also, I am mentioning basilisk now, and I have above 95% confidence that this comment will not be deleted. (One of the reasons is that it doesn’t get into details; it doesn’t try to restart the whole debate. Another reason is that don’t start a new thread.)
There’s not a lot of actual censorship of dangerous information “for the future of mankind”. Or at least, I rate that as fairly unlikely, given that when the scientific groundwork for a breakthrough has been laid, multiple people usually invent it in parallel, close to each other in time. Which means that unless you can get everyone who researches dangerous-level AI onto LW, censoring on LW won’t really help; it will just ensure that someone less scrupulous publishes first.
“Three may keep a secret, if two of them are dead.”
Conspiracy is hard. If you don’t have actual legal force backing you up, it’s nearly impossible to keep information from spreading out of control—and even legal force is by no means a sure thing. The existence of the Groom Lake air station, for example, was suspected for decades before publicly available satellite images made it pointless to keep up even the pretense of secrecy.
For an extragovernmental example, consider mystery religions. These aren’t too uncommon: they’re not as popular as they once were, but new or unusual religions still often try to elide the deepest teachings of their faiths, either for cultural/spiritual reasons (e.g. Gardnerian Wicca) or because they sound as crazy as six generations of wolverines raised on horse tranquilizers and back issues of Weird Tales (e.g. Scientology).
Now, where’s it gotten them? Well, Gardnerian Wiccans will still tell you they’re drinking from a vast and unplumbed well of secret truths, but it’s trivially easy to find dozens of different Books of Shadows (some from less restrictive breakaway lineages, some from people who just broke their oaths) that agree on the broad strokes and many of the details of the Gardnerian mysteries. (Also many others that bear almost no resemblance beyond the name and some version of the Lesser Banishing Ritual of the Pentagram, but never mind that.) As to Scientology, Operation Clambake (xenu.net) had blown that wide open years before South Park popularized the basic outline of what’s charmingly known as “space opera”; these days it takes about ten minutes to fire up a browser and pull down a more-or-less complete set of doctrinal PDFs by way of your favorite nautical euphemism. Less if it’s well seeded.
“But these are just weird minority religions,” you say? “Knowing this stuff doesn’t actually harm my spiritual well-being, because I only care about the fivefold kisses when my SO’s involved and there’s no such thing as body thetans”? Sure, but the whole point of a mystery religion is selecting for conviction. Typically they’re gated by an initiation period measured in years and thousands of dollars, not to mention some truly hair-raising oaths; I don’t find it plausible that science broadly defined can do much better.
So I’m the only one here who actually took a hair-raising oath before making an account?
You’re not allowed to talk about the oath! Why am I the only one who seems able to keep it?
Because there are different factions at work, you naked ape.
Nah, I hear we traditionally save that for after you earn your 10,000th karma point and take the Mark of Bayes.
You probably need to get those 10K karma points from Main.
You are clearly right that conspiracy is hard. And yet, it is not impossible. Plenty of major events are caused by conspiracies, from the assassination of Julius Caesar to the recent coup in Thailand. In addition, to truly prevent a conspiracy, it is often necessary to do more than merely reveal it; if the conspirators have plausible deniability, then revealing (but not thwarting) the conspiracy can actually strengthen the plotters’ hands, as they can now coordinate more easily with outside supporters.
Successful conspiracies, like any other social organization, need incentive compatibility. Yes, it’s easy to find out the secrets of the Scientology cult. Not so easy to find out the secret recipe for Coca Cola, though.
Have you asked the people who are able to censor information on LW, or do you just assume this to be the case?
Do the people in charge of LW censor information that is neither dangerous nor spam?
I infer it’s the case from being a regular reader of LW. I don’t know whether LW censors other types of information, in part because spam is not a well-defined category.
I think that would be far overstating the importance of this forum. If Eliezer/MIRI have some dark secrets (or whatever they consider to be dangerous knowledge), they surely didn’t make it to LW.
I would assume the main explanation to be just “conspiracies are cool”, the same reason why they pop up in all kinds of other fiction ranging from The X-Files to Babylon 5 to Deus Ex to the Illuminati card game to whatever.
A “conspiracy” may be usefully generalised as any group of people trying to get something done.
Oh come on. You’ve never steepled your fingers and pretended to be a Bond villain? Or, let’s say it, to be Gendo Ikari? Being an evil conspirator is fun.