This is a politically reinforced heuristic that does not work for this problem.
Transparency is very important when it comes to people and organisations in powerful and unique positions. How they act and what they claim in public is only weak evidence of their honesty. To claim that they have to censor certain information in the name of the greater public good, and to back that decision with their public reputation, bears no evidence about their true objectives. The only way to resolve this issue is by means of transparency.
Surely transparency might have negative consequences, but those do not outweigh the risks of simply believing that certain people are telling the truth and are not engaging in deception in pursuit of their true objectives.
There is also nothing that Yudkowsky has ever achieved that would sufficiently prove the superior intellect that would, in turn, justify people in simply believing him about some extraordinary claim.
When I say something is a misapplied politically reinforced heuristic, you only reinforce my point by making fully general political arguments that it is always right.
Censorship is not the most evil thing in the universe. The consequences of transparency are allowed to be worse than censorship. Deal with it.
When I say something is a misapplied politically reinforced heuristic, you only reinforce my point by making fully general political arguments that it is always right.
I already had Anna Salamon telling me something about politics. You sound just as incomprehensible to me. Sorry, this is not meant as an attack.
Censorship is not the most evil thing in the universe. The consequences of transparency are allowed to be worse than censorship. Deal with it.
I have stated several times in the past that I am completely in favor of censorship; I have no idea why you are telling me this.
Our rules and intuitions about free speech and censorship are based on the types of censorship we usually see in practice. Ordinarily, if someone is trying to censor a piece of information, then that information falls into one of two categories: either it’s information that would weaken them politically, by making others less likely to support them and more likely to support their opponents, or it’s information that would enable people to do something that they don’t want done.
People often try to censor information that makes people less likely to support them, and more likely to support their opponents. For example, many governments try to censor embarrassing facts (“the Purple Party takes bribes and kicks puppies!”), the fact that opposition exists (“the Pink Party will stop the puppy-kicking!”) and its strength (“you can join the Pink Party, there are 10^4 of us already!”), and organization of opposition (“the Pink Party rally is tomorrow!”). This is most obvious with political parties, but it happens anywhere people feel like there are “sides”—with religions (censorship of “blasphemy”) and with public policies (censoring climate change studies, reports from the Iraq and Afghan wars). Allowing censorship in this category is bad because it enables corruption, and leaves less-worthy groups in charge.
The second common instance of censorship is encouragement and instructions for doing things that certain people don’t want done. Examples include cryptography, how to break DRM, pornography, and bomb-making recipes. Banning these is bad if the capability is suppressed for a bad reason (cryptography enables dissent), if it’s entangled with other things (general-purpose chemistry applies to explosives), or if it requires infrastructure that can also be used for the first type of censorship (porn filters have been caught blocking politicians’ campaign sites).
These two cases cover 99.99% of the things we call “censorship”, and within these two categories, censorship is definitely bad, and usually worth opposing. It is normally safe to assume that if something is being censored, it is for one of these two reasons. There are gray areas—slander (when the speaker knows he’s lying and has malicious intent), and bomb-making recipes (when they’re advertised as such and not general-purpose chemistry), for example—but the law has the exceptions mapped out pretty accurately. (Slander gets you sued, bomb-making recipes get you surveilled.) This makes a solid foundation for the principle that censorship should be opposed.
However, that principle and the analysis supporting it apply only to censorship that falls within these two domains. When things fall outside these categories, we usually don’t call them censorship; for example, there is a widespread conspiracy among email and web site administrators to suppress ads for Viagra, but we don’t call that censorship, even though it meets every aspect of the definition except motive. If you happen to find a weird instance of censorship which doesn’t fall into either category, then you have to start over and derive an answer to whether censorship in that particular case is good or bad, from scratch, without resorting to generalities about censorship-in-general. Some of the arguments may still apply—for example, building a censorship-technology infrastructure is bad even if it’s only meant to be used on spam—but not all of them, and not with the same force.
If the usual arguments against censorship don’t apply, and we’re trying to figure out whether to censor it, the next two things to test are whether it’s true, and whether an informed reader would want to see it. If both of these conditions hold, then it should not be censored. However, if either condition fails to hold, then it’s okay to censor.
Either the forbidden post is false, in which case it does not deserve protection because it’s false, or it’s true, in which case it should be censored because no informed person should want to see it. In either case, people spreading it are doing a bad thing.
Either the forbidden post is false, in which case it does not deserve protection because it’s false,
Even if this is right, the censorship extends to possibly-true conversations about why the post is false. Moreover, I don't see what truth has to do with it. There are plenty of false claims made on this site that should nonetheless be public, because understanding why they're false, and how someone might come to think they're true, are worthwhile endeavors.
The question here is rather straightforward: does the harm of the censorship outweigh the harm of letting people talk about the post? I can understand how you might initially think that those who disagree with you are just responding to knee-jerk anti-censorship instincts that aren't necessarily valid here. But from where I stand, the arguments made by those who disagree with you do not fit this pattern. I think XiXi has been clear in the past about why the transparency concern does apply to SIAI. We've also seen arguments for why censorship in this particular case is a bad idea.
Either the forbidden post is false, in which case it does not deserve protection because it’s false, or it’s true, in which case it should be censored because no informed person should want to see it. In either case, people spreading it are doing a bad thing.
There are clearly more than two options here. There seem to be two points under contention:
It is/is not (1/2) reasonable to agree with the forbidden post.
It is/is not (3/4) desirable to know the contents of the forbidden post.
You seem to be restricting us to either 2+3 or 1+4. It seems that 1+3 is plausible (should we keep children from ever knowing about death because it’ll upset them?), and 2+4 seems like a good argument for restriction of knowledge (the idea is costly until you work through it, and the benefits gained from reaching the other side are lower than the costs).
But I personally suspect 2+3 is the best description, and that doesn’t explain why people trying to spread it are doing a bad thing. Should we delete posts on Pascal’s Wager because someone might believe it?
Either the forbidden post is false, in which case it does not deserve protection because it’s false, or it’s true, in which case it should be censored because no informed person should want to see it.
Excluded middle, of course: incorrect criterion. (Was this intended as a test?) It would not deserve protection if it were useless (like spam), not “if it were false.”
The reason I consider sufficient to keep it off LessWrong is that it actually hurt actual people. That’s pretty convincing to me. I wouldn’t expunge it from the Internet (though I might put a warning label on it), but from LW? Appropriate. Reposting it here? Rude.
Unfortunately, that’s also an argument as to why it needs serious thought applied to it, because if the results of decompartmentalised thinking can lead there, humans need to be able to handle them. As Vaniver pointed out, there are previous historical texts that have had similar effects. Rationalists need to be able to cope with such things, as they have learnt to cope with previous conceptual basilisks. So it’s legitimate LessWrong material at the same time as being inappropriate for here. Tricky one.
(To the ends of that “compartmentalisation” link, by the way, I’m interested in past examples of basilisks and other motifs of harmful sensation in idea form. Yes, I have the deleted Wikipedia article.)
Note that I personally found the idea itself silly at best.
The assertion that if a statement is not true, fails to alter political support, fails to provide instruction, and an informed reader wants to see that statement, it is therefore a bad thing to spread that statement and an OK thing to censor, is, um, far from uncontroversial.
To begin with, most fiction falls into this category. For that matter, so does most nonfiction, though at least in that case the authors generally don’t intend for it to be non-true.
The assertion that if a statement is not true, fails to alter political support, fails to provide instruction, and an informed reader wants to see that statement, it is therefore a bad thing to spread that statement and an OK thing to censor, is, um, far from uncontroversial.
No, you reversed a sign bit: it is okay to censor if an informed reader wouldn’t want to see it (and the rest of those conditions).
No, I don’t think so. You said “if either condition fails to hold, then it’s okay to censor.” If it isn’t true, and an informed reader wants to see it, then one of the two conditions failed to hold, and therefore it’s OK to censor.
Oops, you’re right—one more condition is required. The condition I gave is only sufficient to show that it fails to fall into a protected class, not that it falls in the class of things that should be censored; there are things which fall in neither class (which aren’t normally censored because that requires someone with a motive to censor it, which usually puts it into one of the protected classes). To make it worthy of censorship, there must additionally be a reason outside the list of excluded reasons to censor it.
I just have trouble understanding what you are saying. That might very well be my fault. I do not intend any hostile attack against you or the SIAI. I'm just curious, not worried at all. I do not demand anything. I'd like to learn more about you people, what you believe, and how you arrived at your beliefs.
There is this particular case of the forbidden topic, and I am throwing everything I've got at it to see if the beliefs about it are consistent and hold water. That doesn't mean that I am against censorship or that I believe it is wrong. I believe it is right but too unlikely (...). I believe that Yudkowsky and the SIAI are probably honest (although my gut feeling is to be very skeptical), but that there are good arguments for more transparency regarding the SIAI (if you believe it is as important as it is portrayed to be). I believe that Yudkowsky is wrong about his risk estimate regarding the idea.
I just don't understand your criticism of my past comments, which included telling me something about how I use politics (I don't get it) and that I should accept that censorship is sometimes necessary (which I haven't argued against).
There is this particular case of the forbidden topic, and I am throwing everything I've got at it to see if the beliefs about it are consistent and hold water.
The problem with that is that Eliezer and those who agree with him, including me, cannot speak freely about our reasoning on the issue, because we don’t want to spread the idea, so we don’t want to describe it and point to details about it as we describe our reasoning. If you imagine yourself in our position, believing the idea is dangerous, you could tell that you wouldn’t want to spread the idea in the process of explaining its danger either.
Under more normal circumstances, where the ideas we disagree about are not thought by anyone to be dangerous, we can have effective discussion by laying out our true reasons for our beliefs, and considering counter arguments that refer to the details of our arguments. Being cut off from our normal effective methods of discussion is stressful, at least for me.
I have been trying to persuade people who don’t know the details of the idea or don’t agree that it is dangerous that we do in fact have good reasons for believing it to be dangerous, or at least that this is likely enough that they should let it go. This is a slow process, as I think of ways to express my thoughts without revealing details of the dangerous idea, or explaining them to people who know but don’t understand those details. And this ends up involving talking to people who, because they don’t think the idea is dangerous and don’t take it seriously, express themselves faster and less carefully, and who have conflicting goals like learning or spreading the idea, or opposing censorship in general, or having judged for themselves the merits of censorship (from others just like them) in this case. This is also stressful.
I engage in this stressful topic, because I think it is important, both that people do not get hurt from learning about this idea, and that SIAI/Eliezer do not get dragged through mud for doing the right thing.
Sorry, but I am not here to help you get the full understanding you need to judge whether the beliefs are consistent and hold water. As I have been saying, this is not a normal discussion. And seriously, you would be better off dropping it and finding something else to worry about. And if you think it is important, you can remember to track whether SIAI/Eliezer/supporters like me engage in a pattern of making excuses to ban certain topics to protect some hidden agenda. But then please also remember all the critical discussions that don't get banned.
I have been trying to persuade people who don’t know the details of the idea or don’t agree that it is dangerous that we do in fact have good reasons for believing it to be dangerous, or at least that this is likely enough that they should let it go. This is a slow process, as I think of ways to express my thoughts without revealing details of the dangerous idea, or explaining them to people who know but don’t understand those details.
Note that this shouldn’t be possible other than through arguments from authority.
(I’ve just now formed a better intuitive picture of the reasons for the danger of the idea, and saw that some of the comments previously made were unnecessarily revealing: the additional detail didn’t actually serve the purpose of convincing the people I was communicating with, who lacked some of the prerequisites for using that detail to understand the argument for danger, but who would potentially gain a (better) understanding of the idea itself. It does still sound silly to me, but maybe the lack of inferential stability of this conclusion should actually feel this way. I expect that the idea will stop being dangerous in the coming decades due to better understanding of decision theory.)
Yes, the idea really is dangerous.
And for those who understand the idea, but not why it is wrong, nor the explanation of why it is wrong?
The comment that I am replying to is often far more salient than things you have said in the past that I may or may not have observed.
You are just going to piss off the management.
IMO, it isn’t that interesting.
Yudkowsky apparently agrees that squashing it was handled badly.
Anyway, now that Roko is out of his self-imposed exile, I figure it is about time to let it drop.