Hi.
I am not actually a lurker—I currently have 13 karma—but I am not a heavy participator. However, now I would like to get to 20 karma so I can make a post on why MWI makes acausal incentives into minor considerations. I would also be gratified if someone told me how to make my draft of this post linkable, even if it does not show up within “new”.
I think that you should get some bonus towards the initial 20 karma for your average karma per post. This belief is clearly self-serving, but not necessarily thereby invalid. I believe my own average karma per post is decent but not outstanding.
I believe that the businesslike tone of this post, as a series of declarative statements, will be seen as excessive subservience to the imagined norms of a community of rationalists, and thus net me less status and karma than a chattier post. I am honestly unsure if the simple self-referential gambit of this paragraph will help or hurt this situation.
I posted a diary, and it was banned for containing a dangerous idea. I can understand that certain ideas are dangerous; in fact, in the discussion I started, I consciously refrained from expressing several sub-points for that reason, starting with my initial post. But I think that if there’s such a policy, it should be explicit, and there should be some form of appeal. If the very discussion of these issues shouldn’t happen in public, then there should be a private space to give whatever explanation can be given of why. A secret, unappealable rule which cannot even be discussed—this is not the path to rationalism, it’s the way down the rabbit hole.
What? Is this separate from the recent Banned Post? Is this a different idea?
It was a counterargument against the dangerous topic being dangerous, which by necessity touched the dangerous topic, and which wasn’t strong enough to justify this (anyone for whom the dangerous topic actually would be dangerous [rather than just causing nightmares] would almost by necessity already be aware of a stronger argument).
Interesting. Thanks, uprated; with the caveat that of course, we only have your word that the other argument is “stronger”.
Without further evidence, it’s my rationality plus consideration of the issue minus overconfidence against yours. You have an advantage on consideration, since you know both arguments while I only know that I know one; however, on the whole, I think it would be pathological for me to abandon my argument and belief just on that basis. As for the other aspects, we’re both probably smarter and less biased than average people, and I don’t see any argument to swing that.
In other words, I still think I’m right.
No posts on Riddle Theory.
Nor joke warfare
Nor pictures of birds.
Nor writing “Bloody Mary” in lipstick on mirrors?
Seriously, my post was about why that stuff is not scary. Fiction can be good allegory for reality, but those stories all use a lot of you-should-be-scared tricks, all very well and good for ghost stories, but not conducive to actual discussion.
We are swimming in a soup of sirens’ songs, every single day. Dangerous ideas don’t just exist, they abound. But I see no evidence of any dangerous ideas which are not best fought with some measure of banality, among other tactics. The trappings of Avert Your Eyes For That Way Lies Doom seem to be one of the best ways to enhance the danger of an idea.
In fact… what if Eliezer himself… no, that would be too horrible… oh my god, it’s full of stars. (Or, in serious terms: I’m being asked to believe not just in a threat, but also that those who claim to protect us have some special immunity, either inherent or acquired; I see no evidence for either proposition).
Gah, it’s incredibly annoying to try to talk about something without being too explicit. The more explicit I get in my head, the more ridiculous this whole charade seems to me. Of course I can find plenty of rational arguments to support that, but I also trust the feeling. I’m participating in the “that which must not be mentioned” dance out of both respect and precaution, but honestly, it’s mostly just respect. You’re smart people and high status in this arena, and I probably shouldn’t laugh at your bugaboos.
Just to point out some irony—I’m participating in the “that which must not be mentioned” dance out of lost respect. I no longer believe Eliezer is able to consider such questions rationally. Anyone who wants to have a useful discussion on the subject must find a place outside of Eliezer’s influence to do it. For much the same reason I don’t try to discuss the details of biology in church.
FWIW, it seems pretty ridiculous to me too. It might be funny—were it not so negative.
Plus, if you don’t do the dance just right, your comments get deleted by the moderator.
So apparently either “that which can be destroyed by the truth should be” is false, or you’ve written dangerous falsehoods which would overtax the rationality of our readers. Eliezer’s response above seems to imply the former.
Did you read the “riddle theory” link? The riddle is not dangerous because it’s false, but because it’s incomprehensible.
And of course, if you meant to list all the possibilities, you left out the ones where E. is just wrong about the danger.
My comparison at the time was to The Ring.
Very good question, but AFAIK Eliezer tries to not think the dangerous thought, too.
Seconded.
I don’t think there was ever any good evidence that the thought was dangerous.
At the time I argued that youthful agents that might become powerful would be able to promise much to helpers and to threaten supporters of their competitors—if they were so inclined. They would still be able to do that whether people think the forbidden thought or not. All that is needed is for people not to be able to block out such messages. That seems reasonable—if the message needs to get out it can be put into TV adverts and billboards—and then few will escape exposure.
In which case, the thought seems to be more forbidden than dangerous.
If there was any such evidence, it would be in the form of additional details, and sharing it with someone would be worse than punching them in the face. So don’t take the lack of publicly disclosed evidence as an indication that no evidence exists, because it isn’t.
It actually is, in the sense we use the term here.
Exactly. One must be careful to distinguish between “this is not evidence” and “accounting for this evidence should not leave you with a high posterior”.
I think we already had most of the details, many of them in BOLD CAPS for good measure.
But there is the issue of probabilities—of how much it is likely to matter. FWIW, I do not fear thinking the forbidden thought. Indeed, it seems reasonable to expect that people will think similar thoughts more in the future—and that those thoughts will motivate people to act.
No, you haven’t. The worst of it has never appeared in public, deleted or otherwise.
Fine. The thought is evidently forbidden, but merely alleged dangerous.
I see no good reason to call it “dangerous”—in the absence of publicly verifiable evidence on the issue—unless the aim is to scare people without the inconvenience of having to back up the story with evidence.
If one backed it up with how exactly it was dangerous, people would be exposed to the danger.
The hypothetical danger. The alleged danger. Note that it was alleged dangerous by someone whose living apparently depends on scaring people about machine intelligence. So: now we have the danger-that-is-too-awful-to-even-think-about. And where is the evidence that it is actually dangerous? Oh yes: that was all deleted—to save people from the danger!
Faced with this, it is pretty hard not to be sceptical.
I really don’t have a handle on the situation, but the censored material has allegedly caused serious and lasting psychological stress to at least one person, and could easily be interpreted as an attempt to get gullible people to donate more to SIAI. I don’t see any way out for an administrator of human-level intelligence.
AFAICT, the stresses seem to be largely confined to those in the close orbit of the Singularity Institute. Eliezer once said: “Beware lest Friendliness eat your soul”. So: perhaps the associated pathology could be christened Singularity Fever—or something.
I don’t donate to SIAI on a regular basis, but when I have donated it wasn’t out of fear of UFAI; I think more about aging and death. So I’m assuming that UFAI is not why most people donate. Also, this incident seems like a net loss for PR, so it being a strategy for more donations doesn’t really make sense. As for the evidence: deletion is exactly what you’d expect to see in a universe where it was dangerous.
(Going somewhere, will be back in a couple of hours)
I have little doubt that some smart people honestly believe that it’s dangerous. The deletions are sufficient evidence of that belief for me. The belief, however, is not sufficient evidence for me of the actual danger, given that I see such danger as implausible on the face of it.
In other words, sure, it gets deleted in the world where it’s dangerous, as in the world where people falsely believe it is. Any good Bayesian should consider both possibilities. I happen to think that the latter is more probable.
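To put rough numbers on the point, a toy odds calculation; every figure below is invented for illustration, nobody’s actual estimate:

```python
# Toy Bayes: if deletion is nearly as likely when the idea is merely
# believed dangerous as when it is truly dangerous, then observing the
# deletion barely shifts the odds. All numbers are invented.
prior_odds = 0.1 / 0.9           # P(dangerous) : P(false belief), assumed
likelihood_ratio = 0.95 / 0.90   # P(deleted | dangerous) / P(deleted | false belief)
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)            # ~0.117, barely above the prior odds of ~0.111
```

Deletion is close to certain in both worlds, so observing it tells us almost nothing about which world we are in; the prior does nearly all the work.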
However, of course I grant that there is some possibility that I’m wrong, so I assign some weight to this alleged danger. The important point is that that is not enough, because the value of free expression and debate weighs on the other side.
Even if I grant “full” weight to the alleged danger, I’m not sure it beats free expression. There are a lot of dangerous ideas—for example, dispensationalist Christianity—and, while I’d probably be willing to suppress them if I had the power to do so cleanly, I think any real-world efforts of mine to do so would be a net negative: I’d harm free debate and lower my own credibility while failing to suppress the idea. The forbidden idea, insofar as I know what it is, seems far more likely than dispensationalism to occur to people independently, while the idea of suppressing it seems less likely to do so; so the argument is even stronger in this case.
Well, I figure that if people who have proven rational in the past see something potentially dangerous, that’s not proof, but it lends the danger more weight. Basically, the idea that there is something dangerous there should be taken seriously.
Hmm, what I meant was that it being deleted isn’t evidence of foul play, since it’d happen in both instances.
I don’t see any arguments against except for surface implausibility?
Free expression doesn’t trump everything. For example, in the Riddle Theory story, the spread of the riddle would be a bad idea. It might occur to people independently, but they might not take it seriously, and at least the spread will be lessened.
I’m not sure deleting it turned out for the better, because people only wanted to know more after its deletion. But who knows.
I have several reasons, not just surface implausibility, for believing what I do. There’s little point in further discussion until the ground rules are cleared up.
Okay.
Riddle theory is fiction.
In real life, humans are not truth-proving machines. If confronted with their Gödel sentences, they will just shrug—and say “you expect me to do what?”
Fiction isn’t evidence. If anything it shows that there is so little real evidence of ideas so harmful that they deserve censorship, that people have to make things up in order to prove their point.
There are PR upsides: the shepherd protects his flock from the unspeakable danger; it makes for good drama and folklore; there’s opportunity for further drama caused by leaks. Also, it shows everyone who’s the boss.
A popular motto claims that there is no such thing as bad publicity.
Firstly, if there’s an unspeakable danger, surely it’d be best to try not to let others be exposed, so this one’s really a question of whether it’s dangerous, and not an argument in itself. It’s only a PR stunt if it’s not dangerous; if it’s dangerous, good PR would merely be a side effect.
The drama was bad IMO. Looks like bad publicity to me.
I discredit the PR stunt idea because I don’t think SIAI would’ve been dumb enough to pull something like this as a stunt. If we were being modeled as ones who’d simply go along with a lie, well, there’s no way we’d be modeled as such fools. And if we were modeled as ones who would look at a lie carefully, a PR stunt wouldn’t work anyway.
There’s also the fact that people who have read the post and are unaffiliated with the SIAI are taking it seriously. That says something, too.
Well, many are only taking it seriously under pain of censorship.
I dunno, I’d call that putting up with it.
Edit: Why do I keep getting downvoted? This comment wasn’t meant sarcastically, though it might’ve been worded carelessly. I’m also confused about the other two in this thread that got downvoted. Not blaming you, wnoise.
Edit2: Back to zeroes. Huh.
I only just read your comments and my votes seem to bring you up to 1.
Well, it doesn’t really matter what the people involved were thinking, the issue is whether all the associated drama eventually has a net positive or negative effect. It evidently drives some people away—but may increase engagement and interest among those who remain. I can see how it contributes to the site’s mythology and mystique—even if to me it looks more like a car crash that I can’t help looking at.
It may not be over yet—we may see more drama around the forbidden topic in the future—with the possibility of leaks, and further transgressions. After all, if this is really such a terrible risk, shouldn’t other people be aware of it—so they can avoid thinking about it for themselves?
Not quite. It’s a question of what the probability that it’s dangerous is, what the magnitude of the effect is if so, what the cost (including goodwill and credibility) of suppressing it is, and what the cost (including psychological harm to third parties) of not suppressing it is. To make a proper judgement, you must determine all four of these, separately, and perform the expected utility computation (probability × effect-if-dangerous + effect-if-not-dangerous vs. cost). A sufficiently large magnitude of effect is sufficient to outweigh both a small probability and a large cost.
That’s the problem here. Some people see a small probability, round it off to 0, and see that the effect-if-not-dangerous isn’t huge, and conclude that it’s ok to talk about it, without computing the expected utility.
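A toy sketch of that computation, with placeholder numbers that are purely illustrative (not my estimates, and certainly not anyone else’s):

```python
# Toy expected-utility comparison: suppress vs. allow discussion.
# Every value here is an invented assumption.
p_dangerous = 0.01            # probability the idea is actually dangerous
effect_if_dangerous = -1e6    # magnitude of harm if it is
effect_if_not = 100.0         # value of open discussion if it is not
cost_of_suppression = -500.0  # goodwill/credibility cost of suppressing

eu_allow = p_dangerous * effect_if_dangerous + (1 - p_dangerous) * effect_if_not
eu_suppress = cost_of_suppression
print(eu_allow, eu_suppress)  # -9901.0 vs -500.0: suppression wins here,
                              # but at p_dangerous ~ 1e-6 allowing wins instead.
```

Rounding the small probability off to 0 deletes exactly the term that can dominate the answer.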
I tell you that I have done the computation, and that the utilities of hearing, discussing, and allowing discussion of the banned topic are all negative. Furthermore, they are negative by enough orders of magnitude that I believe anyone who concludes otherwise must either be missing a piece of information vital to the computation, or have made an error in their reasoning. They remain negative even if one of the probability or the effect-if-not-dangerous is set to zero. Both missing information and miscalculation are especially likely—the former because information is not readily shared on this topic, and the latter because it is inherently confusing.
You also have to calculate what the effectiveness of your suppression is. If that effectiveness is negative, as is plausibly the case with ham-handed tactics, the rest of the calculation is moot.
Also, I believe I have information about the supposed threat. I think that there are several flaws in the supposed mechanisms, but that even if all the effects work as advertised, there is a factor which you’re not considering which makes 0 the only stable value for the effect-if-dangerous in current conditions.
I agree with you about the effect-if-not-dangerous. This is a good argument, and should be your main one, because you can largely make it without touching the third rail. That would allow an explicit, rather than a secret, policy, which would reduce the costs of suppression considerably.
Tiny probabilities of vast utilities again?
Some of us are okay with rejecting Pascal’s Mugging by using heuristics and injunctions, even though the expected utility calculation contradicts our choice. Why not reject the basilisk in the same way?
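For comparison, the mugging arithmetic, again with invented numbers:

```python
# Toy Pascal's Mugging: the naive expected value stays positive however
# small the probability gets, so a pure EV-maximizer always pays up;
# the heuristic/injunction refuses anyway. Numbers are invented.
p_claim_true = 1e-20
promised_utility = 1e30
naive_ev = p_claim_true * promised_utility
print(naive_ev)  # 1e10 > 0, yet the injunction says: decline
```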
For what it’s worth, over the last few weeks I’ve slowly updated to considering the ban a Very Bad Thing. One of the reasons: the CEV document hasn’t changed (or even been marked dubious/obsolete), though it really should have.
Your sum doesn’t seem like useful evidence. You can’t cite your sources, because that information is self-censored. Since you can’t support your argument, I am not sure why you are bothering to post it. People are supposed to think your conclusions are true—because Jim said so? Pah! Support your assertions, or drop them.
It’s not a special immunity, it’s a special vulnerability which some people have. For most people, reading the forbidden topic would be safe. Unfortunately, most of those people don’t take the matter seriously enough, so allowing them to read it is not safe for others.
EDIT: Removed first paragraph since it might have served as a minor clue.
Interesting.
Well, if that’s the case, I can state with high confidence that I am not vulnerable to the forbidden idea. I don’t believe it, and even if I saw something that would rationally convince me, I am too much of a constitutional optimist to let that kind of danger get me.
So, what’s the secret knock so people will tell me the secret? I promise I can keep a secret, and I know I can keep a promise. In fact, the past shows that I am more likely to draw attention to the idea accidentally, in ignorance, than deliberately.
(Of course, I would have to know a little more about the extent of my promise before I’d consider it binding. But I believe I’d make such a promise, once I knew more about its bounds.)
Your comment gave me a funny idea: what if the forbidden meme also says “you must spread the forbidden meme”? I wonder how PeerInfinity, Roko and others would react to this.
If we’re going to keep acquiring more banned topics, there ought to be a list of them somewhere.
You just lost the game.
Response to this above. (attached to grandchild)