(Or, in serious terms: I’m being asked to believe not just in a threat, but also that those who claim to protect us have some special immunity, either inherent or acquired; I see no evidence for either proposition.)
I’m participating in the “that which must not be mentioned” dance out of both respect and precaution, but honestly, it’s mostly just respect.
Very good question, but AFAIK Eliezer tries not to think the dangerous thought, too.
Seconded.
I don’t think there was ever any good evidence that the thought was dangerous.
At the time I argued that youthful agents that might become powerful would be able to promise much to helpers and to threaten supporters of their competitors—if they were so inclined. They would still be able to do that whether people think the forbidden thought or not. All that is needed is for people not to be able to block out such messages. That seems reasonable—if the message needs to get out it can be put into TV adverts and billboards—and then few will escape exposure.
In which case, the thought seems to be more forbidden than dangerous.
If there was any such evidence, it would be in the form of additional details, and sharing it with someone would be worse than punching them in the face. So don’t take the lack of publicly disclosed evidence as an indication that no evidence exists, because it isn’t.
It actually is, in the sense we use the term here.
Exactly. One must be careful to distinguish between “this is not evidence” and “accounting for this evidence should not leave you with a high posterior”.
I think we already had most of the details, many of them in BOLD CAPS for good measure.
But there is the issue of probabilities—of how much it is likely to matter. FWIW, I do not fear thinking the forbidden thought. Indeed, it seems reasonable to expect that people will think similar thoughts more in the future—and that those thoughts will motivate people to act.
No, you haven’t. The worst of it has never appeared in public, deleted or otherwise.
Fine. The thought is evidently forbidden, but merely alleged dangerous.
I see no good reason to call it “dangerous”—in the absence of publicly verifiable evidence on the issue—unless the aim is to scare people without the inconvenience of having to back up the story with evidence.
If one backed it up with how exactly it was dangerous, people would be exposed to the danger.
The hypothetical danger. The alleged danger. Note that it was alleged dangerous by someone whose living apparently depends on scaring people about machine intelligence. So: now we have the danger-that-is-too-awful-to-even-think-about. And where is the evidence that it is actually dangerous? Oh yes: that was all deleted—to save people from the danger!
Faced with this, it is pretty hard not to be sceptical.
I really don’t have a handle on the situation, but the censored material has allegedly caused serious and lasting psychological stress to at least one person, and could easily be interpreted as an attempt to get gullible people to donate more to SIAI. I don’t see any way out for an administrator of human-level intelligence.
AFAICT, the stresses seem to be largely confined to those in the close orbit of the Singularity Institute. Eliezer once said: “Beware lest Friendliness eat your soul”. So: perhaps the associated pathology could be christened Singularity Fever—or something.
I don’t donate to SIAI on a regular basis, but when I have donated, it wasn’t out of being scared of UFAI; I think more about aging and death. So I’m assuming that UFAI is not why most people donate. Also, this incident seems like a net loss for PR, so treating it as a strategy for more donations doesn’t really make sense. As for the evidence: what you’d expect to see in a universe where the idea was dangerous is exactly that the evidence gets deleted.
(Going somewhere, will be back in a couple of hours)
I have little doubt that some smart people honestly believe that it’s dangerous. The deletions are sufficient evidence of that belief for me. The belief, however, is not sufficient evidence for me of the actual danger, given that I see such danger as implausible on the face of it.
In other words, sure, it gets deleted in the world where it’s dangerous, as in the world where people falsely believe it is. Any good Bayesian should consider both possibilities. I happen to think that the latter is more probable.
However, of course I grant that there is some possibility that I’m wrong, so I assign some weight to this alleged danger. The important point is that that is not enough, because the value of free expression and debate weighs on the other side.
Even if I grant “full” weight to the alleged danger, I’m not sure it beats free expression. There are a lot of dangerous ideas—for example, dispensationalist Christianity—and, while I’d probably be willing to suppress them if I had the power to do so cleanly, I think any real-world efforts of mine to do so would be a net negative: I’d harm free debate and lower my own credibility while failing to suppress the idea. Since the forbidden idea, insofar as I know what it is, seems far more likely to occur independently to various people than something like dispensationalism, while the idea of suppressing it is less likely to occur independently than in that case, the argument against suppression is even stronger here.
Well, I figure that if people who have proven rational in the past see something as potentially dangerous, that’s not proof, but it lends the claim more weight. Basically, the idea that there is something dangerous there should be taken seriously.
Hmm, what I meant was that it being deleted isn’t evidence of foul play, since it’d happen in both instances.
I don’t see any arguments against except for surface implausibility?
Free expression doesn’t trump everything. For example, in the Riddle Theory story, the spread of the riddle would be a bad idea. It might occur to people independently, but they might not take it seriously, and at least the spread will be lessened.
I’m not sure deleting it turned out for the better, because people only wanted to know more after its deletion. But who knows.
I have several reasons, not just surface implausibility, for believing what I do. There’s little point in further discussion until the ground rules are cleared up.
Okay.
Riddle theory is fiction.
In real life, humans are not truth-proving machines. If confronted with their Gödel sentences, they will just shrug—and say “you expect me to do what?”
Fiction isn’t evidence. If anything, it shows that there is so little real evidence of ideas harmful enough to deserve censorship that people have to make things up in order to prove the point.
There are PR upsides: the shepherd protects his flock from the unspeakable danger; it makes for good drama and folklore; there’s opportunity for further drama caused by leaks. Also, it shows everyone who’s the boss.
A popular motto claims that there is no such thing as bad publicity.
Firstly, if there’s an unspeakable danger, surely it’d be best to try not to let others be exposed, so this one’s really a question of whether it’s dangerous, and not an argument in itself. It’s only a PR stunt if it’s not dangerous; if it’s dangerous, good PR would merely be a side effect.
The drama was bad IMO. Looks like bad publicity to me.
I discredit the PR stunt idea because I don’t think SIAI would’ve been dumb enough to pull something like this as a stunt. If we were being modeled as people who’d simply go along with a lie, well, there’s no way we’d be modeled as such fools. If we were modeled as people who would look at a lie carefully, a PR stunt wouldn’t work anyway.
There’s also the fact that people who have read the post and are unaffiliated with the SIAI are taking it seriously. That says something, too.
Well, many are only taking it seriously under pain of censorship.
I dunno, I’d call that putting up with it.
Edit: Why do I keep getting downvoted? This comment wasn’t meant sarcastically, though it might’ve been worded carelessly. I’m also confused about the other two in this thread that got downvoted. Not blaming you, wnoise.
Edit2: Back to zeroes. Huh.
I only just read your comments and my votes seem to bring you up to 1.
Well, it doesn’t really matter what the people involved were thinking, the issue is whether all the associated drama eventually has a net positive or negative effect. It evidently drives some people away—but may increase engagement and interest among those who remain. I can see how it contributes to the site’s mythology and mystique—even if to me it looks more like a car crash that I can’t help looking at.
It may not be over yet—we may see more drama around the forbidden topic in the future—with the possibility of leaks, and further transgressions. After all, if this is really such a terrible risk, shouldn’t other people be aware of it—so they can avoid thinking about it for themselves?
Not quite. It’s a question of what the probability is that it’s dangerous, what the magnitude of the effect is if so, what the cost (including goodwill and credibility) of suppressing it is, and what the cost (including psychological harm to third parties) of not suppressing it is. To make a proper judgement, you must determine all four of these separately and perform the expected-utility computation: probability × effect-if-dangerous + effect-if-not-dangerous, weighed against the cost of suppression. A sufficiently large magnitude of effect is sufficient to outweigh both a small probability and a large cost.
That’s the problem here. Some people see a small probability, round it off to 0, and see that the effect-if-not-dangerous isn’t huge, and conclude that it’s ok to talk about it, without computing the expected utility.
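(For concreteness, here is a minimal sketch of that computation in Python. Every number below is an invented placeholder for illustration, not anyone’s actual estimate; only the structure of the trade-off is the point.)

```python
# Minimal sketch of the expected-utility comparison described above.
# All values are assumed placeholders; only the structure matters.

p_dangerous = 0.01            # assumed probability the idea is actually dangerous
effect_if_dangerous = -1e6    # assumed (dis)utility if it really is dangerous
effect_if_not = -1.0          # assumed minor cost of discussion if it is harmless
cost_of_suppression = -100.0  # assumed goodwill/credibility cost of censoring

# Expected utility of allowing open discussion:
eu_discuss = p_dangerous * effect_if_dangerous + (1 - p_dangerous) * effect_if_not

# Expected utility of suppressing the topic:
eu_suppress = cost_of_suppression

print("discuss:", eu_discuss)    # -10000.99
print("suppress:", eu_suppress)  # -100.0
# A sufficiently large |effect_if_dangerous| dominates both the small
# probability and the suppression cost.
```

With these made-up numbers suppression wins; shrink the probability or the harm term far enough and it flips, which is exactly what this thread is arguing about.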
I tell you that I have done the computation, and that the utilities of hearing, discussing, and allowing discussion of the banned topic are all negative. Furthermore, they are negative by enough orders of magnitude that I believe anyone who concludes otherwise must either be missing a piece of information vital to the computation, or have made an error in their reasoning. They remain negative even if one of the probability or the effect-if-not-dangerous is set to zero. Both missing information and miscalculation are especially likely—the former because information is not readily shared on this topic, and the latter because it is inherently confusing.
You also have to calculate what the effectiveness of your suppression is. If that effectiveness is negative, as is plausibly the case with ham-handed tactics, the rest of the calculation is moot.
Also, I believe I have information about the supposed threat. I think that there are several flaws in the supposed mechanisms, but that even if all the effects work as advertised, there is a factor which you’re not considering which makes 0 the only stable value for the effect-if-dangerous in current conditions.
I agree with you about the effect-if-not-dangerous. This is a good argument, and should be your main one, because you can largely make it without touching the third rail. That would allow an explicit, rather than a secret, policy, which would reduce the costs of suppression considerably.
Tiny probabilities of vast utilities again?
Some of us are okay with rejecting Pascal’s Mugging by using heuristics and injunctions, even though the expected utility calculation contradicts our choice. Why not reject the basilisk in the same way?
For what it’s worth, over the last few weeks I’ve slowly updated to considering the ban a Very Bad Thing. One of the reasons: the CEV document hasn’t changed (or even been marked dubious/obsolete), though it really should have.
Your sum doesn’t seem like useful evidence. You can’t cite your sources, because that information is self-censored. Since you can’t support your argument, I am not sure why you are bothering to post it. People are supposed to think your conclusions are true—because Jim said so? Pah! Support your assertions, or drop them.
It’s not a special immunity, it’s a special vulnerability which some people have. For most people, reading the forbidden topic would be safe. Unfortunately, most of those people don’t take the matter seriously enough, so allowing them to read it is not safe for others.
EDIT: Removed first paragraph since it might have served as a minor clue.
Interesting.
Well, if that’s the case, I can state with high confidence that I am not vulnerable to the forbidden idea. I don’t believe it, and even if I saw something that would rationally convince me, I am too much of a constitutional optimist to let that kind of danger get me.
So, what’s the secret knock so people will tell me the secret? I promise I can keep a secret, and I know I can keep a promise. In fact, the past shows that I am more likely to draw attention to the idea accidentally, in ignorance, than deliberately.
(Of course, I would have to know a little more about the extent of my promise before I’d consider it binding. But I believe I’d make such a promise, once I knew more about its bounds.)
Your comment gave me a funny idea: what if the forbidden meme also says “you must spread the forbidden meme”? I wonder how PeerInfinity, Roko and others would react to this.