I think you/we’re fine—just alternate between two tabs when replying, and paste it to RationalWiki if it gets deleted.
Don’t let EY chill your free speech—this is supposed to be a community blog devoted to rationality… not a SIAI blog where comments are deleted whenever convenient.
Besides, it’s looking like after the Roko thing they’ve decided to cut back on such silliness.
You are compartmentalizing. What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech. That the decision conflicts with freedom of speech doesn’t necessarily mean that it’s incorrect, and if the correct decision conflicts with freedom of speech, or has you kill a thousand children (estimation of its correctness must of course take this consequence into account), it’s still correct and should be taken.
(There is only one proper criterion for anyone’s actions, goodness of consequences, and if any normally useful heuristic stands in the way, it has to be put down, not because one is opposed to that heuristic, but because in the given situation it doesn’t yield the correct decision.)
(This is a note about a problem in your argument, not an argument for correctness of EY’s decision. My argument for correctness of EY’s decision is here and here.)
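To make the consequentialist criterion concrete, here is a minimal sketch in Python with toy probabilities and utilities invented purely for illustration (they are not anyone’s actual estimates): the “correct” action is simply the one with the better expected consequences, even when it conflicts with a normally useful heuristic like free speech.

```python
# Toy illustration (made-up numbers, not anyone's actual estimates): judging a
# decision by expected consequences rather than by whether it violates a
# normally useful heuristic such as "never censor".

actions = {
    # action: list of (probability, utility) pairs over possible outcomes
    "leave the post up": [(0.9, 0.0), (0.1, -100.0)],  # small chance of a large harm
    "delete the post":   [(1.0, -1.0)],                # certain small harm (trust, PR)
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in actions.items():
    print(f"{name}: EU = {expected_utility(outcomes):+.1f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("Better by expected consequences:", best)
```

With these invented numbers the heuristic-violating action wins; with different numbers it wouldn’t, which is exactly why the argument turns on the estimates rather than on the heuristic.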
This is possible but by no means assured. It is also possible that he simply didn’t choose to write a full evaluation of consequences in this particular comment.
Upvoted. This just helped me get unstuck on a problem I’ve been procrastinating on.
Sounds like a good argument for the WikiLeaks dilemma (which is of course confused by the possibility that the government is lying their asses off about potential harm).
The question with WikiLeaks is about long-term consequences. As I understand it, the (sane) arguments in favor can be summarized as stating that expected long-term good outweighs expected short-term harm. It’s difficult (for me) to estimate whether it’s so.
I suspect it’s also difficult for Julian (or pretty much anybody) to estimate these things; I guess intelligent people will just have to make best guesses about this type of stuff. In this specific case a rationalist would be very cautious of “having an agenda”, as there is significant opportunity to do harm either way.
Very much agree btw
Shouldn’t AI researchers precommit to not building AI capable of this kind of acausal self-creation? That would lower the chances of disaster both causally and acausally.
And please, define how you tell moral heuristics and moral values apart. E.g. which is “don’t change the moral values of humans by wireheading”?
We’re basically talking about a logical illusion… an AI Ontological Argument… with all the flaws of an ontological argument (such as bearing no proof)… that was foolishly censored, leading to a lot of bad press, hurt feelings, lost donations, and a general increase in existential risk.
From, as you call it, a purely correctness-optimizing perspective, it’s long-term bad to have silly, irrational stuff like this associated with LW. I think that EY should apologize, and we should get an explicit moderation policy for LW, but in the meantime I’ll just undo any existential risk savings hoped to be gained from censorship.
In other words, this is less about Free Speech than about Dumb Censors :p
Whether it’s irrational is one of the questions we are discussing in this thread, so it’s bad conduct to use your answer as an element of an argument. I of course agree that it appears silly and irrational and absurd, and that associating that with LW and SIAI is in itself a bad idea, but I don’t believe it’s actually irrational, and I don’t believe you’ve seriously considered that question.
In other words, you don’t understand the argument, and are not moved by it, and so your estimate of the improbability of the outrageous prediction stays the same. The only proper way to argue past this point is to discuss the subject matter; all else would be sophistry that applies equally to the predictions of astrology.
Following is another analysis.
Consider a die that was tossed 20 times, and each time it fell even side up. It isn’t surprising merely because it’s a low-probability event: you wouldn’t be surprised by most other combinations that are equally improbable under the hypothesis that the die is fair. You are surprised because a pattern you see suggests that there is an explanation for your observations that you’ve missed. You notice your own confusion.
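(A small arithmetic aside, not part of the original comment: under the fair-die hypothesis any fully specified sequence of 20 rolls is astronomically improbable, so improbability alone can’t be what generates the surprise.)

```python
from fractions import Fraction

# Any *specific* sequence of 20 fair-die rolls is equally improbable,
# so low probability alone isn't what makes "all even" surprising.
p_one_specific_sequence = Fraction(1, 6) ** 20   # e.g. 4, 2, 6, 1, 3, 5, ...
p_all_twenty_even       = Fraction(1, 2) ** 20   # every roll lands in {2, 4, 6}

print(float(p_one_specific_sequence))   # ~2.7e-16 -- yet such sequences don't surprise us
print(float(p_all_twenty_even))         # ~9.5e-7  -- this one does, because it suggests a pattern
```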
In this case, you look at the event of censoring a post (topic), and you’re surprised: you don’t understand why that happened. And then your brain pattern-matches all sorts of hypotheses that are not just improbable, but probably meaningless cached phrases, like “It’s convenient”, or “To oppose freedom of speech”, or “To manifest dictatorial power”.
Instead of leaving the choice of a hypothesis to the stupid intuitive processes, you should notice your own confusion, and recognize that you don’t know the answer. Acknowledging that you don’t know the answer is better than suggesting an obviously incorrect theory, if much more probability is concentrated outside that theory, where you can’t suggest a hypothesis.
Since we’re playing the condescension game, following is another analysis:
You read a (well written) slogan, and assumed that the writer must be irrational. You didn’t read the thread he linked you to, you focused on your first impression and held to it.
I’m not. Seriously. “Whenever convenient” is a very weak theory, and thus using it is a more serious flaw, but I missed that on first reading and addressed a different problem.
Please unpack the references. I don’t understand.
Sorry, it looks like we’re suffering from a bit of cultural crosstalk. Slogans, much like ontological arguments, are designed to create something of an illusion in the mind—a lever to change the way you look at the world. “Whenever convenient” isn’t there as a statement of belief so much as a prod to get you thinking...
“How much do I trust that EY knows what he’s doing?”
You may as well argue with Nike: “Well, I can hardly do everything...” (re: Just Do It)
That said, I am a rationalist… I just don’t see any harm in communicating to the best of my ability.
I linked you to this thread, where I did display some biases, but also decent evidence for not having the ones you’re describing… which I take to be roughly what you’d expect of a smart person off the street.
I can’t place this argument at all in relation to the thread above it. Looks like a collection of unrelated notes to me. Honest. (I’m open to any restatement; don’t see what to add to the notes themselves as I understand them.)
The whole post you’re replying to comes from your request to “Please unpack the references”.
Here’s the bit with references, for easy reference:
You read a (well written) slogan, and assumed that the writer must be irrational. You didn’t read the thread he linked you to, you focused on your first impression and held to it.
The first part of the post you’re replying to (“Sorry, it looks… best of my ability”) maps to “You read a… irrational” in the quote above, and tries to explain the problem as I understand it: that you were responding to a slogan’s words, not its meaning. It explained its meaning. It explained how “Whenever convenient” was a pointer to the “Do I trust EY?” thought. It gave a backup example via the Nike slogan.
The last paragraph of the post you’re replying to tried to unpack the “you focused… held to it” from the above quote.
I see. So the “writer” in the quote is you. I didn’t address your statement per se, but rather a general disposition of the people who offer ridiculous things as explanations for the banning incident; your comment did, however, make the same impression on me. If you correctly disagree that it applies to your intended meaning, good, you didn’t make that error, and I don’t understand what did cause you to make that statement, but I’m not convinced by your explanation so far. You’d need to unpack “Distrusting EY” to make it clear that it doesn’t fall into the same category of ridiculous hypotheses.
The Nike slogan is “Just Do It”, if it helps.
Thanks. It doesn’t change the argument, but I’ll still delete that obnoxious paragraph.
I believe EY takes this issue very seriously.
Ahh. Are you aware of any other deletions?
Here...
I’d like to ask you the following. How would you, as an editor (moderator), handle dangerous information that is more harmful the more people know about it? Just imagine a detailed description of how to code an AGI or create bio-weapons. Would you stay away from censoring such information in favor of free speech?
The subject matter here has a somewhat different nature, one that rather fits a “more people, more probable” pattern. The question is whether it is better to discuss it so as to possibly resolve it, or to censor it and thereby impede it. The problem is that this very question cannot be discussed without deciding not to censor it. That doesn’t mean that people cannot work on it, but rather that only a few people can, in private. It is very likely that those who already know about it are the most likely to solve the issue anyway. The general public would probably only add noise and make it much more likely to happen simply by knowing about it.
Step 1. Write down the clearest non-dangerous articulation of the boundaries of the dangerous idea that I could.
If necessary, make this two articulations: one that is easy to understand (in the sense of answering “is what I’m about to say a problem?”) even if it’s way overinclusive, and one that is not too overinclusive even if it requires effort to understand. Think of this as a cheap test with lots of false positives, and a more expensive follow-up test.
Add to this the most compelling explanation I can come up with of why violating those boundaries is dangerous that doesn’t itself violate those boundaries.
Step 2. Create a secondary forum, not public-access (e.g., a dangerous-idea mailing list), for the discussion of the dangerous idea. Add all the people I think belong there. If that’s more than just me, run my boundary articulation(s) past the group and edit as appropriate.
Step 3. Create a mechanism whereby people can request to be added to dangerous-idea. (e.g., sending dangerous-idea-request).
Step 4. Publish the boundary articulations, a request that people avoid any posts or comments that violate those boundaries, an overview of what steps are being taken (if any) by those in the know, and a pointer to dangerous-idea-request for anyone who feels they really ought to be included in discussion of it (with no promise of actually adding them).
Step 5. In forums where I have editorial control, censor contributions that violate those boundaries, with a pointer to the published bit in step 4.
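As an aside, here is a minimal sketch of what Step 5 might look like if automated; the boundary patterns, notice text, and mailing-list name are hypothetical placeholders, not an actual policy.

```python
import re

# Hypothetical sketch of Step 5 (patterns, notice text, and list name are
# placeholders invented for illustration): in a forum I control, a
# contribution that crosses the published boundaries is replaced with a
# pointer to the policy post and to dangerous-idea-request.

BOUNDARY_PATTERNS = [
    re.compile(r"step-by-step synthesis", re.IGNORECASE),   # cheap, over-inclusive test
    re.compile(r"working exploit code", re.IGNORECASE),
]

POLICY_NOTICE = (
    "[Removed by moderator: this appeared to cross the boundaries described in "
    "the published policy post. If you think you should be part of the private "
    "discussion, write to dangerous-idea-request.]"
)

def moderate(comment_text: str) -> str:
    """Return the comment unchanged, or the policy notice if it crosses a boundary."""
    if any(p.search(comment_text) for p in BOUNDARY_PATTERNS):
        return POLICY_NOTICE
    return comment_text

print(moderate("Discussing whether the policy itself is a good idea."))
print(moderate("Here is working exploit code for the flaw..."))
```

The point of replacing a removed comment with a pointer to the published boundaries and to dangerous-idea-request is that the censorship itself stays legible, per Step 4, rather than looking arbitrary.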
==
That said, if it genuinely is the sort of thing where a suppression strategy can work, I would also breathe a huge sigh of relief for having dodged a bullet, because in most cases it just doesn’t.
A real-life example that people might accept the danger of would be the 2008 DNS flaw discovered by Dan Kaminsky—he discovered something really scary for the Internet and promptly assembled a DNS Cabal to handle it.
And, of course, it leaked before a fix was in place. But the delay did, they think, mitigate damage.
Note that the solution had to be in place very quickly indeed, because Kaminsky assumed that if he could find it, others could. Always assume you aren’t the only person in the whole world smart enough to find the flaw.
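(A rough back-of-the-envelope sketch of why the flaw was urgent, relying on the widely reported mechanics of the Kaminsky attack rather than anything stated in this thread: a spoofed reply only had to match the 16-bit transaction ID until resolvers also randomized their source ports.)

```python
# Rough arithmetic aside (based on the widely reported details of the 2008
# flaw): an off-path attacker spoofing a DNS reply has to guess every field
# the resolver checks before the real answer arrives.

txid_space = 2 ** 16   # 16-bit DNS transaction ID
port_space = 2 ** 16   # roughly 16 bits of entropy once source ports are randomized

guesses_before_fix = txid_space               # predictable source port: ~65,536 tries
guesses_after_fix  = txid_space * port_space  # randomized port: ~4.3 billion tries

print(f"before source-port randomization: ~{guesses_before_fix:,} guesses")
print(f"after  source-port randomization: ~{guesses_after_fix:,} guesses")
```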
Yes, several times other posters have brought up the subject and had their comments deleted.
I hadn’t seen a lot of stubs of deleted comments around before the recent episode, but you say people’s comments had gotten deleted several times.
So, have you seen comments being deleted in a special way that doesn’t leave a stub?
Comments only leave a stub if they have replies that aren’t deleted.
Interesting. Do you have links? I rather publicly vowed to undo any assumed existential risk savings EY thought were to be had via censorship.
That one stayed up, and although I haven’t been the most vigilant in checking for deletions, I had (perhaps naively) assumed they stopped after that :-/