How hard is it to live off the dole in Finland? Also, non-academic research positions in think tanks and the like (including, of course, SIAI).
Not very hard in principle, but I gather it tends to be rather stressful, with things like payments not arriving on time every now and then. Also, I couldn't avoid the feeling of being a leech, justified or not.
Non-academic think tanks are a possibility, but for Singularity-related matters I can't think of any others besides the SIAI, and their resources are limited.
Many people would steal food to save the lives of the starving, and that's illegal.
Working within the national support system to increase the chance of saving everybody/everything? If you would do the first, you should probably do the second. But you need to weigh the plausibility of the get-rich-and-fund-institute option, including the positive contributions of the others you could potentially hire.
I wonder how far some people would go for the cause. For Kaj, clearly, leeching off an already wasteful state is too far.
I was once criticized by a senior singinst member for not being prepared to be tortured or raped for the cause. I mean not actually, but, you know, in theory. Precommitting to being prepared to make a sacrifice that big. shrugs
Forget this 'the cause' nonsense entirely. How far would you go just to avoid personally getting killed? How much torture would you accept per unit of chance that your personal contribution at the margin will prevent your near-term death?
Could we move this discussion somewhere where we don't have to constantly worry about it getting deleted?
I’m not aware that LW moderators have ever deleted content merely for being critical of or potentially bad PR for SIAI, and I don’t think they’re naive enough to believe deletion would help. (Roko’s infamous post was considered harmful for other reasons.)
“Harmful for other reasons” still has a chilling effect on free speech… and given that those reasons were vague but had something to do with torture, it’s not unreasonable to worry about deletion of replies to the above question.
The reasons weren’t vague.
Of course, this is just your assertion against mine, since we're not going to actually discuss the reasons here.
There doesn't seem to be anything censor-relevant in my question, and for my part I tend to let Big Brother worry about his own paranoia and just go about my business. In any case, while the question is an interesting one to me, it doesn't seem important enough to create a discussion somewhere else. At least not until I make a post. Putting aside presumptions of extreme altruism, just how much contribution to FAI development is rational? To what extent does said rational contribution rely on newcomblike reasoning? How much would a CDT agent contribute on the expectation that his personal contribution will make the difference and save his life?
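A rough way to make that last question concrete (my own illustration; the symbols V, p, and c are invented here, not taken from anything in the thread): suppose the agent values surviving at V dollars and believes a personal contribution of c raises the probability of FAI arriving in time by p(c). A plain CDT agent just does marginal expected-value accounting: measuring utility in dollars, it keeps contributing while V · p'(c) > 1, stops at the contribution level c* where V · p'(c*) = 1, and contributes nothing at all if V · p'(0) ≤ 1. Nothing newcomblike enters unless the agent thinks its choice is correlated with the choices of the other potential donors.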
On second thought, maybe the discussion does interest me sufficiently. If you are particularly interested in answering me, feel free to copy and paste my questions elsewhere and leave a back-link. ;)
I think you/we're fine—just alternate between two tabs when replying, and paste it to RationalWiki if it gets deleted.
Don’t let EY chill your free speech—this is supposed to be a community blog devoted to rationality… not a SIAI blog where comments are deleted whenever convenient.
Besides, it’s looking like after the Roko thing they’ve decided to cut back on such silliness.
You are compartmentalizing. What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech. That the decision conflicts with freedom of speech doesn’t necessarily mean that it’s incorrect, and if the correct decision conflicts with freedom of speech, or has you kill a thousand children (estimation of its correctness must of course take this consequence into account), it’s still correct and should be taken.
(There is only one proper criterion for anyone's actions, goodness of consequences, and if any normally useful heuristic stands in the way, it has to be put down, not because one is opposed to that heuristic, but because in a given situation it doesn't yield the correct decision.)
(This is a note about a problem in your argument, not an argument for correctness of EY’s decision. My argument for correctness of EY’s decision is here and here.)
This is possible but by no means assured. It is also possible that he simply didn’t choose to write a full evaluation of consequences in this particular comment.
Upvoted. This just helped me get unstuck on a problem I’ve been procrastinating on.
Sounds like a good argument for the WikiLeaks dilemma (which is of course confused by the possibility that the government is lying their asses off about potential harm).
The question with WikiLeaks is about long-term consequences. As I understand it, the (sane) arguments in favor can be summarized as stating that expected long-term good outweighs expected short-term harm. It’s difficult (for me) to estimate whether it’s so.
I suspect it’s also difficult for Julian (or pretty much anybody) to estimate these things; I guess intelligent people will just have to make best guesses about this type of stuff. In this specific case a rationalist would be very cautious of “having an agenda”, as there is significant opportunity to do harm either way.
Very much agree btw
Shouldn't AI researchers precommit to not building AI capable of this kind of acausal self-creation? This would lower the chances of disaster both causally and acausally.
And please, define how you tell moral heuristics and moral values apart. E.g., which is "don't change the moral values of humans by wireheading"?
We're basically talking about a logical illusion… an AI Ontological Argument… with all the flaws of an ontological argument (such as bearing no proof)… that was foolishly censored, leading to a lot of bad press, hurt feelings, lost donations, and a general increase in existential risk.
From, as you call it, a purely correctness-optimizing perspective, it's bad in the long term to have silly, irrational stuff like this associated with LW. I think that EY should apologize, and we should get an explicit moderation policy for LW, but in the meantime I'll just undo any existential-risk savings hoped to be gained from censorship.
In other words, this is less about Free Speech than it is about Dumb Censors :p
Whether it’s irrational is one of the questions we are discussing in this thread, so it’s bad conduct to use your answer as an element of an argument. I of course agree that it appears silly and irrational and absurd, and that associating that with LW and SIAI is in itself a bad idea, but I don’t believe it’s actually irrational, and I don’t believe you’ve seriously considered that question.
In other words, you don't understand the argument, and are not moved by it, and so your estimate of the improbability of the outrageous prediction stays the same. The only proper way to argue past this point is to discuss the subject matter; all else would be sophistry that applies equally to the predictions of astrology.
Following is another analysis.
Consider a die that was tossed 20 times and each time fell even side up. It's surprising, but not because it's a low-probability event: you wouldn't be surprised by most other combinations that are equally improbable under the hypothesis that the die is fair. You are surprised because a pattern you see suggests that there is an explanation for your observations that you've missed. You notice your own confusion.
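To put a rough number on that (my own illustration, for concreteness): under the fair-die hypothesis the observed run has probability (1/2)^20 ≈ 9.5 × 10^-7, but so does any other particular even/odd sequence, so the low probability by itself cannot be what surprises you. The surprise comes from the existence of a simple rival hypothesis, "this die only lands even," which assigns the same data probability 1, a likelihood ratio of 2^20 ≈ 10^6 in its favor; the data are pointing insistently at an explanation you have not yet identified.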
In this case, you look at the event of censoring a post (topic), and you're surprised: you don't understand why that happened. And then your brain pattern-matches all sorts of hypotheses that are not just improbable, but probably meaningless cached phrases, like "It's convenient", or "To oppose freedom of speech", or "To manifest dictatorial power".
Instead of leaving the choice of a hypothesis to stupid intuitive processes, you should notice your own confusion and recognize that you don't know the answer. Acknowledging that you don't know the answer is better than suggesting an obviously incorrect theory when much more of the probability is concentrated outside that theory, in the region where you can't yet suggest a hypothesis.
Since we’re playing the condescension game, following is another analysis:
You read a (well written) slogan, and assumed that the writer must be irrational. You didn’t read the thread he linked you to, you focused on your first impression and held to it.
I’m not. Seriously. “Whenever convenient” is a very weak theory, and thus using it is a more serious flaw, but I missed that on first reading and addressed a different problem.
Please unpack the references. I don’t understand.
Sorry, it looks like we're suffering from a bit of cultural crosstalk. Slogans, much like ontological arguments, are designed to create something of an illusion in the mind—a lever to change the way you look at the world. "Whenever convenient" isn't there as a statement of belief so much as a prod to get you thinking...
"How much do I trust that EY knows what he's doing?"
You may as well argue with Nike: “Well, I can hardly do everything...” (re: Just Do It)
That said, I am a rationalist… I just don't see any harm in communicating to the best of my ability.
I linked you to this thread, where I did display some biases, but also decent evidence for not having the ones you’re describing… which I take to be roughly what you’d expect of a smart person off the street.
I can’t place this argument at all in relation to the thread above it. Looks like a collection of unrelated notes to me. Honest. (I’m open to any restatement; don’t see what to add to the notes themselves as I understand them.)
The whole post you’re replying to comes from your request to “Please unpack the references”.
Here's the bit with references, for easy reference:

You read a (well written) slogan, and assumed that the writer must be irrational. You didn't read the thread he linked you to, you focused on your first impression and held to it.
The first part of the post you're replying to ("Sorry, it looks… best of my ability") maps to "You read a… irrational" in the quote above, and it tries to explain the problem as I understand it: that you were responding to a slogan's words, not its meaning. Explained its meaning. Explained how "Whenever convenient" was a pointer to the "Do I trust EY?" thought. Gave a backup example via the Nike slogan.
The last paragraph in the post you're replying to tried to unpack the "you focused… held to it" from the above quote.
I see. So the “writer” in the quote is you. I didn’t address your statement per se, more a general disposition of the people who state ridiculous things as explanation for the banning incident, but your comment did make the same impression on me. If you correctly disagree that it applies to your intended meaning, good, you didn’t make that error, and I don’t understand what did cause you to make that statement, but I’m not convinced by your explanation so far. You’d need to unpack “Distrusting EY” to make it clear that it doesn’t fall in the same category of ridiculous hypotheses.
The Nike slogan is “Just Do It”, if it helps.
Thanks. It doesn’t change the argument, but I’ll still delete that obnoxious paragraph.
I believe EY takes this issue very seriously.
Ahh. Are you aware of any other deletions?
Here...
I'd like to ask you the following. How would you, as an editor (moderator), handle dangerous information that is more harmful the more people know about it? Just imagine a detailed description of how to code an AGI or create bioweapons. Would you stay away from censoring such information in favor of free speech?
The subject matter here has a somewhat different nature, one that fits a "more people, more probable" pattern. The question is whether it is better to discuss it so as to possibly resolve it, or to censor it and thereby impede it. The problem is that this very question cannot be discussed without deciding not to censor it. That doesn't mean that people cannot work on it, but rather that only a few people would, in private. It is very likely that the people who already know about it are the most likely to solve the issue anyway. The general public would probably only add noise and make it much more likely to happen simply by knowing about it.
Step 1. Write down the clearest non-dangerous articulation of the boundaries of the dangerous idea that I could.
If necessary, make this two articulations: one that is easy to understand (in the sense of answering "is what I'm about to say a problem?") even if it's way overinclusive, and one that is not too overinclusive even if it requires effort to understand. Think of this as a cheap test with lots of false positives, and a more expensive follow-up test (a rough sketch of this two-stage screen follows Step 5 below).
Add to this the most compelling explanation I can come up with of why violating those boundaries is dangerous, worded so that it doesn't itself violate those boundaries.
Step 2. Create a secondary forum, not public-access (e.g., a dangerous-idea mailing list), for the discussion of the dangerous idea. Add all the people I think belong there. If that’s more than just me, run my boundary articulation(s) past the group and edit as appropriate.
Step 3. Create a mechanism whereby people can request to be added to dangerous-idea (e.g., sending dangerous-idea-request).
Step 4. Publish the boundary articulations, a request that people avoid any posts or comments that violate those boundaries, an overview of what steps are being taken (if any) by those in the know, and a pointer to dangerous-idea-request for anyone who feels they really ought to be included in discussion of it (with no promise of actually adding them).
Step 5. In forums where I have editorial control, censor contributions that violate those boundaries, with a pointer to the published bit in step 4.
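Here is the rough sketch promised in Step 1: a minimal, hypothetical illustration in Python of the cheap-but-overinclusive test followed by the more expensive follow-up test. The trigger list and function names are invented for this example; in practice the careful second pass would be a human moderator applying the harder-to-understand articulation, not code.

```python
# Hypothetical two-stage screen corresponding to Step 1's two articulations.
# Stage 1: cheap, deliberately over-inclusive check (many false positives).
# Stage 2: more expensive, more precise check, run only on flagged posts.

CHEAP_TRIGGERS = ["placeholder-keyword-1", "placeholder-keyword-2"]  # invented

def cheap_check(post_text: str) -> bool:
    """Over-inclusive first pass: flags anything that might cross the boundary."""
    lowered = post_text.lower()
    return any(trigger in lowered for trigger in CHEAP_TRIGGERS)

def careful_check(post_text: str) -> bool:
    """Precise second pass, standing in for the harder-to-understand articulation
    (or for a human moderator applying it)."""
    # A detailed boundary test, or human review, would go here.
    return False  # placeholder: assume most flagged posts turn out to be fine

def violates_boundary(post_text: str) -> bool:
    """Pay for the careful check only when the cheap check fires."""
    return cheap_check(post_text) and careful_check(post_text)
```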
==
That said, if it genuinely is the sort of thing where a suppression strategy can work, I would also breathe a huge sigh of relief for having dodged a bullet, because in most cases it just doesn’t.
A real-life example that people might accept the danger of would be the 2008 DNS flaw discovered by Dan Kaminsky—he discovered something really scary for the Internet and promptly assembled a DNS Cabal to handle it.
And, of course, it leaked before a fix was in place. But the delay did, they think, mitigate damage.
Note that the solution had to be in place very quickly indeed, because Kaminsky assumed that if he could find it, others could. Always assume you aren’t the only person in the whole world smart enough to find the flaw.
Yes, several times other posters have brought up the subject and had their comments deleted.
I hadn’t seen a lot of stubs of deleted comments around before the recent episode, but you say people’s comments had gotten deleted several times.
So, have you seen comments being deleted in a special way that doesn’t leave a stub?
Comments only leave a stub if they have replies that aren’t deleted.
Interesting. Do you have links? I rather publicly vowed to undo any assumed existential risk savings EY thought were to be had via censorship.
That one stayed up, and although I haven’t been the most vigilant in checking for deletions, I had (perhaps naively) assumed they stopped after that :-/
Hard to say. Probably a lot if I could precommit to it in advance, so that once it had begun I couldn't change my mind.
There are many complicating factors, though.
Am I the only one who can honestly say that it would depend on the day?
There's a TED talk I once watched about how Republicans reason on five moral channels and Democrats only reason on two.
They were (roughly):
harm/care
fairness/reciprocity
in-group/out-group
authority
purity/sanctity
According to the talk, Democrats reason primarily with the first two and Republicans with all of them.
I took this to mean that Republicans were allowed to do moral calculus that Democrats could not… for instance, if I can only reason with the first two, then punching a baby is always wrong (it causes harm, and isn't fair)… If, on the other hand, I'm allowed to reason with all five, it might be okay to punch a baby because my Leader said to do it, or because the baby isn't from my home town, or because my religion says to.
Republicans therefore have it much easier in rationalizing self-serving motives.
(As an aside, it's interesting to note that Democrats must have started with more than just the two when they were young. "Mommy said not to" is a very good reason not to do something when you're young. It seems that they must have grown out of it.)
After watching the TED talk, I was reflecting on how it seems that smart people (myself sadly included) let relatively minor moral problems stop them from doing great things… and on how if I were just a little more Republican (in the five-channel moral reasoning sense) I might be able to be significantly more successful.
The result is a WFG that cycles between 2-channel and 5-channel reasoning.
On my 2-channel days, I'd have a very hard time hurting another person to save myself. If I saw them, and could feel that human connection, I doubt I could do much more than I myself would be willing to endure to save another's life (perhaps two hours assuming hand-over-a-candle levels of pain—permanent disfigurement would be harder to justify, though perhaps acceptable if it were relatively minor).
On my 5-channel days, I'd (I'm surprisingly not so embarrassed to say) probably go arbitrarily high… after all, what's their life compared to mine?
Probably a bit more than you were looking to hear.
What’s your answer?
First let me say that as a Republican/libertarian I don’t entirely agree with Haidt’s analysis.
In any case, the above is not quite how I understand Haidt's analysis. My understanding is that Democrats have no way to categorically say that punching (or even killing) a baby is wrong. While they can say it's wrong because, as you said, it causes harm and isn't fair, they can always override that judgement by coming up with a reason why not punching and/or killing the baby would also cause harm. (See the philosophy of Peter Singer for an example.)
Republicans on the other hand can invoke sanctity of life.
Sure, agreed. The way I presented it only showed very simplistic reasoning.
Let's just say that, if you imagine a Democrat who desperately wants to do x but can't justify it morally (punch a baby, start a somewhat shady business, not return a lost wallet full of cash), one way to resolve this conflict is to add Republican channels to his reasoning.
It doesn't always work (sanctity of life, etc.), but I think for a large number of situations where we Democrats-at-heart get cold feet it works like a champ :)
So I’ve noticed. See the discussion following this comment for an example.
On the other hand, other times Democrats take positions that Republicans find horrific, e.g., euthanasia, abortion, Peter Singer's position on infanticide.
Peter Singer's media-touted "position on infanticide" is an excellent example of why even philosophers might shy away from talking about hypotheticals in public. You appear to have just become Desrtopa's nightmare.
My problem with Singer is that his “hypotheticals” don’t appear all that hypothetical.
What specifically are you referring to? (I haven't been following Desrtopa's posts.)
It’s evident you really need to read the post. He can’t get people to answer hypotheticals in almost any circumstances and thought this was a defect in the people. Approximately everyone responded pointing out that in the real world, the main use of hypotheticals is to use them against people politically. This would be precisely what happened with the factoid about Singer.
Thanks for the link—very interesting reading :)
Here I was thinking it was, well, nearly the opposite of that! :)
Given the current economic situation in Europe, I’m not sure that’s a good long term strategy.
Also, I suspect spending too long on the dole may cause you to develop habits that'll make it harder to work a paying job.