One man’s modus ponens is another man’s modus tollens.
kodos96
BTW, I know it’s not terribly rare for anti-marijuana laws to be enforced against middle-class people where I am; so he should have at least specified “against middle-class people in Northern California”.
Also, even in California, and even for people of middle class, you’ll get marijuana laws enforced against you if you manage to piss off the wrong cop/prosecutor.
Before I spend any more time replying to this, can you clarify for me… do you and I actually disagree about something of substance here? I.e. how an organization should, in the real world, deal with PR concerns? Or are we just arguing about the most technically correct way to go about stating our position?
You seem to be using a very narrow definition of “crypto”… I’m not sure whether you’re just being pedantic about definitions, in which case you may be correct, or whether you’re actually disputing the substance of what I’m saying. To answer your question: I’m not a cryptographer, but I have a CS degree and am quite capable of reading and understanding crypto papers (though not of retaining the knowledge for long)… it’s been several years since I read the relevant papers, so I might be getting some of the details wrong in how I’m explaining it, but the basic concept of deniable message authentication is well understood by mainstream cryptographers.
You seem to be aware of the existence of OTR, so I’m confused—are you claiming that it doesn’t accomplish what it says it does? Or just that something about the way I’m proposing to apply similar technology to this use case would break some of its assumptions? The latter is entirely possible, as so far I’ve put a grand total of about 5 minutes of thought into it… if that’s the case, I’d be curious to know which of its assumptions my proposed use case would break.
As much as people who don’t like this policy might wish that it were impossible for anyone to tell the difference, so that they could thereby argue against the policy, it’s not actually very hard to tell the difference.
I didn’t interpret CronoDAS’s post as intending to actually advocate violence. I viewed it as really silly and kind of dickish, and a good thing that he ultimately removed it, but an actual call to violence? No. It was a thought experiment. His thought experiment was set in the present day, while yours was set in the far future, but other than that I don’t see a bright line separating them.
It may not be very hard for you to tell the difference, since you wrote the policy, so you may very well have a clear bright line separating the two in your head, but we don’t.
I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not ‘Do you like this?’—you probably have a different cost function from people who are held responsible for how LW looks as a whole—but rather, ‘Are there any predictable consequences we didn’t think of that you would like to point out?’
Eliezer, at this point I think it’s fair to ask: has anything anyone has said so far caused you to update? If not, why not?
I realize some of my replies to you in this thread have been rather harsh, so perhaps I should take this opportunity to clarify: I consider myself a big fan of yours. I think you’re a brilliant guy, and I agree with you on just about everything regarding FAI, x-risk, and SIAI’s mission… I think you’re probably mankind’s best bet if we want to successfully navigate the singularity. But at the same time, I also think you can demonstrate some remarkably poor judgement from time to time… hey, we’re all running on corrupted hardware, after all. It’s the combination of these two facts that really bothers me.
I don’t know of any way to say this that isn’t going to come off sounding horribly condescending, so I’m just going to say it, and hope you evaluate it in the context of the fact that I’m a big fan of your work, and in the grand scheme of things, we’re on the same side.
I think what’s going on here is that your feelings have gotten hurt by various people misattributing to you positions that you don’t actually hold. That’s totally understandable. But I think you’re confusing the extent to which your feelings have been hurt with the extent to which actual harm has been done to SIAI’s mission, and are overreacting as a result. I’m not a psychologist—this is just armchair speculation… I’m just telling you how it looks from the outside.
Again, we’re all running on corrupted hardware, so it’s entirely natural for even the best amongst us to make these kinds of mistakes… I don’t expect you to be an emotionless Straw Vulcan (and indeed, I wouldn’t trust you if you were)… but your apparent unwillingness to update in response to others’ input when it comes to certain emotionally charged issues is very troubling to me.
So to answer your question, “Are there any predictable consequences we didn’t think of that you would like to point out?”… well, I’ve pointed out many already, but the most concise and most important predictable consequence of this policy, which I believe you’re failing to take into account, is this: IT LOOKS HORRIBLE… like, really, really bad. Way worse than the things it’s intended to combat.
Well, point 3 can be eliminated by proper use of crypto. See OTR (Off-the-Record Messaging).
The response to point 2 is that if it is publicly known that messages’ contents are formally, mathematically, provably deniable (as a proper crypto implementation can guarantee), that disincentivizes people from even bothering to re-post content in the first place. (See the sketch after these points for the core mechanism.)
Point 1, however, I agree with completely, and that’s why I’m not actually advocating this solution.
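For the curious, here’s a minimal sketch, in Python, of the core mechanism behind OTR-style deniability: authenticating messages with a shared-key MAC instead of a digital signature. This is an illustration under simplifying assumptions, not OTR itself (real OTR adds a Diffie-Hellman exchange, key rotation, and publication of expired MAC keys); the function name `tag` and the sample messages are made up for the example.

```python
import hashlib
import hmac
import os

def tag(shared_key: bytes, message: bytes) -> bytes:
    """Authentication tag computable by ANYONE holding shared_key."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

# Stand-in for a key both parties derived together (in OTR this comes
# from a Diffie-Hellman exchange; here we just use random bytes).
shared_key = os.urandom(32)

msg = b"meet at noon"
t = tag(shared_key, msg)

# Bob can verify the message came from someone holding the key...
assert hmac.compare_digest(t, tag(shared_key, msg))

# ...but he cannot prove authorship to a third party: he holds the
# same key, so he could have produced the tag himself. Unlike a
# digital signature, the transcript proves nothing to outsiders.
forgery = tag(shared_key, b"I never said this")  # Bob can forge too
```

The design point: a symmetric MAC binds a message to the key, not to a particular author, so anyone who can verify a tag can also forge one. OTR pushes this further by publishing expired MAC keys, making old transcripts forgeable by anyone.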
On the one hand, you can decide policy based on non-PR-related factors, then think about the most PR-friendly way to proceed from there. On the other hand, you can let PR actually determine policy.
You can argue that LessWrong shouldn’t care about PR, or that censorship is going to be bad PR, or that censorship is unnecessary, but you can’t argue that PR is a fundamentally horrible idea without some very strong evidence (which you did not provide).
That was perhaps a bit of an overstatement on my part. Considering PR consequences of actions is certainly a good thing to do. But if PR concerns are driving your policy, rather than simply informing it, that’s bad.
it would be nice to have the counter-counterargument, “Unlike this bad person here, we have a policy of deleting posts which claim Q->specific-violence even if the post claims not to believe in Q because the identifiable target would have a reasonable complaint of being threatened”.
I would find this counter-counterargument extremely uncompelling if made by an opponent. Suppose you read a blog whose author made statements which could be interpreted as vaguely anti-Semitic, but it could go either way. Now suppose someone in the comments of that blog post replied by saying “Yeah, you’re totally right, we should kill all the Jews!”
Which type of response from the blog owner do you think would be more likely to convince you that he was not actually an anti-Semite: 1) deleting the comment, covering up its existence, and never speaking of it, or 2) leaving the comment in place and refuting it, carefully laying out why the commenter is wrong?
I know that I for one would find the latter response much more convincing of the author’s benign intent.
Note: in order to post this comment, despite it being, IMHO, entirely on-point and important to the conversation, I had to take a 5-point karma hit… due to the LAST poorly-thought-out, dictatorially imposed, consensus-defying policy change.
But that’s the whole point of my objection. This distinction is what makes this policy such a bad idea. Ignoring the distinction is to ignore the point.
You do realize that much of the world, including much of the supposedly “civilized” world, has blasphemy laws on the books? What percentage of articles on LW (including their comment sections) do you think would run afoul of strict readings of such laws?
Also, I said “chill all speech”, not forbid it outright. If you’re forced, while writing, to wonder “is this violating some rule? Should I rephrase it to make it not violate?”, that’s what “chilled” speech means—forcing on you the cognitive burden of thinking in terms of “what won’t get me in trouble” rather than “what will communicate most effectively”.
Or how about this: you characterized “Three Felonies a Day” as propaganda… I’m sure the author of the book would be quite upset to hear that. He might consider it to constitute some manner of defamation, or perhaps intentional infliction of emotional distress. Tortious interference perhaps? Disturbing the peace? YOUR COMMENT IS NOW BANNED!
In neither case would I be confident that the bad PR they’ve avoided by hiding embarrassing things hasn’t been worse than the bad PR they’ve abetted by obviously dissembling and/or by increasing the suspicion that they’re hiding even worse things.
Exactly. That’s why I’m not actually advocating any of these technical solutions, just pointing out that they do exist in solution-space.
The solution I’m actually advocating is simpler still: do nothing. Rely on self-policing and the “don’t be an asshole” principle, and in the event that that fails (which it hasn’t yet), counter bad speech with more speech: clearly state “LW/SIAI does not endorse this suggestion, and renounces the use of violence.” If people out there still insist on slandering SIAI by association with something some random guy on LW said, then fuck ’em—haters gonna hate.
In particular, I think people are underestimating how important it is for LW not to look too bad.
I’m not underestimating that at all… I’m saying that this policy makes us look bad… WAY worse than the disease it’s intended to cure, especially in light of the fact that that disease cleared itself up in a few hours with no intervention necessary.
It would clearly seem to be you who has not thought about this for five minutes. The absurdly broad extension you propose to the already absurdly broad policy would effectively chill all speech on any topics other than puppies or unicorn farts. Actually maybe just puppies… unicorn farts might after all violate EPA air quality standards.
Keep in mind, the average American commits Three Felonies a Day:
The average professional in this country wakes up in the morning, goes to work, comes home, eats dinner, and then goes to sleep, unaware that he or she has likely committed several federal crimes that day. Why? The answer lies in the very nature of modern federal criminal laws, which have exploded in number but also become impossibly broad and vague.
Yes, this is the unstated policy we’ve all been working under up until this point, and it’s worked. Which is why it’s so irrational to propose a censorship rule.
On a neutral note—we aren’t enemies here. We all have very similar utility functions, with slightly different weights on certain terminal values (PR), which is understandable, as some of us have more or less to lose from LW’s PR.
I disagree that this is the entire source of the dispute. I think that even when constrained to optimizing only for good PR, this is an instrumentally ineffective method of achieving it. Censorship is worse for PR than the problem in question, especially given that the problem in question is thus far nonexistent.
To convince Eliezer, you must show him a model of the world, given the policy, that causes ill effects he finds worse than the positive effects of enacting the policy.
This is trivially easy to do, since the positive effects of enacting the policy are zero, given that the one and only time this has ever been a problem, the problem resolved itself without censorship, via self-policing.
Well… showing him the model is the trivially easy part, anyway. Convincing him… apparently not so much.
True, but that doesn’t change the fact that the wording of the proposed policy is heavily subject to interpretation, which is the point I was trying to make.
Good question. I don’t know.
Anybody?