This is silly—there’s simply no way to assign a probability of his posts increasing the chance of UFAI with any degree of confidence, to the point where I doubt you could even get the sign right.
For example, deleting posts because they might add an infinitesimally small amount to the probability of UFAI being created makes this community look slightly more like a bunch of paranoid nutjobs, which overall will hinder its ability to accomplish its goals and makes UFAI more likely.
From what I understand, the actual banning was due to its likely negative effects on the community, as Eliezer has seen similar things on the SL4 mailing list—which I won't comment on. But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.
Thank you for pointing out the difficulty of quantifying existential risks posed by blog posts.
The danger from deleting blog posts is much more tangible, and the results of censorship are conspicuous. You have pointed out two such dangers in your comment: (1) LWers will look nuttier and (2) it sets a bad precedent.
(Of course, if there is a way to quantify the marginal benefit of an LW post, then there is also a way to quantify the marginal cost from a bad one—just reverse the sign, and you’ll be right on average.)
That makes sense for evaluating the cost/benefit to me of reading a post. But if I want to evaluate the overall cost/benefit of the post itself, I should also take into account the number of people who read one vs. the other. Given the ostensible purpose of karma and promotion, these ought to be significantly different.
Are you saying: (1) A bad post is less likely to be read because it will not be promoted and it will be downvoted; (2) Because bad posts are less read, they have a smaller cost than good posts’ benefits?
I think I agree with that. I had not considered karma and promotion, which behave like advertisements in their informational value, when making that comment.
But I think that what you’re saying only strengthens the case against moderators’ deleting posts against the poster’s will because it renders the objectionable material less objectionable.
Yes, that’s what I’m saying.
And I’m not attempting to weaken or strengthen the case against anything in particular.
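A minimal sketch of the readership point, using purely hypothetical numbers (the per-reader impacts and reader counts below are assumptions for illustration, not figures from the thread):

```python
# Illustrative sketch only: assumes a post's total effect scales with how many
# people actually read it, and that per-reader magnitude is the same for good
# and bad posts.
impact_per_reader_good = +1.0   # assumed benefit per reader of a good post
impact_per_reader_bad = -1.0    # assumed harm per reader of a bad post

readers_promoted = 1000   # hypothetical readership of a promoted, upvoted post
readers_downvoted = 50    # hypothetical readership of a downvoted, unpromoted post

total_benefit = impact_per_reader_good * readers_promoted   # +1000.0
total_harm = impact_per_reader_bad * readers_downvoted      # -50.0

# Karma and promotion change the readership weights, not the per-reader
# magnitudes, so the realized cost of a bad post can be much smaller than the
# realized benefit of a good one.
print(total_benefit, total_harm)
```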
Huh? Why should these be equal? Why should they even be on the same order of magnitude? For example, an advertising spam post that gets deleted does orders of magnitude less harm than an average good post does good. And a post that contained designs for a UFAI would do orders of magnitude more harm.
You are right to say that it's possible to have extremely harmful blog posts, and it is also possible to have mostly harmless blog posts. I also agree that the examples you've cited are apt.
However, it is also possible to have extremely good blog posts (such as one containing designs for a tool to prevent the rise of UFAI, or one that changed many powerful people's minds for the better) and to have barely beneficial ones.
Do we have a reason to think that the big bads are more likely than big goods? Or that a few really big bads are more likely than many moderate goods? I think that’s the kind of reason that would topple what I’ve said.
One of my assumptions here is that whether a post is good or bad does not change the magnitude of its impact. The magnitude of its positivity or negativity might change the magnitude of its impact, but why should the sign?
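A minimal sketch of that assumption (the shared distribution below is purely hypothetical, chosen only for illustration): if good and bad posts draw their impact magnitudes from the same distribution and only the sign differs, the average harm of a bad post matches the average benefit of a good one, which is what the "reverse the sign" parenthetical needs; it fails only if the bad-post tail is heavier.

```python
import random

random.seed(0)

# Hypothetical magnitude distribution shared by good and bad posts
# (the sign-independence assumption stated above).
def impact_magnitude():
    return random.lognormvariate(0, 1)  # heavy-ish tail, same for both signs

n = 50_000
avg_benefit = sum(impact_magnitude() for _ in range(n)) / n   # good posts: +magnitude
avg_harm = -sum(impact_magnitude() for _ in range(n)) / n     # bad posts: -magnitude

# Under the shared-distribution assumption these agree in expectation, up to
# sampling noise, with opposite signs; a counterargument needs the bad-post
# magnitudes to come from a heavier-tailed distribution.
print(round(avg_benefit, 2), round(avg_harm, 2))
```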
I’m sorry if I’ve misunderstood your criticism. If I have, please give me another chance.
Note to reader: This thread is curiosity-inducing, and that is affecting your judgement. You might think you can compensate for this bias, but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO, and I think [some but not all others]
I say 'for now' because those who know about this are going to keep looking at it and will determine it safe, rebut it, or make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a decent chance that you'll find out about it soon enough.
Don't assume it's OK because you understand the need for friendliness and aren't writing code. There are no secrets to intelligence in hidden comments. (Though I didn't see the original thread, I think I figured it out, and it's not giving me any insights.)
Don't feel left out or not smart for not 'getting it'; we only 'got it' because it was told to us. Try to compensate for your ego. If you fail, stop reading anyway.
Ab ernyyl fgbc ybbxvat. Phevbfvgl erfvfgnapr snvy.
http://www.damninteresting.com/this-place-is-not-a-place-of-honor
Sorry, forgot that not everyone saw the thread in question. Eliezer replied to the original post and explicitly said that it was dangerous and should not be published. I am willing to take his word for it, as he knows far more about AI than I.
I have not seen the original post, but can't someone simply post it somewhere else? Is deleting it from here really a solution (assuming there is real danger)? BTW, I can't really see how a post on a board can be dangerous in the way implied here.
The likely explanation is that people who read the article agreed it was dangerous. If some of them had decided the censorship was unjustified, LW might look like Digg after the AACS key controversy.
I read the article, and it struck me as dangerous. I'm going to be somewhat vague and say that the article posed two potential forms of danger: one unlikely but extremely bad if it occurs, and the other less damaging but with evidence (in the article itself) that that type of damage had already occurred. FWIW, Roko seemed to agree after some discussion that spreading the idea did pose a real danger.
In fact, there are hundreds of deleted articles on LW. It seems the community is small enough to be manually policed.
Sadly, those who saw the original post have declined to share.
I’ve read the post. That excuse is actually relevant.