I understand why you might be angry, but please think of the scale involved here. If any particular post or comment increases the chance of an AI going wrong by one trillionth of a percent, it is almost certainly not worth it.
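To make the scale argument above concrete, here is a minimal expected-value sketch; every number in it (the probability increment, the value of the future, the benefit of a post) is an illustrative assumption, not a figure anyone in the thread endorsed.

```python
# Illustrative expected-value sketch of the "scale" argument; all numbers are
# made-up assumptions chosen only to show the shape of the calculation.

p_increase = 1e-12 / 100   # "one trillionth of a percent", as a raw probability
value_of_future = 1e16     # assumed value of a good long-term outcome, arbitrary units
benefit_of_post = 1.0      # assumed benefit of the post, in the same units

expected_loss = p_increase * value_of_future

print(f"expected loss from posting:  {expected_loss:.2e}")
print(f"assumed benefit of posting:  {benefit_of_post:.2e}")
print("verdict:", "not worth it" if expected_loss > benefit_of_post else "worth it")
```

On these made-up numbers the expected loss dominates, which is the whole force of the argument; the reply below disputes whether the sign or size of p_increase can be estimated at all.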
This is silly—there’s simply no way to assign a probability to his posts increasing the chance of UFAI with any degree of confidence; I doubt you could even get the sign right.
For example, deleting posts because they might add an infinitesimally small amount to the probability of UFAI being created makes this community look slightly more like a bunch of paranoid nutjobs, which will hinder its ability to accomplish its goals and make UFAI more likely.
From what I understand, the actual banning was due to its likely negative effects on the community, as Eliezer has seen similar things on the SL4 mailing list—which I won’t comment on. But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.
Thank you for pointing out the difficulty of quantifying the existential risks posed by blog posts.
The danger from deleting blog posts is much more tangible, and the results of censorship are conspicuous. You have pointed out two such dangers in your comment: (1) LWers will look nuttier, and (2) it sets a bad precedent.
(Of course, if there is a way to quantify the marginal benefit of an LW post, then there is also a way to quantify the marginal cost from a bad one—just reverse the sign, and you’ll be right on average.)
That makes sense for evaluating the cost/benefit to me of reading a post. But if I want to evaluate the overall cost/benefit of the post itself, I should also take into account the number of people who read one vs. the other. Given the ostensible purpose of karma and promotion, these ought to be significantly different.
Are you saying: (1) A bad post is less likely to be read because it will not be promoted and it will be downvoted; (2) Because bad posts are less read, they have a smaller cost than good posts’ benefits?
I think I agree with that. I had not considered karma and promotion, which behave like advertisements in their informational value, when making that comment.
But I think that what you’re saying only strengthens the case against moderators deleting posts against the poster’s will, since karma and promotion already render the objectionable material less objectionable.
Yes, that’s what I’m saying.
And I’m not attempting to weaken or strengthen the case against anything in particular.
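The readership point can be put as a toy calculation; the per-reader effects and reader counts below are purely illustrative assumptions.

```python
# Toy sketch: a post's overall impact is roughly its per-reader effect times its
# readership, and karma/promotion shrink the readership of bad posts. Numbers
# are illustrative assumptions only.

per_reader_benefit_good = 1.0   # assumed benefit per reader of a good post
per_reader_harm_bad = -1.0      # assumed harm per reader of an equally bad post

readers_good = 5000             # promoted and upvoted, so widely read
readers_bad = 200               # downvoted and unpromoted, so little read

print("total benefit of the good post:", per_reader_benefit_good * readers_good)
print("total harm of the bad post:    ", per_reader_harm_bad * readers_bad)
```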
Huh? Why should the marginal benefit of a good post and the marginal cost of a bad one be equal? Why should they even be on the same order of magnitude? For example, an advertising spam post that gets deleted does orders of magnitude less harm than an average good post does good. And a post that contained designs for a UFAI would do orders of magnitude more harm.
You are right to say that it’s possible to have extremely harmful blog posts, and it is also possible to have mostly harmless blog posts. I also agree that the examples you’ve cited are apt.
However, it is also possible to have extremely good blog posts (such as one containing designs for a tool to prevent the rise of UFAI, or one that changed many powerful people’s minds for the better) and to have barely beneficial ones.
Do we have a reason to think that the big bads are more likely than big goods? Or that a few really big bads are more likely than many moderate goods? I think that’s the kind of reason that would topple what I’ve said.
One of my assumptions here is that whether a post is good or bad does not change the magnitude of its impact. The magnitude of its positivity or negativity might change the magnitude of its impact, but why should the sign?
I’m sorry if I’ve misunderstood your criticism. If I have, please give me another chance.
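The symmetry assumption being debated here can be illustrated with a small simulation; the distributions below are assumptions chosen for illustration, not a model of actual posts. If impacts are symmetric about zero, "reverse the sign and you'll be right on average" holds; if a few very big bads are likelier than comparably big goods, it does not.

```python
# Sketch of the symmetry assumption: compare the average magnitude of good and
# bad post impacts under a symmetric distribution and under one with a heavier
# bad tail. Both distributions are illustrative assumptions.
import random

random.seed(0)

def mean_magnitudes(draws):
    goods = [x for x in draws if x > 0]
    bads = [-x for x in draws if x < 0]
    return sum(goods) / len(goods), sum(bads) / len(bads)

# Symmetric case: good and bad impacts mirror each other.
symmetric = [random.gauss(0, 1) for _ in range(100_000)]
print("symmetric (good, bad):", mean_magnitudes(symmetric))    # roughly equal

# Asymmetric case: 1% of posts are rare, catastrophic bads.
asymmetric = [
    -random.expovariate(1 / 50) if random.random() < 0.01 else random.gauss(0, 1)
    for _ in range(100_000)
]
print("asymmetric (good, bad):", mean_magnitudes(asymmetric))  # bad side dominates
```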
Note to reader: this thread is curiosity-inducing, and that is affecting your judgement. You might think you can compensate for this bias, but you probably won’t in actuality. Stop reading anyway. Trust me on this. Edit: me, and Larks, and ocr-fork, AND ROKO, and I think [some but not all others]
I say ‘for now’ because those who know about this are going to keep looking at it and will determine it safe, rebut it, or make it moot. Maybe it will stay dangerous for a long time, I don’t know, but there seems to be a decent chance that you’ll find out about it soon enough.
Don’t assume it’s OK because you understand the need for friendliness and aren’t writing code. There are no secrets to intelligence in hidden comments. (Though I didn’t see the original thread, I think I figured it out, and it’s not giving me any insights.)
Don’t feel left out or not smart for not ‘getting it’; we only ‘got it’ because it was told to us. Try to compensate for your ego. If you fail, stop reading anyway.
Ab ernyyl fgbc ybbxvat. Phevbfvgl erfvfgnapr snvy.
http://www.damninteresting.com/this-place-is-not-a-place-of-honor
Sorry, forgot that not everyone saw the thread in question. Eliezer replied to the original post and explicitly said that it was dangerous and should not be published. I am willing to take his word for it, as he knows far more about AI than I.
I have not seen the original post, but can’t someone simply post it somewhere else? Is deleting it from here really a solution (assuming there’s real danger)? BTW, I can’t really see how a post on a board could be dangerous in the way implied here.
The likely explanation is that people who read the article agreed it was dangerous. If some of them had decided the censorship was unjustified, LW might look like Digg after the AACS key controversy.
I read the article, and it struck me as dangerous. I’m going to be somewhat vague and say that the article created two potential forms of danger: one unlikely but extremely bad if it occurs, and the other less damaging, but with evidence (in the article itself) that that type of damage had already occurred. FWIW, Roko seemed to agree after some discussion that spreading the idea did pose a real danger.
In fact, there are hundreds of deleted articles on LW. The community is small enough to be manually policed, it seems.
Sadly, those who saw the original post have declined to share.
I’ve read the post. The might-increase-the-chance-of-UFAI excuse is actually relevant here.
Something really crazy is going on here.
You people have fabricated a fantastic argument for all kinds of wrongdoing and idiotic decisions: “it could increase the chance of an AI going wrong...”.
“I deleted my comment because it was maybe going to increase the chance of an AI going wrong...”
“Hey, I had to punch that guy in the face, he was going to increase the chance of an AI going wrong by uttering something stupid...”
“Sorry, I had to exterminate those people because they were going to increase the chance of an AI going wrong.”
I’m beginning to wonder whether it is not unfriendly AI but rather EY and this movement that might be the bigger risk.
Why would I care about some feverish dream of a galactic civilization if we have to turn into our own oppressors, and those of others? Screw you. That’s not what I want. Either I win like this, or I don’t care to win at all. What’s winning worth, what’s left of a victory, if you have to relinquish all that you value? That’s not winning; it’s worse than losing; it means surrendering to mere possibilities, preemptively.
This is why deleting the comments was the bigger risk: doing so makes people think (incorrectly) that EY and this movement are the bigger risk, instead of unfriendly AI.
The problem is, are you people sure you want to take this route? If you are serious about all this, what would stop you from killing a million people if your probability estimates showed that there was a serious risk posed by those people?
If you read this comment thread you’ll see what I mean and what danger this movement might pose: ‘follow Eliezer’, ‘donating as much as possible to SIAI’, ‘kill a whole planet’, ‘afford to leave one planet’s worth’, ‘maybe we could even afford to leave their brains unmodified’... lesswrong.com sometimes makes me feel more than a bit uncomfortable, especially if you read between the lines.
Yes, you might be right about all the risks in question. But you might be wrong about the means of stopping them.
I’m not sure if this was meant for me; I agree with you about free speech and not deleting the posts. I don’t think it means EY and this movement are a great danger, though. Deleting the posts was the wrong decision, and hopefully it will be reversed soon, but I don’t see that as indicating that anyone would go out and kill people to help the Singularity occur. If there really were a Langford Basilisk, say, a joke that made you die laughing, I would want it removed.
As to that comment thread: Peer is a very cool person and a good friend, but he is a little crazy and his beliefs and statements shouldn’t be taken to reflect anything about anyone else.
I know, it wasn’t my intention to discredit Peer, I quite like his ideas. I’m probably more crazy than him anyway.
But if I can come up with such conclusions, who else will? Also, why isn’t anyone out to kill people, or going to be? I’m serious, why not? Just imagine EY found out that we could be reasonably sure that, for example, Google would let loose a rogue AI soon. Given how the LW audience is inclined to act upon ‘mere’ probability estimates, why wouldn’t it be appropriate to bomb Google, given that this was the only way to stop them in due time from turning the world into a living hell? And why isn’t this meme, given the right people and circumstances, a great danger? Sure, me saying EY might be a greater danger was nonsense, just said to provoke some response. By definition, not much could be worse than uFAI.
This incident is simply a good situation to extrapolate from. If a thought experiment can be deemed dangerous enough not merely to be censored and deleted, but for people to be told not even to seek any knowledge of it, much less discuss it, I wonder about the possible reaction to an imminent and tangible danger.
Before someone accuses me of ignoring it, I want to address the point that some people suffer psychological distress from this kind of information.
Is active denial of information an appropriate way of handling serious personal problems? If there are people who suffer mental distress due to mere thought experiments, then I’m sorry, but here I reason along the same lines as those who support deleting information because it might increase the chance of an AI going wrong: just as they abandon freedom of expression to an extent, I advocate drawing the line between freedom of information and the protection of individual well-being at this point. That is not to say that I’d go all the way and advocate, for example, depictions of cruelty to children.
A delicate issue indeed, but one has to take care not to slide into an extremism that causes the relinquishment of the very values it is meant to serve and protect.
Maybe you should read the comments in question before you make this sort of post?
This really isn’t worth arguing and there isn’t any reason to be angry...
You are wrong on both counts. There is strong signalling going on that gives good evidence regarding both Eliezer’s intent and his competence.
What Roko said matters little; what Eliezer said (and did) matters far more. He is the one trying to take over the world.
I don’t consider frogs to be objects of moral worth. -- Eliezer Yudkowsky
Yeah, OK, frogs... but wait! This is the person who’s going to design the moral seed of our coming god-emperor. I’m not sure everyone here is aware of the range of consequences of using this same person as corroboration that pursuing this route is correct. That is, are we going to replace unfriendly AI with an unknown EY? Are we already at the point where we can say that EY is THE master who’ll decide what’s reasonable to say in public and what should be deleted?
Ask yourself whether you really, seriously believe in the ideas posed on LW, enough to follow them into the realms of radical oppression in the name of good and evil.
There are some good questions buried in there that may be worth discussing in more detail at some point.
I am vaguely confused by your question and am going to stop having this discussion.
Before getting angry, it’s always a good idea to check whether you’re confused. And you are.
P.S. I haven’t read Roko’s post and comments yet, but I have a backup of every single one of them. And not just Roko’s: all of LW, including EY’s deleted comments. Are you going to assassinate me now?