Why are Roko’s posts deleted? Every comment or post he made since April last year is gone! WTF?
Edit: It looks like this discussion sheds some light on it. As best I can tell, Roko said something that someone didn’t want to get out, so someone (maybe Roko?) deleted a huge chunk of his posts just to be safe.
I’ve deleted them myself. I think that my time is better spent looking for a quant job to fund x-risk research than on LW, where it seems I am actually doing active harm by staying rather than merely wasting time. I must say, it has been fun, but I think I am in the region of negative returns, not just diminishing ones.
So you’ve deleted the posts you’ve made in the past. This is harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable.
For example, consider these posts, and comments on them, that you deleted:
11 core rationalist skills (39 points, 33 comments, promoted)
The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It (23 points, 108 comments, promoted)
Supporting the underdog is explained by Hanson’s Near/Far distinction (20 points, 18 comments, promoted)
Max Tegmark on our place in history: “We’re Not Insignificant After All” (14 points, 68 comments)
I believe it’s against community blog ethics to delete posts in this manner. I’d like them restored.
Edit: Roko accepted this argument and said he’s OK with restoring the posts under an anonymous username (if it’s technically possible).
And I’d like the post of Roko’s that got banned restored. If I were Roko I would be very angry about having my post deleted because of an infinitesimal far-fetched chance of an AI going wrong. I’m angry about it now and I didn’t even write it. That’s what was “harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable.” That’s what should be against the blog ethics.
I don’t blame him for removing all of his contributions after his post was treated like that.
I understand why you might be angry, but please think of the scale involved here. If any particular post or comment increases the chance of an AI going wrong by one trillionth of a percent, it is almost certainly not worth it.
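To make the scale claim concrete, here is a minimal sketch of the expected-value arithmetic it appeals to; the stakes figure and the post's benefit are assumptions chosen purely for illustration, not numbers anyone in this thread has endorsed:

    # A hedged sketch of the "one trillionth of a percent" argument.
    # All numbers are illustrative assumptions, not claims from the thread.
    p_increase = 1e-14      # "one trillionth of a percent" as a raw probability
    stakes = 1e46           # assumed value at risk if an AI goes wrong (illustrative)
    post_benefit = 1.0      # assumed benefit of keeping the post, in the same units

    expected_cost = p_increase * stakes    # = 1e32, dwarfing the post's benefit
    print(expected_cost > post_benefit)    # True under these assumptions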
This is silly—there’s simply no way to assign a probability of his posts increasing the chance of UFAI with any degree of confidence, to the point where I doubt you could even get the sign right.
For example, deleting posts because they might add an infinitesimally small amount to the probability of UFAI being created makes this community look slightly more like a bunch of paranoid nutjobs, which will overall hinder its ability to accomplish its goals and make UFAI more likely.
From what I understand, the actual banning was due to its likely negative effects on the community, as Eliezer has seen similar things on the SL4 mailing list—which I won’t comment on. But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.
Thank you for pointing out the difficulty of quantifying the existential risks posed by blog posts.
The danger from deleting blog posts is much more tangible, and the results of censorship are conspicuous. You have pointed out two such dangers in your comment: (1) LWers will look nuttier, and (2) it sets a bad precedent.
(Of course, if there is a way to quantify the marginal benefit of an LW post, then there is also a way to quantify the marginal cost from a bad one—just reverse the sign, and you’ll be right on average.)
That makes sense for evaluating the cost/benefit to me of reading a post. But if I want to evaluate the overall cost/benefit of the post itself, I should also take into account the number of people who read one vs. the other. Given the ostensible purpose of karma and promotion, these ought to be significantly different.
Are you saying: (1) A bad post is less likely to be read because it will not be promoted and it will be downvoted; (2) Because bad posts are less read, they have a smaller cost than good posts’ benefits?
I think I agree with that. I had not considered karma and promotion, which carry informational value in much the way advertisements do, when making that comment.
But I think that what you’re saying only strengthens the case against moderators’ deleting posts against the poster’s will, because it renders the objectionable material less harmful.
Yes, that’s what I’m saying.
And I’m not attempting to weaken or strengthen the case against anything in particular.
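A minimal sketch of the point being agreed on here, with all readership numbers assumed purely for illustration: karma and promotion mean a bad post is read far less than a good one, so its total cost stays smaller than the good post's total benefit even when the per-reader magnitudes match.

    # Net impact of a post = per-reader impact * number of readers.
    # Readership figures are illustrative assumptions only.
    def net_impact(impact_per_reader: float, readers: int) -> float:
        return impact_per_reader * readers

    good_post = net_impact(+1.0, 1000)  # promoted and upvoted, widely read
    bad_post = net_impact(-1.0, 50)     # downvoted and unpromoted, little read

    print(good_post)  # 1000.0
    print(bad_post)   # -50.0, a much smaller cost than the good post's benefit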
Huh? Why should these be equal? Why should they even be on the same order of magnitude? For example, an advertising spam post that gets deleted does orders of magnitude much less harm than an average good post does good. And a post that contained designs for a UFAI would do orders of magnitude more harm.
You are right to say that it’s possible to have extremely harmful blog posts, and it is also possible to have mostly harmless blog posts. I also agree that the examples you’ve cited are apt.
However, it is also possible to have extremely good blog posts (such as one containing designs for a tool to prevent the rise of UFAI, or one that changed many powerful people’s minds for the better) and to have barely beneficial ones.
Do we have a reason to think that the big bads are more likely than big goods? Or that a few really big bads are more likely than many moderate goods? I think that’s the kind of reason that would topple what I’ve said.
One of my assumptions here is that whether a post is good or bad does not change the magnitude of its impact. The magnitude of its positivity or negativity might change the magnitude of its impact, but why should the sign?
I’m sorry if I’ve misunderstood your criticism. If I have, please give me another chance.
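For what it's worth, the symmetry assumption above can be put as a hedged sketch: if good and bad posts draw their impact magnitudes from the same distribution and only the sign differs, the average cost of a bad post equals the average benefit of a good one; a much heavier tail on the bad side (the "big bads" worry raised above) is exactly the kind of thing that would break it.

    import random

    # Illustrative assumption: sign and magnitude of a post's impact are
    # independent, so good and bad posts share one magnitude distribution.
    random.seed(0)
    magnitudes = [random.lognormvariate(0, 2) for _ in range(100_000)]

    avg_good_benefit = sum(magnitudes) / len(magnitudes)
    avg_bad_cost = sum(-m for m in magnitudes) / len(magnitudes)  # same draws, sign flipped

    print(abs(avg_good_benefit) == abs(avg_bad_cost))  # True: symmetric by construction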
http://www.damninteresting.com/this-place-is-not-a-place-of-honor
Note to reader: This thread is curiosity-inducing, and this is affecting your judgement. You might think you can compensate for this bias, but you probably won’t in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and I think [some but not all others]
I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don’t know, but there seems to be a decent chance that you’ll find out about it soon enough.
Don’t assume it’s OK because you understand the need for friendliness and aren’t writing code. There are no secrets to intelligence in hidden comments. (Though I didn’t see the original thread, I think I figured it out and it’s not giving me any insights.)
Don’t feel left out or not smart for not ‘getting it’; we only ‘got it’ because it was told to us. Try to compensate for your ego. If you fail, stop reading anyway.
Ab ernyyl fgbc ybbxvat. Phevbfvgl erfvfgnapr snvy.
Sorry, forgot that not everyone saw the thread in question. Eliezer replied to the original post and explicitly said that it was dangerous and should not be published. I am willing to take his word for it, as he knows far more about AI than I.
I have not seen the original post, but can’t someone simply post it somewhere else? Is deleting it from here really a solution (assuming there’s real danger)? BTW, I can’t really see how a post on a board can be dangerous in the way implied here.
The likely explanation is that people who read the article agreed it was dangerous. If some of them had decided the censorship was unjustified, LW might look like Digg after the AACS key controversy.
I read the article, and it struck me as dangerous. I’m going to be somewhat vague and say that the article posed two potential forms of danger: one unlikely but extremely bad if it occurs, and the other less damaging but with evidence (in the article itself) that that type of damage had already occurred. FWIW, Roko seemed to agree after some discussion that spreading the idea did pose a real danger.
In fact, there are hundreds of deleted articles on LW. The community is small enough for it to be manually policed—it seems.
Sadly, those who saw the original post have declined to share.
But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.
I’ve read the post. That excuse is actually relevant.
Something really crazy is going on here.
You people have fabricated a fantastic argument for all kinds of wrongdoing and idiot decisions, “it could increase the chance of an AI going wrong...”.
“I deleted my comment because it was maybe going to increase the chance of an AI going wrong...”
“Hey, I had to punch that guy in the face, he was going to increase the chance of an AI going wrong by uttering something stupid...”
“Sorry, I had to exterminate those people because they were going to increase the chance of an AI going wrong.”
I’m beginning to wonder if not unfriendly AI but rather EY and this movement might be the bigger risk.
Why would I care about some feverish dream of a galactic civilization if we have to turn into our own oppressor and that of others? Screw you. That’s not what I want. Either I win like this, or I don’t care to win at all. What’s winning worth, what’s left of a victory, if you have to relinquish all that you value? That’s not winning, it’s worse than losing, it means to surrender to mere possibilities, preemptively.
This is why deleting the comments was the bigger risk: doing so makes people think (incorrectly) that EY and this movement are the bigger risk, instead of unfriendly AI.
The problem is, are you people sure you want to take this route? If you are serious about all this, what would stop you from killing a million people if your probability estimates showed that there was a serious risk posed by those people?
If you read this comment thread you’ll see what I mean and what danger there might be posed by this movement, ‘follow Eliezer’, ‘donating as much as possible to SIAI’, ‘kill a whole planet’, ‘afford to leave one planet’s worth’, ‘maybe we could even afford to leave their brains unmodified’...lesswrong.com sometimes makes me feel more than a bit uncomfortable, especially if you read between the lines.
Yes, you might be right about all the risks in question. But you might be wrong about the means of stopping them.
I’m not sure if this was meant for me; I agree with you about free speech and not deleting the posts. I don’t think it means EY and this movement are a great danger, though. Deleting the posts was the wrong decision, and hopefully it will be reversed soon, but I don’t see that as indicating that anyone would go out and kill people to help the Singularity occur. If there really were a Langford Basilisk, say, a joke that made you die laughing, I would want it removed.
As to that comment thread: Peer is a very cool person and a good friend, but he is a little crazy and his beliefs and statements shouldn’t be taken to reflect anything about anyone else.
I know, it wasn’t my intention to discredit Peer, I quite like his ideas. I’m probably more crazy than him anyway.
But if I can come up with such conclusions, who else will? Also, why isn’t anyone out to kill people, or going to be? I’m serious, why not? Just imagine EY found out that we could be reasonably sure that, for example, Google would let loose a rogue AI soon. Given how the LW audience is inclined to act upon ‘mere’ probability estimates, how wouldn’t it be appropriate to bomb Google, given that was the only way to stop them in due time from turning the world into a living hell? And how isn’t this meme, given the right people and circumstances, a great danger? Sure, me saying EY might be a greater danger was nonsense, just said to provoke some response. By definition, not much could be worse than uFAI.
This incident is simply a good situation to extrapolate from. If a thought-experiment can be deemed dangerous enough not merely to be censored and deleted, but for people to be told not even to seek any knowledge of it, much less discuss it, I’m wondering about the possible reaction to an imminent and tangible danger.
Before someone accuses me of ignoring it, I want to address the point that some people suffer psychological distress from this kind of information.
Is active denial of information an appropriate way to handle serious personal problems? If there are people who suffer mental illness due to mere thought-experiments then, I’m sorry, but I think along the same lines as the proponents of deleting information because it increases the chance of an AI going wrong. Namely, once you abandon freedom of expression to some extent, I advocate drawing the line between freedom of information and the protection of individual well-being at this point. That is not to say that I’d go all the way and, for example, advocate depicting cruelty to children.
A delicate issue indeed, but one has to take care not to slide into an extremism that relinquishes the very values it is meant to serve and protect.
Maybe you should read the comments in question before you make this sort of post?
This really isn’t worth arguing and there isn’t any reason to be angry...
You are wrong on both. There is strong signalling going on that gives good evidence regarding both Eliezer’s intent and his competence.
What Roko said matters little; what Eliezer said (and did) matters far more. He is the one trying to take over the world.
I don’t consider frogs to be objects of moral worth. -- Eliezer Yudkowsky
Yeah ok, frogs...but wait! This is the person who’s going to design the moral seed of our coming god-emperor. I’m not sure everyone here is aware of the range of consequences when using quotes like this as corroboration of the correctness of pursuing this route. That is, are we going to replace unfriendly AI with an unknown EY? Are we yet at the point where we can tell EY is THE master who’ll decide what’s reasonable to say in public and what should be deleted?
Ask yourself if you really, seriously believe in the ideas posed on LW, enough to follow them into the realms of radical oppression in the name of good and evil.
There are some good questions buried in there that may be worth discussing in more detail at some point.
I am vaguely confused by your question and am going to stop having this discussion.
Before getting angry, it’s always a good idea to check whether you’re confused. And you are.
P.S. I haven’t read Roko’s post and comments yet but I got a backup of every single one of them. And not just Roko’s, all of LW including EY deleted comments. Are you going to assassinate me now?
It’s also generally impolite (though completely within the TOS) to delete a person’s contributions according to some arbitrary rules. Given that Roko is the seventh highest contributor to the site, I think he deserves some more respect. Since Roko was insulted, there doesn’t seem to be a reason for him to act nicely to everyone else. If you really want the posts restored, it would probably be more effective to request an admin to do so.
I didn’t insult Roko. The decision, and the justification given, seem wholly irrational to me (which is separate from claiming a right to demand that the decision be altered).
It’s ironic that, from a timeless point of view, Roko has done well. Future copies of Roko on LessWrong will not receive the same treatment as this copy did, because this copy’s actions constitute proof of what happens as a result.
(This comment is part of my ongoing experiment to explain anything at all with timeless/acausal reasoning.)
What “treatment” did you have in mind? At best, Roko made an honest mistake, and the deletion of a single post of his was necessary to avoid more severe consequences (such as FAI never being built). Roko’s MindWipe was within his rights, but he can’t help having this very public action judged by others.
What many people will infer from this is that he cares more about arguing for his position (about CEV and other issues) than honestly providing info, and now that he has “failed” to do that he’s just picking up his toys and going home.
I just noticed this. A brilliant disclaimer!
Parent is inaccurate: although Roko’s comments are not, Roko’s posts (i.e., top-level submissions) are still available, as are their comment sections minus Roko’s comments (but Roko’s name is no longer on them and they are no longer accessible via /user/Roko/ URLs).
Not via user/Roko or via /tag/ or via /new/ or via /top/ or via / - they are only accessible through direct links saved by previous users, and that makes them much harder to stumble upon. This remains a cost.
Could the people who have such links post them here?
I don’t really see what the fuss is. His articles and comments were mediocre at best.
I understand. I’ve been thinking about quitting LessWrong so that I can devote more time to earning money for paperclips.
lol
I’m deeply confused by this logic. There was one post where due to a potentially weird quirk of some small fraction of the population, reading that post could create harm. I fail to see how the vast majority of other posts are therefore harmful. This is all the more the case because this breaks the flow of a lot of posts and a lot of very interesting arguments and points you’ve made.
ETA: To be more clear, leaving LW doesn’t mean you need to delete the posts.
I am disappointed. I have just started on LW, and found many of Roko’s posts and comments interesting, consilient with my current views, and a useful bridge between aspects of LW that are less consilient. :(
FTFY
Allow me to provide a little context by quoting from a comment, now deleted, Eliezer made this weekend in reply to Roko and clearly addressed to Roko:
I don’t usually talk like this, but I’m going to make an exception for this case.
Listen to me very closely, you idiot.
[paragraph entirely in bolded caps.]
[four paragraphs of technical explanation.]
I am disheartened that people can be . . . not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.
Although it does not IMHO make it praiseworthy, the above quote probably makes Roko’s decision to mass delete his comments more understandable on an emotional level.
In defense of Eliezer, the occasion of Eliezer’s comment was one in which IMHO strong emotion and strong language might reasonably be seen as appropriate.
If either Roko or Eliezer wants me to delete (part or all of) this comment, I will.
EDIT: added the “I don’t usually talk like this” paragraph to my quote in response to criticism by Aleksei.
I’m not them, but I’d very much like your comment to stay here and never be deleted.
Your up-votes didn’t help, it seems.
Woah.
Thanks for alerting me to this fact, Tim.
Out of curiosity, what’s the purpose of the banning? Is it really assumed that banning the post will mean it can’t be found in the future via other means or is it effectively a punishment to discourage other people from taking similar actions in the future?
Does not seem very nice to take such an out-of-context partial quote from Eliezer’s comment. You could have included the first paragraph, where he commented on the unusual nature of the language he’s going to use now (the comment indeed didn’t start off as you here implied), and also the later parts where he again commented on why he thought such unusual language was appropriate.
I’m still having trouble seeing how so much global utility could be lost because of a short blog comment. If your plans are that brittle, with that much downside, I’m not sure security by obscurity is such a wise strategy either...
The major issue as I understand it wasn’t the global utility problem but the issue that when Roko posted the comment he knew that some people were having nightmares about the scenario in question. Presumably increasing the set of people who are nervous wrecks is not good.
I was told it was something that, if thought about too much, would cause post-epic level problems. The nightmare aspect wasn’t part of my concept of whatever it is until now.
I also get the feeling Eliezer wouldn’t react as dramatically as the above synopsis implies unless it was a big deal (or hilarious to do so). He seems pretty … rational, I think is the word. Despite his denial of being Quirrell in a parent post, a non-deliberate explosive rant and topic banning seem unlikely.
He also mentions that only a certain inappropriate post was banned, and Roko said he deleted his own posts himself. And yet the implication going around is that it was all deleted as administrative action. A rumor started by Eliezer himself so he could deny being “evil,” knowing some wouldn’t believe him? Quirrell wouldn’t do that, right? ;)
I see. A side effect of banning one post, I think; only one post should’ve been banned, for certain. I’ll try to undo it. There was a point when a prototype of LW had just gone up, someone somehow found it and posted using an obscene user name (“masterbater”), and code changes were quickly made to get that out of the system when their post was banned.
Holy Cthulhu, are you people paranoid about your evil administrator. Notice: I am not Professor Quirrell in real life.
EDIT: No, it wasn’t a side effect, Roko did it on purpose.
Indeed. You are open about your ambition to take over the world, rather than hiding behind the identity of an academic.
And that is exactly what Professor Quirrell would say!
Professor Quirrell wouldn’t give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.
cf. Order of the Stick on the double-bluff.
Of course as you know very well. :)
In a certain sense, it is.
Of course, we already established that you’re Light Yagami.
I’m not sure we should believe you.
Technically, you didn’t say “for now”.