Breaking the vicious cycle
You may know me as the guy who posts a lot of controversial stuff about LW and MIRI. I don’t enjoy doing this and do not want to continue with it. One reason is that the debate is turning into a flame war. Another is that I have noticed that it affects my health negatively (e.g. my high blood pressure; I actually suffered a single-sided hearing loss over this xkcd comic on Friday).
This all started in 2010 when I encountered something I perceived to be wrong. But the specifics are irrelevant for this post. The problem is that ever since then, various reasons have made me feel forced to continue the controversy. Sometimes it was the urge to clarify what I wrote; other times I thought it was necessary to respond to a reply I got. What matters is that I couldn’t stop. But I believe that stopping is now possible, given my health concerns.
One problem is that I don’t want to leave possible misrepresentations behind. And there very likely exist misrepresentations. There are many reasons for this, but I can assure you that I never deliberately lied and that I never deliberately tried to misrepresent anyone. The main reason might be that I feel very easily overwhelmed and never had the ability to force myself to invest the time necessary to do something correctly if I don’t really enjoy doing it (for the same reason I probably failed school). This means that most of my comments and posts were written in a tearing hurry, akin to a reflexive retraction from a painful stimulus.
<tldr>
I hate this fight and want to end it once and for all. I don’t expect you to take my word for it. So instead, here is an offer:
I am willing to post counterstatements, endorsed by MIRI, of any length and content[1] at the top of any of my blog posts. You can either post them in the comments below or send me an email (da [at] kruel.co).
</tldr>
I have no idea if MIRI believes this to be worthwhile, but I couldn’t think of a better way to solve this dilemma in a way that everyone can live with. I am open to suggestions that don’t stress me too much (including suggestions on how to prove that I am trying to be honest).
You obviously don’t need to read all my posts. It can also be a general statement.
I am also aware that LW and MIRI are bothered by RationalWiki. As you can easily check from the fossil record, I have at points tried to correct specific problems. But, for the reasons given above, I have problems investing the time to go through every sentence to find possible errors and correct them in such a way that the edit is not reverted and that people who feel offended are satisfied.
[1] There are obviously some caveats regarding the content, such as no nude photos of Yudkowsky ;-)
To be honest, I had you pegged as being stuck in a partisan spiral. The fact that you are willing to do this is pretty cool. Have some utils on the house. I don’t know if officially responding to your blog is worth MIRI’s time; it would imply some sort of status equivalence.
Also, you published some very embarrassing quotes from Yudkowsky. I’m guessing you caused him quite a bit of distress, so he’s probably not inclined to do you any favors. Mining someone’s juvenilia for outrageous statements is not productive – I mean he was 16 when he wrote some of the stuff you quote. I would remove those pages. Same with the usenet stuff – I know it was posted publicly but it feels like furtively-recorded conversations to me all these years later. Stick to arguments against positions MIRI and Yudkowsky currently hold. Personally I’ve moved from highly-skeptical of MIRI to moderately approving. I made this comment a year ago:
And MIRI has stayed on course and is becoming a productive think tank with three full-time researchers and, it seems to me, a highly competent CEO. It is a very different organization now than the one you started out criticizing.
For the record, I genuinely object to being thought of as a “highly competent CEO.” I think “non-natural CEO working hard and learning fast and picking up lots of low-hanging fruit but also making lots of mistakes along the way because he had no prior executive experience” is more accurate. The good news is that I’ve been learning even more quickly since Matt Fallshaw joined the Board, since he’s able and willing to put in the time to transfer to me what he’s learned from launching and running multiple startups.
But that’s exactly what the Dunning-Kruger effect would lead us to expect a highly competent CEO to say! /s
To be honest, I didn’t mean much by it. Just that MIRI has been more impressive lately, and presumably a good portion of this is due to your leadership.
No. It would lead us to expect that the top quartile would rank themselves well above the median but below their actual scores.
(And then to ask why we’re thinking in such coarse granularity as quartiles.)
So this comment here about a basically unrelated matter to the topic in the title post (but one that it isn’t strange at all that you would reply to of course, since it’s about you), is it essentially a way of communicating that you’ve seen XiXiDu’s post and have chosen to ignore it? I mean, I’m sure it could be very simple and you’re just too busy to deal with this kind of thing, but I can’t help but to notice your appearance in the thread and your conspicuous lack of response of any kind, especially since, you know, you’re the CEO of the organisation XiXiDu is attempting to make peace with.
Hopefully I haven’t been too forward with what I’ve suggested here, and I’m also sorry if I’m speculating in an entirely inaccurate direction.
I haven’t figured out what to say yet. :)
The short version is that I’m not sure we want to make counterclaims at the top of Alexander’s blog posts. I mostly just wish Alexander was more consistently constructive in his criticism, like many of our other critics are. I think I’m far from alone in the impression that his criticisms are an uneven mix of kinda fair criticisms, deliberate straw men (“the intelligence explosion hypothesis is a tautology”), largely irrelevant character assassination (digging up embarrassing things Eliezer wrote when he was 16), and more. But I was resisting saying even this much, because I worry about putting Alexander in a position where he again feels he’s being misrepresented and needs to defend himself.
Well, I guess let’s see what happens. But I can’t promise I’ll think it’s wise for me to reply further.
To make the first step and show that this is not some kind of evil ploy, I now deleted the (1) Yudkowsky quotes page and (2) the post on his personality (explanation on how that post came about).
I realize that they were unnecessarily offensive and apologize for that. If I could turn back the clock I would do a lot differently and probably stay completely silent about MIRI and LW.
Thanks.
Things seem basically settled over here, so I’ll just say: kudos for your efforts to break the vicious cycle!
I think “highly competent CEO” is judged at least as much by results as by comparison with a hypothetical superb CEO.
Still, it would be nice if XiXiDu put some kind of disclaimer he came up with himself at the top of his posts.
If I post an embarrassing quote by Sarah Palin, then I am not some kind of school bully who likes causing people distress. Instead I highlight an important shortcoming of an influential person. I have posted quotes of various people other than Yudkowsky. I admire all of them for their achievements and wish them all the best. But as influential people they have to expect that someone might highlight something they said. This is not a smear campaign.
As far as I can tell, Yudkowsky basically grew up on the internet. I think it is more like you went through all the copies of Palin’s school newspaper, and picked up some notes she passed around in class, and then published the most outrageous things she said in such a way that you implied they were written recently. I think this goes against some notion of journalistic tact.
This is exactly the kind of misrepresentation that makes me avoid deleting my posts. Most of the most outrageous things he said were written in the past ten years.
I suppose you are partly referring to the quotes page? Please take a look, there are only two quotes that are older than 2004, for one of which I explicitly note that he doesn’t agree with it anymore, and a second which I believe he still agrees with.
Those two quotes dated before 2004 are the least outrageous. They are there mainly to show that he has long believed in singularitarian ideas and in his ability to save the world. This is important in evaluating how much of his later arguments are rationalizations of those early beliefs, which in turn matters because he is actually asking people for money and giving a whole research field a bad name with his predictions about AI.
This is the most outrageous one to me:
And it’s clearly the exact opposite of what present Eliezer believes.
The stuff that bothers me are Usenet and mailing list quotes (they are equivalent to passing notes and should be considered off the record) and anything written when he was a teenager. The rest, I suppose, should at least be labeled with the date they were written. And if he has explicitly disclaimed the statement, perhaps that should be mentioned, too.
Young Eliezer was a little crankish and has pretty much grown out of it. I feel like you’re criticising someone who no longer exists.
Also, the page where you try to diagnose him with narcissism just seems mean.
I can clarify this. I never intended to write that post but was forced to do so out of self-defense.
I replied to this comment whose author was wondering why Yudkowsky is using Facebook more than LessWrong these days. To which I replied with an on-topic speculation based on evidence.
Then people started viciously attacking me, to which I had to respond. In one of those replies I unfortunately used the term “narcissistic tendencies”. I was then again attacked for using that term. I defended my use of that term with evidence, the result of which is that post.
What do you expect me to do when I am mindlessly attacked by a horde of people? Just leave it at that and let my name be dragged through the dirt?
Many of my posts and comments are direct responses to personal attacks on me from LessWrong members.
So let me get this straight: you did a psychiatric diagnosis over the internet, and instead of saying, ‘obviously I’m using the term colloquially’, you provided evidence.
...
and then you are surprised when you get attacked, and even now characterize these attacks as coming from a mindless horde...
when the horde was actually 4 people, only one post was against you personally as opposed to being against that one thing you said, and there were roughly 2 others on your side. And your comments there are upvoted.
Yes, it was a huge overreaction on my side and I shouldn’t have written such a comment in the first place. It was meant as an explanation of how that post came about, it was not meant as an excuse. It was still wrong. The point I want to communicate is that I didn’t do it out of some general interest to cause MIRI distress.
I apologize for offending people and for overreacting to a situation that I perceived the way I described, but which was, as you wrote, not actually that way. I already deleted that post yesterday.
OK!
You do not stand to Eliezer as you stand to Sarah Palin (as far as public figures go). The equivalent would be a minor congressman consistently devoting his speaking time to highlighting all the stupid things Sarah Palin has said (and retracted). I’m pretty sure such a congressman would meet far worse consequences than you have.
EDIT: Not sure why this comment is being downvoted, but as a clarification: I merely meant that the difference in social status between Alex and Sarah is bigger than that between Eliezer and him. When the gap is big enough, it doesn’t matter what one says about the other, but this is not the case here. Why is that offensive/such a bad idea?
Upvoted for the reflection and change of strategy. Congratulations for using the skills we admire, in very difficult circumstances. On meta level, I admire this article. (EDIT: Although your further comments in this thread kinda undermine it.)
On the object level, uhm, your proposal is between you and MIRI, none of my business. (EDIT: But I think it would be wiser for MIRI to not respond to you officially.)
The health effects you describe are scary. I never had anything this intense, but I think the closest approximation was some political debates (not on LW) which made my heart beat faster and made me feel I had to scream or jump, preferably both. A nice adaptation for a chimp, very unhealthy for a human with a sedentary lifestyle.
Please take care of your health!
The easiest way to not continue with something is to not continue with it.
Someone Is Wrong On The Internet can be a surprisingly powerful force.
The person who is Being Wrong is not exerting any force. The force is from within. This is the only place that a solution can come from.
The subject of “can be” is the entire phrase “Someone Is Wrong On The Internet”, not just “someone”. And this “external events aren’t a cause of your mental state” stuff is bullshit. Trying to hack your mind is hardly the only solution; avoiding triggering interactions can also be a solution, and likely a more effective one.
Er, status-quo bias?
I respect both updates and hostile ceasefires.
You can update by posting a header to all of your blog posts saying, “I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret.” If that is how you feel and that is what you do, I will treat with you starting from scratch in any future endeavors. I’ve been stupid too, in my life. (If you then revert to pattern, you do not get a second second chance.)
I have not found it important to say very much at all about you so far, unless you show up to a thread in which I am participating. If carrying on your one-sided vendetta is affecting your health and you want to declare a one-sided ceasefire for instrumental reasons, and you feel afraid that your brain will helplessly drag you back in if anyone mentions your name, then I state that: if you delete your site, withdraw entirely from all related online discussions, and do not say anything about MIRI or Eliezer Yudkowsky in the future, I will not say anything about Xixidu or Alexander Kruel in the future. I will urge others to do the same. I do not control anyone except myself. I remark that you cannot possibly expect anything except hostility given your past conduct and that feeding your past addiction by posting one little comment anywhere, only to react with shock as people don’t give you the respect to which you consider yourself entitled, is likely to drag you back in and destroy your health again.
Failing either of these actions:
I am probably going to put up a page about Roko’s Basilisk soon. I am not about to mention you just to make your health problems worse, nor avoid mentioning you if I find that a net positive while I happen to be writing; your conduct has placed you outside of my circle of concern. If the name Alexander Kruel happens to arise in some other online discussion or someone links to your site, I will explain that you have been carrying on a one-sided vendetta against MIRI for unknown psychological reasons. If for some reason I am talking about the hazards of my existence, I might bring up the name of Alexander Kruel as that guy who follows me around the ’Net looking for sentences that can be taken out of context to add to his hateblog, and mention with some bemusement that you didn’t stop even after you posted that all the one-sided hate was causing you health problems. Either a ceasefire or an update will prevent me from saying any such thing.
I urge you to see a competent cognitive-behavioral therapist and talk to them about the reason why your brain is making you do this even as it destroys your health.
I have written this note according to the principles of Tell Culture to describe my own future actions conditional on yours. Reacting to it in a way I deem inappropriate, such as taking a sentence out of context and putting it on your hateblog, will result in no future such communications with you.
I apologize for any possible misunderstanding in this comment. My reading comprehension is often bad.
I know that in the original post I offered to add a statement of your choice to any of my posts. I stand by this, although I would phrase it differently now. I would like to ask you to consider that there are also personal posts which are completely unrelated to you, MIRI, or LW, such as photography posts and math posts. It would be really weird and confusing to readers to add your suggested header to those posts. If that is what you want, I will do it.
You also mention that I could delete my site (I already deleted a bunch of posts related to you and MIRI). I am not going to do that, as it is my homepage and contains completely unrelated material. I am sorry if I possibly gave a false impression here.
You further talk about withdrawing entirely from all related online discussions. I am willing to stop adding anything negative to any related discussion. But I will still use social media to link to material produced by MIRI or LW (such as MIRI blog posts) and to professional third-party critiques (such as a possible evaluation of MIRI by GiveWell) without adding my own commentary.
I stand by what I wrote above, irrespective of your future actions. But I would be pleased if you maintain a charitable portrayal of me. I have no problem if you write in the future that my arguments are wrong, that I have been offensive, or that I only have an average IQ, etc. But I would be pleased if you abstain from portraying me as an evil person, or as someone who deliberately lies. Stating that I misrepresented you is fine. But suggesting that I am a malicious troll who hates you is something I strongly disagree with.
As evidence that I mean what I write I now deleted my recent comments made on reddit.
If that’s the only concern I think the solution is quite easy. You have all the MIRI related material on one page, so you can delete it while leaving the other stuff on your homepage untouched.
Since you have not yet replied to my other comment, here is what I have done so far:
(1) I removed many more posts and edited others in such a way that no mention of you, MIRI or LW can be found anymore (except an occasional link to a LW post).[1]
(2) I slightly changed your given disclaimer and added it to my about page:
The reason for this alteration is that my blog has been around since 2001, and for most of the time it did not contain any mention of you, MIRI, or LW. For a few years it even contained positive referrals to you and MIRI. This can all be checked by looking at e.g. archive.org for domains such as xixidu.com. I estimate that much less than 1% of all content over those years has been related to you or MIRI, and even less was negative.
But my previous comment, in which I asked you to consider that your suggested header would look really weird and confusing if added to completely unrelated posts, still stands. If that’s what you desire, let me know. But I hope you are satisfied with the actions I took so far.
[1] If I missed something, let me know.
I don’t have time to evaluate what you did, so I’ll take this as a possible earnest of a good-faith attempt at something, and not speak ill of you until I get some other piece of positive evidence that something has gone wrong. A header statement only on relevant posts seems fine by me, if you have the time to add it to items individually.
I very strongly advise you, on a personal level, not to talk about these things online at all. No, not even posting links without discussion, especially if your old audience is commenting on them. The probability I estimate of your brain helplessly dragging you back in is very high.
This will be my last comment and I am going to log out after it. If you or MIRI change your mind, or discover any evidence “that something has gone wrong”, please let me know by email or via a private message on e.g. Facebook or some other social network that’s available at that point in time.
Thanks.
I noticed that there is still a post mentioning MIRI. It is not at all judgmental or negative, but rather highlights a video that I captured of a media appearance of MIRI on German/French TV. I understand this sort of post not to be relevant for either deletion or any sort of header.
Then there is also an interview with Dr. Laurent Orseau about something you wrote. I added the following header to this post:
Your updates to your blog as of this post seem to replace “Less Wrong”, or “MIRI”, or “Eliezer Yudkowsky”, with the generic term “AI risk advocates”.
This just sounds more insidiously disingenuous.
At least now when I cite Eliezer’s stuff in my doctoral thesis, people who don’t know him—there are a lot of them in philosophy—will not say to me “I’ve googled him and some crazy quotes came up eventually, so maybe you should avoid mentioning his name altogether”. This was a much bigger problem for me than it sounds. I had to do all sorts of workarounds to use Eliezer’s ideas as if someone else had said them, because I was advised not to cite him (and the main, often the only, argument was in fact the crazy-quotes thing).
There might be some very small level of uncertainty as to whether Alex’s behaviour had a positive or negative overall impact (maybe it made MIRI slightly update in the right direction, somehow). But I can say with near certainty that it made my life objectively worse in very quantifiable measures (i.e. I lost a month or two with the workarounds, and would have continued to lose time with this).
How’s the situation now w/ Superintelligence published? Do you think it’d be a good idea for someone to publish a bunch of Eliezer’s ideas passing them off as their own to solve this problem?
I made my above comment because I knew of at least one clear instance where the reason I had to do the workaround was someone who found Alex’s stuff. But things haven’t improved as much as I anticipated in my field (Applied Ethics). These things take time, even if Alex’s stuff were the only main cause. Looking back, I also think part of the workarounds were due more to having to relate the discussion to someone in my field who wrote about the same issue (Nick Agar) than to having to avoid mentioning Eliezer too much.
I see a big difference in the AI community. For instance, I was able to convince a very intelligent, previously long-time superintelligence sceptic, CS grad student of superintelligence’s feasibility. But I am not that much involved with AI directly. What is very clear to me—and I am not sure how obvious this already is to everyone—is that Nick’s book had an enormous impact. Superintelligence scepticism is gradually becoming clearly a minority position. This is huge and awesome.
I don’t think simply publishing Eliezer’s ideas as your own would work; there would need to be a lot of refining to turn it into a publishable philosophy paper. I did this refining of the complexity thesis during my thesis’ second chapter. Refining his ideas made them a lot different, and I applied them to a completely different case (moral enhancement). Note that publishing someone else’s idea as your own is not a good plan, even if the person explicitly grants you permission. But if you are refining them and adding a lot of new stuff you can just briefly mention him and move along—and hopefully that won’t do too much reputation-damage. I am still pondering how and which parts of this chapter to publish. In case you are interested, you can find a presentation summarizing its main arguments here: https://prezi.com/tsxslr_5_36z/deep-moral-enhancement-is-risky/
Agreed, if there are no other indications of a change of mind. Everyone who reads your blog is going to know who the “AI risk advocates” are. Perfectly fine if there’s some other indication.
I deleted any post that could be perceived as offensive to MIRI, Yudkowsky, or LW, and only left fully general posts pertaining to AI risk. Many of those posts never mentioned MIRI or LW in the first place. What exactly is “insidiously disingenuous” about that?
We know that the sparring between Kruel and MIRI has caused a great deal of harm, but it should still be possible to conduct such criticism in a less antagonistic fashion.
Not even as a tie-breaker? There are nicer ways to say this.
This advice seems unsolicited and getting psychological advice from your past competitor seems unlikely to be helpful on an outside view.
This seems to be written to make Alexander aware of his risk of later embarrassing himself. Telling someone who announces that they have mental health difficulties that have been exacerbated by their sparring with your writing that they’re likely to face embarrassment if they relapse is somewhat (and unnecessarily) hostile, especially as they probably already know.
For what it’s worth, I think that as well as producing much of the least helpful criticism of MIRI, Alexander’s writing has also led me to think of ways that MIRI and LessWrong could improve (getting more publications and conventional status), and to help AI risk advocates anticipate kinds of criticism that they might encounter in the future. I also remember that he’s given useful replies when I’ve previously asked him about the x-risk reduction landscape in Germany.
LessWrong needs critics, and also needs basic kindness norms. In light of that, it seems that all that needed to be said about Alexander breaking the vicious cycle was “Good”.
This phrasing feels a little clumsy/rushed to me: “selectively” + “onesidedly” is redundant, and “collecting all of” contradicts “selectively”. Perhaps worth improving if it’s to be copypasted over a whole bunch of posts. How about “cherry-picking his quotes” or something?
Disclaimer: foreign speaker here.
“You can update by posting a header to all of your blog posts saying, ‘I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret.’”
Wow, just wow. Cult leader demands Stalin-style self-critique on every page (no sane person would consider it reasonable) and censoring of all posts related to Less Wrong after campaign of harassment.
To those who seem to not like the manner in which XiXiDu is apologizing: If someone who genuinely thinks the sky is falling apologizes to you while still wearing their metal hat—then that’s the best you can possibly expect. To reject the apology until the hat is removed is…
I’m not MIRI affiliated, but as a member of the LessWrong forum, and talking for myself alone, I’ll just repeat what I’ve said before: There’s only so many times someone can call me a brainwashed cultist, before I stop forgiving them.
You’ve spent the past few years insulting and mocking people for having different opinions than you. That’s it. That’s the entirety of the crime of LessWrong/MIRI: you’ve not produced a hint of unethical or dishonest conduct in regards to any of MIRI’s and/or LessWrong’s doings, but you bash them viciously for having different opinions.
LessWrongers always treated you (and Rationalwiki too), and is still treating you and any of your different opinions, much more civilly than you (or Rationalwiki) ever did us and any of ours. So you getting health related issues as a result of the viciousness you perpetrate—okay, that’s like repeatedly punching someone and then complaining that your fist has started to hurt.
We don’t have, nor ever had, a “Why Alexander Kruel/Xixidu sucks” page that we can take down. You are the one with the bazillion “Why LessWrong/MIRI sucks” pages. Unlike you have done with EY, I haven’t even screenshotted the comments by you that you’ve later chosen to take down because you found them embarrassing to yourself. Gee, it must be nice NOT having someone devoted to mocking you.
I wish you good health, as a general moral principle of my humanism. But I also care about the problems you caused for the targets of your viciousness.
That’s implying a false equivalence. If I make a quotes page of a public person, a person with far-reaching goals, in order to highlight problematic beliefs this person holds, beliefs that would otherwise be lost in a vast amount of other statements, then this is not the same as making a “random stranger X sucks” page.
Stressful fights adversely affect an existing condition.
I have maybe deleted 5 comments and edited another 5. If I detect other mistakes I will fix them. You make it sound like doing so is somehow bad.
You are one of the people spouting comments such as this one for a long time. I reckon you might not see that such comments are a cause of what I wrote in the past.
Yes, my first encounter with you was when I bashed you for your unfair criticism of Rationalwiki and your unfair support of Eliezer Yudkowsky, yet somehow you failed to call me a brainwashed cultist of Rationalwiki, and you failed to launch a website devoted to how much your bashing of Rationalwiki is justified because they’re horrible cultist people out to brainwash you.
Oh, I’ve actually wondered occasionally whether my bashing you for being an Eliezer fanboy drove you to try to prove how much of an Eliezer fanboy you weren’t, and so to launch your rabid obsession against him. But even if that’s the case, I don’t confuse causality with moral blame; reasonable people wouldn’t attack third parties for any injustice I performed against them, they’d be angry at me alone—and you never mentioned me once in all your absurd, unjust diatribes against LessWrong and MIRI.
So I think that this is just justification after the fact to blame your attack on someone acting in defense. I’ve only started commenting against you (in support of LessWrong/MIRI, as opposed to in my defense of Rationalwiki) after dozens of mockeries and other attacks by you against the forum, so I don’t think you can believably claim me as a significant ‘cause’ in the manner you imply.
Then again, LW does not have a “Why Anything Sucks” page as far as I’m aware. There are plenty of people/organizations out there with whom LW/MIRI disagree, and who are more visible than you, but I don’t think LW has ever gone out of its way to make posts on why those people/organizations are bad. The fact is that in order to promote good discussion, you just don’t want to have a page saying that the members of website/organization with whom you’re having the discussion suck. (And while you might call it “highlighting problematic beliefs”, the simple fact is that much of what you’ve posted about LW/MIRI is mean-spirited and hurtful, both of which are qualities that I don’t think most “highlighting problematic beliefs” pages have.)
To be clear: much of your criticism is constructive criticism, possibly valid. Another significant portion is neither constructive nor valid. But regardless of whether it’s valid or not, you do not want to be rude or confrontational about it. If your intent is to improve LW/MIRI, then you want to phrase your criticisms in a way that makes them pleasant to engage with. From what I’ve been able to tell based on your posts and comments, both here and elsewhere, arguing with you is generally not a fun thing to do. Do you think people will be more receptive to stuff that’s phrased aggressively, or less receptive? I have very little to say on the object level in response to your concerns. However, if your goal is to foster improvement, then it’s probably a good idea to present the objections without the snideness. It makes it a lot more comfortable for both sides of the discussion if you do so.
You said that engaging in discussion with representatives of LW/MIRI is stressful for you. It doesn’t have to be.
There is one about Stephen J. Gould, but I don’t remember any other.
Mainstream philosophy, maybe?
Ahem...Philosophy, a Diseased Discipline.
I would not characterize that as a post about why philosophy “sucks”, though… more a post on its shortcomings and how they might be overcome. Maybe we’re just different in the way we interpret words, but in my view, “sucks” is connotatively a lot worse than “diseased”, because calling something “diseased” implies that there’s a “cure”, whereas “sucks” just… sucks.
(Edit: Huh. This comment seems to be getting pretty steadily downvoted. Is there something people didn’t like about it?)
I didn’t downvote, but I’ll venture a guess:
You pattern-matched to “commenter refused to update on counterexample and is now in denial”. Also, people probably don’t want to end up in a dead-end argument about definitions of words.
Perhaps a better way would have been to keep our eye on the ball: never mind what “sucks” does or doesn’t mean, the original point of saying “we don’t have an X sucks page” was that we don’t have an axe to grind against any group of people in particular, and don’t keep attacking them in mean-spirited ways that don’t promote good discussion. It would take thin skin indeed to feel personally attacked if you happen to work in philosophy and read that Diseased Discipline article. Nor would you feel, after browsing LW for a while, that the people behind it are devoting serious energy to coming up with bad things to say about you. This I think remains true even if we were to agree that we do have a couple of “X sucks” pages.
Congratulations for updating. Writing that post probably wasn’t easy.
I’m not sure he updated on any factual point. I read this as wanting to tap out. [ETA: In a somewhat more formalized way than usual]
I am usually in favor of tapping out of intractable arguments.
I’m approaching the situation in a solution-oriented way. Updating on one’s method of dealing with the issue is already a lot.
XiXiDu is at least updating on the cost/benefit, for him, of continuing to write on the topic.
Thank you for extending an olive branch! ::sends magic positive feedback rays::
This is an important point brought up in a comment. Even though XiXiDu isn’t a true outside perspective by any means, that’s how I would imagine many newcomers reacting, at first glance.
What we need to keep in mind is that the inferential gap between LW’s claims and the general populace is gargantuan. We’re dealing with people who often still believe in a man in the sky. Even if we restricted the target audience to the “decision makers”, who are generally better at compartmentalizing their unreasonable beliefs away, the inferential gap is still very large. Especially because we’re not talking about an unbiased audience. There are strong economic and reputational incentives to pumping out the most intelligent artificial agent, and very weak incentives (who wants to anonymously, potentially, save the planet when they could get rich instead?) to curtail one’s efforts due to friendliness considerations. Even in a best-case scenario, the world is largely doomed if this goes the same way that Tragedies of the Commons usually go.
Yeah, people at MIRI know about the enormity of the task, and yet we should perpetually remind ourselves of it, because most of the claims are just so self-evidently obvious that it’s easy to forget that they’re like an ugh-field to outsiders who don’t want to slay their holy cows and are looking for some motivated-cognition getaway. It’s easy to forget that what is akin to 2+2=4 (e.g., orthogonality thesis) for us is “what? crackpot!” territory for others (even if it’s getting slightly more mainstream-palatable, see e.g. Elon Musk’s recent comments).
Therefore I think that criticisms—immediately available, immediately answered—of those claims are really important. Any official document should have a highlighted link to, if not an appendix of, the most convincing criticisms of the new claim, together with answers. If we lack the manpower, even just encouraging top-level authors to compile criticisms from the comments, or to include a final section with their own best “devil’s advocate” arguments and their responses, would go a long way.
The people who need convincing aren’t the ones who are nodding along anyways. It’s those who suspiciously narrow their eyes while reading, going “surely they are crackpots”, then come up with some convenient “unaddressed/devastating criticism” which serves as a pretense to shake their heads and close the site. But—if there are immediate criticisms, steelmanned, and addressed, a significant fraction might come around. At least their go-to excuse would be invalidated.
A convincing narrative is even more so if it has convincing critics, who are convincingly addressed. As much as XiXiDu may have been considered a thorn in MIRI’s side, the value of publicly addressed criticisms in strengthening MIRI’s arguments may have been underestimated. Also, reporters who during their “research” come across XiXiDu’s blog and see his criticisms addressed, even if only by linking to LW articles, will have a harder time with their usual “lol look at the nerd rapture” hack jobs.
ETA: Sorry for the subpar phrasing, my writing environment is … not … ideal.
I agree with much of this, but it seems like Eliezer (and MIRI?) mostly want to reach people with strong mathematical talent. We observe much more interest among such people since Eliezer started; in particular he trolled Wei Dai into creating UDT. I haven’t looked, but I would guess XiXiDu has criticized him for some part of the way he proposed TDT with the associated trolling of mathematicians.
Just out of curiosity, what do you mean when you say that Eliezer “trolled Wei Dai” (and other mathematicians)?
“I do have a proposed alternative ritual of cognition which computes this decision, which this margin is too small to contain”
Why was this trolling? This was in fact true, although Wei Dai’s UDT ended up giving rise to a better framework for future and more general DT work.
I think the thing to remember is that, when you’ve run into contexts where you feel like someone might not care that they’re setting you up to be judged unfairly, you’ve been too overwhelmed to keep track of whether or not your self-defense involves doing things that you’d normally be able to see would set them up to be judged unfairly.
You’ve been trying to defend a truth about a question—about what actions you could reasonably be expected to have been sure you should have taken, after having been exposed to existential-risk arguments—that’s made up of many complex implicit emotional and social associations, like the sort of “is X vs. Y the side everyone should be on?” that Scott Alexander discusses in “Ethnic Tension and Meaningless Arguments”. But you’ve never really developed the necessary emotional perspective to fully realize that the only language you’ve had access to, to do that with, is a different language: that of explicit factual truths. If you try to state truths in one language using the other without accounting for the difference, blinded by pain and driven by the intuitive impulse to escape the pain, you’re going to say false things. It only makes sense that you would have screwed up.
Try to progress to having a conscious awareness of your desperation, I mean a conscious understanding of how the desperation works and what it’s tied to emotionally. Once you’ve done that, you should be able to consciously keep in mind better the other ways that the idea of “justice” might also relate to your situation, and so do a lot less unjust damage. (Contrariwise, if you do choose to do damage, a significantly greater fraction of it will be just.)
It might also help to have a stronger deontological proscription against misrepresenting anyone in a way that would cause them to be judged unfairly. That proscription would put you under more pressure to develop this kind of emotional perspective and conscious awareness, although it would do this at the cost of adding extra deontological hoops you have to jump through to escape the pain when it comes. If this leaves you too bound-up to say anything, you can usually go meta and explain how you’re too bound-up, at least once you have enough practice at explaining things like that.
I’m sorry. I claim to have some idea what it’s like.
(Also, on reflection, I should admit that mostly I’m saying this because I’m afraid of third parties keeping mistakenly unfavorable impressions about your motives; so it’s slightly dishonest of me to word some of the above comments as simply directed to you, the way I have. And in the process I’ve converted an emotional truth, “I think it’s important for other people not to believe as-bad things about your motives, because I can see how that amount of badness is likely mistaken”, into a factual claim, “your better-looking motives are exactly X”.)
I read this (from here), smiled and thought, “there’s some karma for you. In no way metaphysical.”
Then, I thought, “we have a literal karma system on this site.” I checked. He has karma in the quadruple digits, 75 percent positive. I don’t understand. If XiXiDu was so abusive, why is he so upvoted? It seems like he must have said things worth saying, perhaps useful critiques. Is the karma system broken (or at least not designed to deal with this sort of thing), or are the accusations not as bad as they seem? Someone explain my confusion.
XiXiDu is generally a smart person and most of his comments are very good. He has this one pet peeve though.
LW karma is not a vote on the person, it’s a collection of the votes on their individual comments. Most of his comments are good. Some of them are… controversial, to put it mildly.
75 percent positive means 25 percent negative. To get a worse outcome, a person would have to be unable to post good comments, or unable to stop bringing the controversial topic everywhere, or unwilling to participate in debates unrelated to the controversial topic. In some situations XiXiDu seems unable to resist, but he usually contributes productively in completely unrelated articles.
XiXiDu has repeatedly claimed that he is not a smart person (unless I have confused him with someone else). (I didn’t believe him but his claim is at least somewhat relevant.)
False humility? Countersignalling? Depression? I don’t want to attempt an internet diagnosis or mind-reading, but from my view these options seem more likely than the hypothesis of low intelligence.
(Unless the context was something like “intelligence lower than extremely high”; i.e. something like “I have IQ 130, but compared with people with IQ 160 I feel stupid”.)
From the context I ruled out countersignalling and for what it is worth my impression was that the humility was real, not false. Given that I err on the side of cynical regarding hypocrisy and had found some of XiXiDu’s comments disruptive I give my positive evaluation of Xi’s sincerity some weight.
I agree that the hypothesis of low intelligence is implausible despite the testimony. Additional possible factors I considered:
Specific weakness in intelligence (e.g. ADHD, dyslexia or something less common) that produced low self-esteem about intelligence despite overall respectable g.
Perfectionistic or obsessive tendencies which would lead to harsh self judgements relative to an unrealistic ideal. (Potentially similar to the kind of tendencies which would cause the idealism failure mode described in the opening post.)
Not realising just how stupid ‘average’ is. (This is a common error. This wasn’t the first time I’d called ‘bullshit’ on claims to be below-average IQ. Associating with highly educated nerds really biases the sample.)
That would have been more accurate, but no, the context ruled that out.
I’m curious whether XiXiDu’s confidence/objective self evaluation has changed over the intervening years. I hope it has.
This seems weird to me. While I acknowledge that there are widespread social stigmas associated with broadcasting your own intelligence, it hardly seems productive to actively downplay your intelligence either. XiXiDu does not strike me as someone who is of average or below-average intelligence—quite the opposite, in fact. So it seems odd that he would choose to “repeatedly [claim] that he is not a smart person”. Is there some advantage to be gained from saying that kind of thing that I’m just not seeing here?
It seemed weird enough to me that it stuck in my memory more clearly than any of his anti-MIRI comments.
I concur.
My best guess is an ethical compulsion towards sincere expression of reality as he perceives it. For what it is worth that sincerity did influence my evaluation of his behaviour and personality. XiXiDu doesn’t seem like a troll, even when he does things that trolls also would do. My impression is that I would like him if I knew him in person.
Explained by how most of his abuse is not occurring in comments here. Here he often plays at politeness. Then goes to his own blog or other forums, and there we are all a mass of creepy dangerous brainwashed naive cultists.
Now that’s a shitty thing to say, regardless of where one stands on the issue. Wouldn’t you say* that life is too short to be happy about other people’s lives getting shorter? “Haha, my ideological opponent will lose our argument, by ways of dying first!” (Not to slippery slope you.) Also, let’s not do the whole “abusive” reference class. That term has been so dragged through the mud via cultural appropriation by the malcontent that its use is triggering me.
* That being said, upvoted for how actual human beings think and feel, as opposed to what we’re publicly supposed to portray.
If someone gets forced by his System 1 to quit making an argument, that’s not only about his life getting shorter. In general, when someone identifies the reasons for a health issue and makes a step to solve the issue, that’s no reason to be sad.
75% positive is not a high ratio of positive karma. It’s also not like every single comment he wrote on LW is flawed. Most of the problematic posts are also written outside of LW.
On a forum called “LessWrong”, where you can infer that people prefer not to hold incorrect beliefs, you can understand how critics who point out when you are being wrong so that you can be less wrong would be upvoted.
Plus, I don’t think he has been uniformly abusive—times when he has gone ‘over the line’ do not represent a significant proportion of his postings on the site. Or so I perceive.
Abusive?
Responding to Capla’s quote:
I was a little bit at first, but then I tried clicking “random page” a few times to get a sense of what RationalWiki is like as a whole. Other than stubs, every page I landed on contained an attack of some sort. Being upset about a RationalWiki entry being unfair and negative is… like being upset about Ambrose Bierce’s The Devil’s Dictionary being linguistically inaccurate. It doesn’t really matter and that’s not really what they do.
I’m bothered by it more than you are I guess. I mean, for people already involved in the rationality community maybe RationalWiki can just be seen as some silly vindictive website dressed up as a place to learn. But I feel like RationalWiki has decent pagerank and random people do get sent there in google searches. To have that site be the first or one of the first introductions a person has to a given rationality topic seems pretty destructive.
Hm....The Devil’s Dictionary is not actually a dictionary, and RationalWiki is....not actually rational! Works for me.
At least RationalWiki is a wiki, so they got 50% right.
I wonder if the Devil’s Dictionary is truly intended for, belonging to, or written by a Devil then.
They are a “wiki” only in the sense that they use technology for collaborative content. But if you ever try to register an account and make edits, you will quickly see your contributions reverted, however factually accurate they may be (see e.g. the history page of the ‘effective altruism’ entry).
So I think it’s fair to say that RationalWiki is like the Holy Roman Empire: neither rational, nor a wiki.
There’s no “fight”. You’ve been a very aggressive and mean-spirited critic of LW/MIRI/EY for a few years. Doesn’t mean that there’s a fight. Doesn’t mean anyone “wins” if, say, you shut up and go away.
Your suggestion is not constructive, because coming up with retorts to mean-spirited past posts and endorsing them would be a poor use of MIRI’s time, and would only add to drama rather than reduce it. Here’s what you should do instead:
First, consider just going away. It may be best for your physical and mental health to stay away from LW and LW-related topics. Delete your old posts, forget you ever cared about this stuff, take up some other hobbies, etc. If you feel you can’t, presumably because you think these issues are really important, read on.
Come up with a generously-sized kindly-worded update that negates the meanness and stick it on top of your relevant past posts. E.g. if I were in your position I would write something like “I wrote the post below during years in which, I now recognize, I was locked in a venom-filled flamewar against a community which I actually like and appreciate, despite what I perceive as its faults. I do not automatically repudiate my arguments and factual points, but if you read the below, please note that I regret the venom and the personal attacks and that I may well have quote-mined and misrepresented persons and communities. I now wish I wrote it all in a kinder spirit”.
Continue participating on LW as you desire, trying your best to be kind and not get into drama. Plenty of people manage to be skeptical of MIRI/EY and criticize them here without being you. (If you’re not sure you can do this well, ask some regular(s) to help you out. Precommit that if the people you asked PM you about a future LW comment or blog post of yours saying you’re being an asshole, you’ll believe them and mend it.)
Accept that some people will continue to hate/dislike/hold a grudge against you. Issue private apologies to them if you feel you should, but don’t do it publicly (because drama). If that doesn’t help, accept and move on.
Sounds good. Thanks.
Hmm...the state of criticism leaves a lot to be desired currently. The amount of criticism does not reflect the amount of extraordinary statements being made here. I think there are a lot of people who shy away from open criticism.
Some stuff that is claimed here is just very very (self-censored).
I have recently attempted to voice a minimum of criticism when I said something along the lines of “an intelligence explosion happening within minutes is an extraordinary claim”. I actually believe that the whole concept is...I can’t say this in a way that doesn’t offend people here. It’s hard to recount what happened then, but my perception was that even that was already too much criticism. In a diverse community with healthy disagreement I would expect a different reaction.
Please take the above as my general perception and not a precise recount of a situation.
And… 4. Fashion yourself a large scarlet letter to wear on your clothing.
XiXiDu should discount this suggestion because it seems to be motivated reasoning.
I would totally 100% agree with this if XiXiDu had not himself proposed to do MORE than this in the OP, or had not himself mentioned ill health effects.
I don’t see why this was modded down. Saying that someone should stop posting things that you disagree with, because it is bad for their health, when you are very transparently motivated to not want them to post things you disagree with because you don’t like seeing things you disagree with, is a prime example of motivated reasoning. It’s not as if you have made a general policy of giving XiXiDu advice on his health.
About the only way that this could have been any clearer an example would have been if you said “You should give me your car, because giving me your car is good for your health”.
It’s really difficult to write about this in a way that doesn’t seem motivated. When I wrote my reply, I rewrote the ending three times, because I kept having the feeling that it would come across as a different thing than I wanted it to be.
The fact is, if I had a magical pill that would make XiXiDu’s health problems go away permanently, I would give it to him, even if I knew that as a consequence he will just continue in his attacks with restored strength. It might not be a wise thing to do, but it’s what I would do anyway. But another fact is that I don’t have such pill.
I was aware that suggesting to XiXiDu to make his health a priority would seem motivated. Adding a disclaimer that I don’t mean it that way seemed to only make things worse, because mentioning things, even in the negative, makes them more visible. In the end I decided on the “virtue of silence”.
The situation is such that what is good for XiXiDu’s health also happens to be good for MIRI’s PR. In theory, this should be good news, because it allows a “win/win” solution. It doesn’t feel so, though. There is already criticism of XiXiDu online, and he feels that if he removes his attacks but the criticism of him stays, he will have been cheated in the deal. On the other hand, the criticism written by XiXiDu was already seen and quoted by many people, so even if he removed it all now, some damage is already done; so removing all criticism of him (even if all parties could coordinate on this move) would feel like too much. So each side probably feels that their “win” is not as big as they deserve. A mutually perfectly satisfying solution is not possible, because some damage was already done, and accepting an imperfect solution feels like a status loss to a human. This is a difficult situation.
And it would probably be better to negotiate the conditions of the treaty without a written record, because negotiating publicly amplifies the status concerns, as each party is aware that their replies could be used by observers.
I didn’t downvote you, but you’re wrong. I don’t particularly want XiXiDu to go away, and I haven’t felt offended by his posts. It’s simply an option that he should consider, given his description of his health problems. If I wanted him to leave, I would have urged him to leave, not suggested that he consider the option.
I can’t read minds. So if you say you meant it, I will concede that you meant it.
I hope you understand, however, how it sounds exactly like what someone would say when their primary motivation is to shut an opponent up. Giving health advice out of the blue on such a subject as this is very unusual.
Yet you spoke with the assumption that you could, when many observers do not share your mind-reading conclusions. Hopefully in the future when you choose to do that, you will not fail to see why you get downvotes. It’s a rather predictable outcome.
It’s such a plausible conclusion that it makes sense to draw, even if it turns out to be mistaken. Absent the ability to read minds and absent an explicit statement, we have to go on what is likely.
The best we can say is that it is a sufficiently predictable conclusion. Had the author not underestimated inferential distance he could easily have pre-empted your accusation with an additional word or two.
Nevertheless, it is still a naive (and incorrect) conclusion to draw based on the available evidence. Familiarity with human psychology (in general), internet forum arguing (in general), XiXiDu in particular or even a complete read of the opening thread would suggest that the advice you dismiss is clearly, obviously and overwhelmingly good advice for XiXIDu. You have also completely misread the style of dominance manoeuvre Anatoly was employing. Petty sniping of the kind you suggest wouldn’t naturally fit with the more straightforward aggressively condescending style of the comment. ie. Even when interpreting Anatoly’s motives in the worst possible light your interpretation is still sloppy.
‘We’ need to go on the expected consequences of our choices. Your choice was to accuse someone of questionable motives and use that as a premise to give advice for how to handle a serious mental health issue. You should expect that your behaviour will be negatively received by those who:
Don’t want XiXiDu to be distracted by bad advice (that is, to be encouraged to continue exposing himself to a clearly toxic addiction) as a side effect of Jiro playing one-upmanship games. Or,
Don’t like accusation of questionable motives based on mind-reading when there is reasonable doubt. Or,
Think you are wrong (in a way that socially defects against another).
The advice is good enough (and generalizable enough) that the correlation to the speaker’s motives is more likely to be coincidental than causal.
Addicts tend to be hurt by exposing themselves to their addiction triggers.
Talking about aggressive and mean-spirited...
FWIW I actually know you as the guy who made this bleeping awesome resources page. However, your attacks on MIRI/LW/EY have always felt rather silly and harmful (the internet makes doing both of those things easy) and maybe you should just take all of the stuff about it down.
I’ve had to deal with the stress you are contributing to putting on the broader perception of transhumanism for the weekend, and that is on top of preexisting mental problems. (Whether MIRI/LW is actually representative of this is entirely orthogonal to the point; public perception has been and is shifting towards viewing the broader context of futurism as run by neoreactionaries and beige-os with parareligious delusions.)
Of course, that’s no reason to stop anything. People are going to be stressed by things independent of their content.
But you are expecting an entity which you have devoted most of your blog to criticizing to care enough about your psychological state to take time out to write header statements for each of your posts?
If you want to stop accusations of lying and bad faith, stop spreading the “LW believes in Roko’s Basilisk” meme, and do something less directly reputation-warfare escalatory, and more productive—like hunting down Nazis and creating alternatives to the current decision-theoretic paradigm. (I don’t think anybody’s going to get that upset over abstract discussions of Newcomb’s Problem. At least, I hope.)
How often and for how long did I spread this, and what do you mean by “spread”?
Imagine yourself in my situation back in 2010: After the leader of a community completely freaked out over a crazy post (calling the author an idiot in all bold and caps etc.), he went on to massively nuke any thread mentioning the topic. In addition there were mentions of people having horrible nightmares over it, while others were actively trying to dissuade you from mentioning a thought experiment they believed to be dangerous, in private messages and emails, by referring to the leader’s superior insight.
This made a lot of alarm bells ring for me.
No. I made a unilateral offer.
I don’t think MIRI has any reason to take you up on this offer, as responding in this way would elevate the status of your writings. High-status entities do not need to respond specifically to low-status entities, and when they do, it will be obliquely and non-specifically addressed to the broader class which contains the specific low-status entity. Additionally, it would look mean-spirited to try to ‘kick someone while they’re down’, especially as this post in some ways resembles a call for a truce. As such, it would be a mistake for MIRI to accept your offer, even before taking into account the resources that would be required. If I were MIRI I would totally ignore this.
Given this, either you have failed to understand what apologizing actually consists in, or are still (perhaps subconsciously) trying to undermine MIRI. At the moment all you offer is the implication that you would continue your disruption were it not for the toll it has taken on your health. Contrition would demand at least a genuine apology—something like “I am sorry for acting badly”—if not actively working to undo the harm you have done.
Fortunately, I think you overestimate the impact you had. Probably your biggest effect was wasting everyone’s time.
Yudkowsky has a number of times recently found it necessary to openly attack RationalWiki, rather than ignoring it and clarifying the problem on LessWrong or his website in a polite manner. He also voiced his displeasure over the increasing contrarian attitude on LessWrong. This made me think that there is a small chance that they might desire to mitigate one of only a handful of sources who perceive MIRI to be important enough to criticize them.
I will apologize for mistakes I make and try to fix them. The above post was the confession that there very well could be mistakes, and a clarification that the reasons are not malicious.
I don’t really have much to say here, since I’m not affiliated with MIRI, but I’d just like to say that I am genuinely impressed by this attempt at reconciliation. I do think that much of what you have previously written is/was uncharitable toward MIRI and EY, especially some of the more egregious quote-mining you’ve done, but this post has caused me to significantly revise my probability estimate of you deliberately engaging in character assassination downward. Now I think it’s more likely that you simply mentioned some (perceived) shortcomings of MIRI/LW, possibly phrased your criticisms a bit too aggressively, received negative feedback, and then it all just escalated from there. (I was not around in 2010, so obviously the specifics are unavailable to me and this is just conjecture.) I have now positively updated my opinion of you.
I think Toggle’s observations are very good, and that you should consider everything said there. I started writing up my own response before I realised I was essentially repeating it.
I would like to add something, though—I think that it would be fair to characterize some of what you were doing as informed, insightful criticism, and some of what you were doing as unproductive and possibly hurtful. I don’t think it is an all-or-nothing situation. In which case, as well-reasoned constructive criticism is very valuable, you should definitely continue to do that wherever possible, and similarly, you should probably cease the other stuff, the nasty stuff, the unproductive stuff, the hurtful stuff.
Schelling fences are useful. I think simply not having those debates anymore is a better road than hoping to get it right by trying. It’s also the less stressful way.
I think this depends somewhat on your opinion of how much of what Xixidu was doing was productive, and how much wasn’t. I think a fairly large proportion of his criticisms are valid enough to be constructive and helpful. Stopping all debates seems to me like throwing the baby out with the bath water.
If you read his own version of this story, dealing with the issue brings him stress that causes health issues. Dealing with the issue is not good for him, by his own admission. Putting higher standards on his own behavior means more stress.
There are situations where it simply makes sense to cut one’s losses and move on.
My consideration of the value of his criticism was separate from these health concerns. Perhaps when taking them into account, yours is the sensible conclusion to come to.
On LW we have the habit of analysing details and holding them to high standards. There are cases where that’s valuable. In this case it makes more sense to look at the situation as a whole and see what a solution that’s beneficial for everyone involved looks like.
Look at the decision tree and the results, and then pick the node that goes in the right direction.
If you believe that I am, or was, a troll then check out this screenshot from 2009 (this was a year before my first criticism). And also check out this capture of my homepage from 2005, on which I link to MIRI’s and Bostrom’s homepage (I have been a fan).
If you believe that I am now doing this because of my health, then check out this screenshot of a very similar offer I made in 2011.
In summary: (a) None of my criticisms were ever made with the intent of giving MIRI or LW a bad name; they were instead meant to highlight or clarify problematic issues. (b) I believe that my health issues allow me to quit caring about the problems I see, but they are not the crucial reason for wanting to quit. The main reason is that I hate fights and want people to be happy rather than constantly engaged in emotional battles.
That said, many of the replies to this post perfectly illustrate why I kept going for so long: lots of misunderstandings combined with smug personal attacks against me. Anyway, I made the above offer expecting that this would continue, so it still stands. And if this isn’t worthwhile for MIRI, fine. But because of people like ArisKatsaris, paper-machine, wedrifid and others with a history of vicious personal attacks against me, I am unable to just delete everything, because that would only leave their misrepresentations of my motives and actions behind. Yes, you understand that correctly. I believe myself to be the one who has been constantly mistreated and forced to strike back (if you constantly call someone a troll and a liar, you shouldn’t be surprised if they call you brainwashed). And yet I offer you the chance to leave this battle as the winner by posting counterstatements to my blog.
This comment ruined my (initially very high) impression of your article. I appreciate that you are trying, and I believe in your good intentions, it’s just… you are doing it somewhat wrong. Not sure if I can explain it or provide better advice.
Probably the essence is that you were strongly emotionally driven in your critique, but you seem to be just as emotionally driven in negotiating peace, and your offers are not well calibrated. You want to stop an unproductive debate, but your offer to MIRI to publish something on your blog seems like another round of the same debate. If you feel there was something wrong about your articles, why can’t you write it there, using your own words? (If I happen to step on someone’s toes, I apologize to them using my own words, instead of inviting them to post something on my Facebook page.)
Even if you want to tap out of the debate instead of apologizing, you could do it by writing an article on your blog called “why I am tired of debating MIRI”, describing your reasons to stop debating it, or just the decision to stop debating it, even without any specific details. And then you could add the link to that article from the old MIRI-related articles. And then, just stop writing about MIRI, and stop editing any wiki pages about MIRI in any wiki. And that’s all. And to make it obvious to the “other side”, post the link to the article on your blog to LW. End of story.
Sorry for using this analogy, but I once had a stalker, and she couldn’t resist sending me e-mails, a few of them every day. And anything I did, or didn’t do, was just a pretext for sending another e-mail. For example, she wrote ten e-mails about how she wanted to talk with me, asking me what I was doing right now, or whether I had seen this or that article on the web. I wrote her to stop writing to me, because I didn’t want to see her anymore, or talk with her anymore, or interact with her in any way anymore. She wrote back one e-mail saying she was sorry, another e-mail asking me to meet her so we could discuss our misunderstandings, another e-mail apologizing for asking me to meet her, another e-mail retracting the previous apology and saying she had nothing to apologize for and actually hated me, yet another e-mail apologizing for the previous angry e-mail and saying she didn’t mean it, and then another e-mail asking who the girl I had recently “friended” on Facebook was. (Long story short, I blocked her on every social network, deleted all her e-mails without a reply, and kept debating only on English-speaking websites for a few years, because I know she doesn’t speak English.)
The point I want to make here is that while you believe your offer to MIRI is generous, to MIRI it may seem like yet another step in an endless unproductive debate they want to avoid completely. Like me receiving an e-mail with an apology from my stalker, when what I really wanted was that she would simply stop writing me new e-mails and preferably forget that I exist, so I can forget her, too. I can’t speak for MIRI, but my guess is that what they really want from you is simply to stop; not to provide them yet another avenue for debate. Just fucking stop and let everyone gradually forget the past. Even writing the one last good-bye article on your blog is likely to lead to another “this time really last, but I had to clarify a few details” article, etc. This is the only real way to break the cycle, and only you can do it.
I have had bad experiences with admitting things like that. I once wrote on Facebook that I am not a high-IQ individual and got responses suggesting that now everyone could completely ignore me and that everything I say is garbage. Looking at the comments on this post, my perception is that many people understood it as some kind of confession that everything I ever wrote is just wrong and that they can subsequently ignore everything else I might ever write. If the disclaimer were written by an independent third party, I thought, this would show that I am willing to let my opponents voice their disagreement, and that I concede the possibility of being wrong.
I noticed that many people who read my blog take it much too seriously. I got emails praising me for what I have written. This made me feel very uncomfortable, since I have not invested the necessary thoughtfulness in writing those posts. They were never meant for other people to form a definitive opinion about MIRI from, like some rigorous review by GiveWell. But this does not mean they are random bullshit, as people like to conclude when I admit this.
Hmm... I think my problem would be analogous to loving you but wanting to correct some character flaws you have. Noticing that you perceive this as stalking would make me try to communicate that I really don’t want to harass you, since I actually like you very much, but that I still think you should stop farting in public.
This seems obvious when it comes to your stalker scenario. But everything that involves MIRI involves a lot of low-probability, high-utility considerations which really break my mind. I thought for years about whether I should stop criticizing MIRI, because I might endanger a future galactic civilization if the wrong person reads my posts and amplifies their effect. But I know that fully embracing this line of reasoning would completely break my mind.
I am not joking here. I find a lot of MIRI’s beliefs to be absurd, yet I have always been susceptible to their line of argumentation. I believe that it is very important to solve this meta-issue of how to decide such things rationally. And the issues surrounding MIRI seem to be perfectly suited to highlight this problematic issue.
If it helps, I believe your criticism is a mix of good and bad parts, but the bad parts make it really difficult for the reader to focus on the good parts, so in the end even the good parts are kind of wasted. It would be better if you could separate them, but the problem is probably what you describe as being “easily overwhelmed”.
You take this stuff really seriously, which in some way is impressive. Unfortunately, “taking stuff seriously” does not guarantee a rational approach. (It could actually be the other way round; the higher the stakes, the more difficult it is to keep a calm head.)
Also, the problem is not the criticism you have or the questions you ask, but the way you go about it. For example, if you find an old quote by Eliezer that seems problematic, the better way would be to post it in an open thread and ask: “I find this very disturbing. Does Eliezer still believe it or not? If yes, please explain. If no, please provide evidence of the change of mind.” Instead, the way you handled this, you made a few enemies.
If the topic is so important to you, you should have handled it better. At this moment, it is probably better to just stop and relax. (And perhaps try a better approach one year later.)
A mixture of good and bad parts is exactly how I would summarize LW.
And while you are impugning Alexander’s rationality, recall that he was the one who solicited input from domain experts.
“Something made out of atoms” is exactly how I would summarize most things.
Whoa, I can’t believe I made the cut.
I don’t personally care what you end up doing, and I don’t believe MIRI should care or even respond, though it sounds like Luke might out of xenia.
However, I will say that I find it very unlikely that you can manage to stop. You’ve tried what, three times over the past five years? All that did was drive you to RationalWiki, the subreddit, and some other places.
See you again in six or eight months.
Is that an offer on your part to delete a percentage of your posts discussing Lesswrong/MIRI, if I delete a similar percentage of my posts discussing your motives and actions? What percentage of these posts will you delete if I delete all my comments where I discuss you (or retract them if they were made in any forum that doesn’t allow deletions), and do I get to choose which ones of your posts get deleted?
Leaving aside your views on what ‘winner’ means, who is the ‘you’ here? You offered MIRI the ability to post counterstatements, and I’m not affiliated with them.
You don’t need to delete any of your posts or comments. What I mainly fear is that if I was to delete posts, without linking to archived versions, then you would forever go around implying that all kinds of horrible things could have been found on those pages, and that me deleting them is evidence of this.
If you promise not to do anything like that, and stop portraying me as somehow being the worst person on Earth, then I’ll delete the comments, passages or posts that you deem offending.
But if there is nothing reasonable I could do to ever improve your opinion of me (i.e. other than donating all my money to MIRI), as if I committed some deadly sin, then this is a waste of time.
I would be willing to delete them because they offend certain people and could have been written much more benignly, with more rigor, and also because some of them might actually be misrepresentations which I accidentally made. Another reason for deletion would be that they have negative expected value, not because the arguments are necessarily wrong.
And if you agree, then please think about the Streisand effect. And if you e.g. ask me to delete my basilisk page, think about whether people could start believing that I take it seriously and as a result take it more seriously themselves. I have thought about this before and couldn’t reach a conclusive answer.
This is obviously not an agreement to delete everything you might want, such as my interview series.
I wouldn’t want you to delete the interview series anyway. The thing that most offended me was this: the title of “http://kruel.co/2013/01/10/the-singularity-institute-how-they-brainwash-you/” is absurdly offensive and inappropriate if you don’t believe in deliberate ill intent on MIRI’s part. If you don’t want to delete the post altogether, at least rename it to “How they convince you”. When you use ‘brainwash’ or ‘trick’ or ‘con’, you’re accusing them of being criminals. Only use such words if you really believe it.
I’d also like the deletion of http://kruel.co/2012/05/12/we-are-siai-argument-is-futile/. Putting words into SIAI’s mouth as if it accurately presented its side of the case is unfair.
I was also primarily going to say to delete all the contents of your ‘mockery index’, which I believe you yourself had already admitted was unfair mockery, but it seems you have already deleted them. I’m glad and pleasantly surprised.
Assuming the mockery index pages remain deleted, and you delete or rename the ‘how they brainwash you’ page, I DO promise to refrain from discussing you again in any way (reasonable caveats like you not discussing me are assumed), and will certainly be open to a more positive interpretation of your character (not that you’ll be able to tell, since I won’t be discussing you). Also keep in mind that my opinion of other Rationalwiki editors remains unchanged, and I’m still free to criticize and condemn Rationalwiki for reasons unrelated to your connections there.
As a sidenote, I also suggest and encourage you to consider the things that other people here (like Halfwitz) have said annoyed them.
I already deleted the ‘mockery index’ (which for some months had included a disclaimer saying that I distance myself from those outsourced posts). I also deleted the second post you mentioned.
I changed the brainwash post to ‘The Singularity Institute: How They Convince You’ and added the following disclaimer suggested by user Anatoly Vorobey:
I also completely deleted the post ‘Why you should be wary of the Singularity Institute’.
Yesterday I also deleted the Yudkowsky quotes page and the personality page.
Thank you. I’ll likewise keep my promise.
Calling people brainwashed when they call you a troll is not a good strategy for letting people conclude that you aren’t a troll.
Under what circumstances is the ‘olive branch’ valuable to MIRI? That is, what are the conditions under which they would want you to stop?
It seems to me that informed, insightful criticism is of tremendous value to anyone, although it’s not always fun to get. It’s great to find an intellectual opponent who can see weaknesses in your implementation and offer corrections, or even give you strong reason to reassess your whole plan. I go looking for principled opposition to transhumanism, Quinean philosophy, and what have you; these debates can offer some of the most and best insights into one’s own position. And it seems that the rationalist position is especially well suited to profit from and even enjoy such dialogues.
It seems implicit in your perspective that people don’t like what you’ve been posting. That means that, at some point, there’s been a breakdown in the effectiveness of these arguments. Possibly those arguments are of an unproductive nature, hurting for the sake of hurt and not for the sake of making things better. Possibly they are productive, but there is no significant part of LW/MIRI that is receptive to them, because there has been some breakdown in this group’s ability to perform your kind of self-analysis.
If the former is true, I would urge you to end your arguments unconditionally, because your actions are immoral and making the world a less wonderful place. If the latter is true, then you are a valuable resource and I hope that you won’t stop. But I’m afraid I just don’t see the hypothetical universe where MIRI should pay some kind of intellectual weregeld to keep you quiet.
So MIRI and LW are no longer a focus for you going forward?
I had a similar issue, although I think it was just a product of OCD. I used to have insane, tear-your-hair out arguments online with idiots which I could never ever not respond to, no matter how petty. It just bounced around in my brain until I had to vomit out a response to stop myself from going nuts.
I’d like to see a competent cognitive-behavioral therapist, as Eliezer recommended, but I don’t believe that competent mental health professionals actually exist. Better to do your own research, and find your own solution.
Just an idea: How about you write a response… but in a text file that you just save on your disk and never publish?
Here is a more complicated solution, if you are a programmer. Once, when I was irritated by idiots on some website, I created a GreaseMonkey script for my browser that highlighted the comments of those idiots in yellow. Their usernames were hardcoded in the script. This trivial change had a huge psychological impact. When I saw their next comment and it was highlighted, I thought “oh, that’s just some known idiot, no need to take this seriously, just laugh about it”, and it didn’t hurt me at all. And when I found a new hopeless idiot, I added the new username to my code and refreshed the page; so I had my revenge. It was incredibly calming, and no one else knew about it.
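The kind of script described above could be sketched roughly as follows. Note this is a guess at what such a userscript might look like, not the original: the site URL, the `.comment` selector, and the `data-author` attribute are all hypothetical placeholders, since the real markup depends entirely on the website in question.

```javascript
// ==UserScript==
// @name  Highlight flagged commenters
// @match https://example.com/*
// ==/UserScript==

// Usernames hardcoded in the script, as described above (placeholder names).
const FLAGGED = new Set(["idiot1", "idiot2"]);

// Pure check: is this username on the hardcoded list?
function shouldHighlight(username) {
  return FLAGGED.has(username);
}

// Tint each flagged user's comments yellow. The ".comment" selector and
// "data-author" attribute are invented for this sketch; a real script would
// use whatever markup the target site actually has.
if (typeof document !== "undefined") {
  for (const el of document.querySelectorAll(".comment")) {
    const author = el.getAttribute("data-author");
    if (author && shouldHighlight(author)) {
      el.style.backgroundColor = "yellow";
    }
  }
}
```

To “add a new idiot”, one would append the username to `FLAGGED` and refresh the page, exactly as the comment describes.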
I think they do, especially if you select for the best evidence-based method that will attract evidence-based people, but you may have to try more than one professional, and many people’s financial or insurance situations don’t permit that.
Ouch, shots fired. How well CBT works depends heavily on what exactly the mental health problem is. “Curing”, or rather alleviating, many kinds of phobias via cognitive behavioral therapy has a really excellent success rate, for example.
I endorse this suggestion.
Don’t Feed The Trolls!
I think that all the people involved deserve a break from time to time. Everyone is always attacking them, and I have some aesthetic differences and some minor structural intellectual differences, but on the good side of the sanity waterline they’re great.
As a somewhat recent follower of LW (less than 1 year), I actually found it quite useful to sift through your critiques back then (while occasionally they felt a bit personal and unnecessarily emotionally motivated, I still valued the gist of the content; they were refreshingly contrarian).
Basically, when I first stumbled upon LW, I was excited, awed, and to some extent hypnotized.
The content was an interesting mixture of mathematics, computer science, philosophy and cognitive science, and as a new reader, I found myself easily convinced of many of the main positions advocated. I’m typically skeptical of any extraordinary claims, but the way the content is generally presented here, seemingly scientific and authoritative, evaded my usual defenses.
After a few weeks of taking in a lot of this content, I googled LW and Eliezer to find out more, and stumbled upon some criticisms, as well as your blog, the RationalWiki, ..etc.
Yes, it was interesting to hear about the basilisk, and the apparent knee-jerk reaction of Eliezer, and the ensuing censorship. That exercise helped restore my usual (and useful) skepticism, and consequently I re-examined a lot of the claims and positions with a more careful eye.
I also remember checking out your user profile here on LW, and seeing that you are an active member of the community, and that even though you and others occasionally engaged in some of these heated debates, the fact that one of LW’s more vocal critics was not banned or censored was also useful information that I gleaned from this exercise.
As a consumer of your critiques, I was enlightened but not turned-off from LW. In other words, I still somewhat drink the kool-aid, but I carefully check the drink before each sip.
So, thanks for providing a different perspective, and humanizing LW and its contributors, and good luck with your health and future endeavors.
Hehe. I just checked out that blog of yours.
I advise anyone tempted to respond to this to first do so as well.