You can update by posting a header to all of your blog posts saying, “I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret.” If that is how you feel and that is what you do, I will treat with you starting from scratch in any future endeavors. I’ve been stupid too, in my life. (If you then revert to pattern, you do not get a second second chance.)
I have not found it important to say very much at all about you so far, unless you show up to a thread in which I am participating. If carrying on your one-sided vendetta is affecting your health and you want to declare a one-sided ceasefire for instrumental reasons, and you feel afraid that your brain will helplessly drag you back in if anyone mentions your name, then I state that: if you delete your site, withdraw entirely from all related online discussions, and do not say anything about MIRI or Eliezer Yudkowsky in the future, I will not say anything about Xixidu or Alexander Kruel in the future. I will urge others to do the same. I do not control anyone except myself. I remark that you cannot possibly expect anything except hostility given your past conduct and that feeding your past addiction by posting one little comment anywhere, only to react with shock as people don’t give you the respect to which you consider yourself entitled, is likely to drag you back in and destroy your health again.
Failing either of these actions:
I am probably going to put up a page about Roko’s Basilisk soon. I am not about to mention you just to make your health problems worse, nor avoid mentioning you if I find that a net positive while I happen to be writing; your conduct has placed you outside of my circle of concern. If the name Alexander Kruel happens to arise in some other online discussion or someone links to your site, I will explain that you have been carrying on a one-sided vendetta against MIRI for unknown psychological reasons. If for some reason I am talking about the hazards of my existence, I might bring up the name of Alexander Kruel as that guy who follows me around the ’Net looking for sentences that can be taken out of context to add to his hateblog, and mention with some bemusement that you didn’t stop even after you posted that all the one-sided hate was causing you health problems. Either a ceasefire or an update will prevent me from saying any such thing.
I urge you to see a competent cognitive-behavioral therapist and talk to them about the reason why your brain is making you do this even as it destroys your health.
I have written this note according to the principles of Tell Culture to describe my own future actions conditional on yours. Reacting to it in a way I deem inappropriate, such as taking a sentence out of context and putting it on your hateblog, will result in no future such communications with you.
I apologize for any possible misunderstanding in this comment. My reading comprehension is often bad.
I know that in the original post I offered to add a statement of your choice to any of my posts. I stand by this, although I would phrase it differently now. I would like to ask you to consider that there are also personal posts which are completely unrelated to you, MIRI, or LW, such as photography posts and math posts. It would be really weird and confusing to readers to add your suggested header to those posts. If that is what you want, I will do it.
You also mention that I could delete my site (I already deleted a bunch of posts related to you and MIRI). I am not going to do that, as it is my homepage and contains completely unrelated material. I am sorry if I gave a false impression here.
You further talk about withdrawing entirely from all related online discussions. I am willing to entirely stop adding anything negative to any related discussion. But I will still use social media to link to material produced by MIRI or LW (such as MIRI blog posts) and to professional third-party critiques (such as a possible evaluation of MIRI by GiveWell) without adding my own commentary.
I stand by what I wrote above, irrespective of your future actions. But I would be pleased if you maintain a charitable portrayal of me. I have no problem if you write in the future that my arguments are wrong, that I have been offensive, or that I only have an average IQ, etc. But I would be pleased if you abstain from portraying me as an evil person or as someone who deliberately lies. Stating that I misrepresented you is fine. But I strongly disagree with any suggestion that I am a malicious troll who hates you.
As evidence that I mean what I write, I have now deleted my recent comments on reddit.
You also mention that I could delete my site (I already deleted a bunch of posts related to you and MIRI). I am not going to do that, as it is my homepage and contains completely unrelated material.
If that’s the only concern, I think the solution is quite easy. You have all the MIRI-related material on one page, so you can delete it while leaving the other stuff on your homepage untouched.
Since you have not yet replied to my other comment, here is what I have done so far:
(1) I removed many more posts and edited others in such a way that no mention of you, MIRI, or LW can be found anymore (except an occasional link to a LW post).[1]
(2) I slightly changed the disclaimer you provided and added it to my about page:
Note that I wrote some posts, posts that could previously be found on this blog, during a dark period of my life. Eliezer Yudkowsky is a decent and honest person with no ill intent, and anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret those posts, and leave this note here as an archive to that regret.
The reason for this alteration is that my blog has been around since 2001, and for most of that time it did not contain any mention of you, MIRI, or LW. For a few years it even contained positive references to you and MIRI. This can all be checked by looking at e.g. archive.org for domains such as xixidu.com. I estimate that much less than 1% of all content over those years was related to you or MIRI, and even less was negative.
But my previous comment, in which I asked you to consider that your suggested header would look really weird and confusing if added to completely unrelated posts, still stands. If that’s what you desire, let me know. But I hope you are satisfied with the actions I have taken so far.
[1] If I missed something, let me know.
I don’t have time to evaluate what you did, so I’ll take this as a possible earnest of a good-faith attempt at something, and not speak ill of you until I get some other piece of positive evidence that something has gone wrong. A header statement only on relevant posts seems fine by me, if you have the time to add it to items individually.
I very strongly advise you, on a personal level, not to talk about these things online at all. No, not even posting links without discussion, especially if your old audience is commenting on them. The probability I estimate of your brain helplessly dragging you back in is very high.
I don’t have time to evaluate what you did, so I’ll take this as a possible earnest of a good-faith attempt at something, and not speak ill of you until I get some other piece of positive evidence that something has gone wrong.
This will be my last comment and I am going to log out after it. If you or MIRI change your mind, or discover any evidence “that something has gone wrong”, please let me know by email or via a private message on e.g. Facebook or some other social network that’s available at that point in time.
A header statement only on relevant posts seems fine by me, if you have the time to add it to items individually.
Thanks.
I noticed that there is still a post mentioning MIRI. It is not at all judgemental or negative, but rather highlights a video that I captured of a media appearance by MIRI on German/French TV. I understand this sort of post not to be relevant for either deletion or any sort of header.
Then there is also an interview with Dr. Laurent Orseau about something you wrote. I added the following header to this post:
Note: I might have misquoted, misrepresented, or otherwise misunderstood what Eliezer Yudkowsky wrote. If this is the case I apologize for it. I urge you to read the full context of the quote.
At least now when I cite Eliezer’s stuff in my doctoral thesis, people who don’t know him (there are a lot of them in philosophy) will not say to me “I’ve googled him and some crazy quotes came up eventually, so maybe you should avoid mentioning his name altogether”. This was a much bigger problem for me than it sounds. I had to do all sorts of workarounds to use Eliezer’s ideas as if someone else had said them, because I was advised not to cite him (and the main, often the only, argument was in fact the crazy quotes).
There might be some very small level of uncertainty as to whether Alex’s behaviour had a positive or negative overall impact (maybe it made MIRI update slightly in the right direction, somehow). But I can say with near certainty that it made my life objectively worse in very quantifiable ways (I lost a month or two to the workarounds, and would continue to lose time with this).
How’s the situation now, with Superintelligence published? Do you think it’d be a good idea for someone to publish a bunch of Eliezer’s ideas, passing them off as their own, to solve this problem?
I made my above comment because I knew of at least one clear instance where the reason I had to do the workaround was someone who had found Alex’s stuff. But things haven’t improved as much as I anticipated in my field (Applied Ethics). These things take time, even if Alex’s stuff were the only main cause. Looking back, I also think part of the workarounds was due more to having to relate the discussion to someone in my field who wrote about the same issue (Nick Agar) than to having to avoid mentioning Eliezer too much.
I see a big difference in the AI community. For instance, I was able to convince a very intelligent CS grad student, previously a long-time superintelligence sceptic, of superintelligence’s feasibility. But I am not that much involved with AI directly. What is very clear to me (and I am not sure how obvious this already is to everyone) is that Nick’s book had an enormous impact. Superintelligence scepticism is gradually becoming a clear minority position. This is huge and awesome.
I don’t think simply publishing Eliezer’s ideas as your own would work; there would need to be a lot of refining to turn them into a publishable philosophy paper. I did this refining of the complexity thesis in the second chapter of my thesis. Refining his ideas made them a lot different, and I applied them to a completely different case (moral enhancement). Note that publishing someone else’s ideas as your own is not a good plan, even if the person explicitly grants you permission. But if you are refining them and adding a lot of new material, you can just briefly mention him and move along, and hopefully that won’t do too much reputational damage. I am still pondering how, and which parts of, this chapter to publish. In case you are interested, you can find a presentation summarizing its main arguments here: https://prezi.com/tsxslr_5_36z/deep-moral-enhancement-is-risky/
Agreed, if there are no other indications of a change of mind. Nobody who reads your blog will fail to know who the “AI risk advocates” are. Perfectly fine if there’s some other indication.
Your updates to your blog as of this post seem to replace “Less Wrong”, or “MIRI”, or “Eliezer Yudkowsky”, with the generic term “AI risk advocates”.
This just sounds more insidiously disingenuous.
I deleted any post that could be perceived as offensive to MIRI, Yudkowsky, or LW, and only left fully general posts pertaining to AI risk. Many of those posts never mentioned MIRI or LW in the first place. What exactly is “insidiously disingenuous” about that?
We know that the sparring between Kruel and MIRI has caused a great deal of harm, but criticism should still be possible in a less antagonistic fashion.
I am not about to mention you just to make your health problems worse, nor avoid mentioning you if I find that a net positive while I happen to be writing; your conduct has placed you outside of my circle of concern.
Not even as a tie-breaker? There are nicer ways to say this.
I urge you to see a competent cognitive-behavioral therapist and talk to them about the reason why your brain is making you do this even as it destroys your health.
This advice seems unsolicited, and getting psychological advice from your past competitor seems unlikely to be helpful on an outside view.
I might bring up the name of Alexander Kruel as that guy who follows me around the ’Net looking for sentences that can be taken out of context to add to his hateblog, and mention with some bemusement that you didn’t stop even after you posted that all the one-sided hate was causing you health problems.
This seems to be written to make Alexander aware of his risk of later embarrassing himself. Telling someone who announces that they have mental health difficulties that have been exacerbated by their sparring with your writing that they’re likely to face embarrassment if they relapse is somewhat (and unnecessarily) hostile, especially as they probably already know.
For what it’s worth, as well as producing much of the least helpful criticism of MIRI, Alexander’s writing has also led me to think of ways that MIRI and LessWrong could improve (getting more publications and conventional status), and has helped AI risk advocates anticipate kinds of criticism that they might encounter in the future. I also remember that he gave useful replies when I previously asked him about the x-risk reduction landscape in Germany.
LessWrong needs critics, and also needs basic kindness norms. In light of that, it seems that all that needed to be said about Alexander breaking the vicious cycle was “Good”.
selectively collecting all of his quotes one-sidedly
This phrasing feels a little clumsy/rushed to me: “selectively” + “one-sidedly” is redundant, and “collecting all of” contradicts “selectively”. Perhaps worth improving if it’s to be copy-pasted over a whole bunch of posts. How about “cherry-picking his quotes” or something?
Disclaimer: foreign speaker here.
“You can update by posting a header to all of your blog posts saying, ‘I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret.’”
Wow, just wow. The cult leader demands a Stalin-style self-critique on every page (no sane person would consider it reasonable) and the censoring of all posts related to Less Wrong, after a campaign of harassment.
I respect both updates and hostile ceasefires.