Since you have not yet replied to my other comment, here is what I have done so far:
(1) I removed many more posts and edited others in such a way that no mention of you, MIRI, or LW can be found anymore (except an occasional link to a LW post).[1]
(2) I slightly changed the disclaimer you suggested and added it to my about page:
Note that I wrote some posts, posts that could previously be found on this blog, during a dark period of my life. Eliezer Yudkowsky is a decent and honest person with no ill intent, and anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret those posts, and leave this note here as an archive to that regret.
The reason for this alteration is that my blog has been around since 2001, and for most of that time it did not contain any mention of you, MIRI, or LW. For a few years it even contained positive referrals to you and MIRI. This can all be checked by looking at e.g. archive.org for domains such as xixidu.com. I estimate that much less than 1% of all the content over those years was related to you or MIRI, and even less of it was negative.
But my previous comment, in which I asked you to consider that your suggested header would look really weird and confusing if added to completely unrelated posts, still stands. If that’s what you desire, let me know. I hope, however, that you are satisfied with the actions I have taken so far.
[1] If I missed something, let me know.
I don’t have time to evaluate what you did, so I’ll take this as a possible earnest of a good-faith attempt at something, and not speak ill of you until I get some other piece of positive evidence that something has gone wrong. A header statement only on relevant posts seems fine by me, if you have the time to add it to items individually.
I very strongly advise you, on a personal level, not to talk about these things online at all. No, not even posting links without discussion, especially if your old audience is commenting on them. The probability I estimate of your brain helplessly dragging you back in is very high.
This will be my last comment and I am going to log out after it. If you or MIRI change your mind, or discover any evidence “that something has gone wrong”, please let me know by email or via a private message on e.g. Facebook or some other social network that’s available at that point in time.
Thanks.
I noticed that there is still a post mentioning MIRI. It is not at all judgemental or negative but rather highlights a video that I captured of a media appearance of MIRI on German/French TV. I understand this sort of post not to be relevant for either deletion or any sort of header.
Then there is also an interview with Dr. Laurent Orseau about something you wrote. I added the following header to this post:
Note: I might have misquoted, misrepresented, or otherwise misunderstood what Eliezer Yudkowsky wrote. If this is the case I apologize for it. I urge you to read the full context of the quote.
At least now, when I cite Eliezer’s stuff in my doctoral thesis, people who don’t know him (there are a lot of them in philosophy) will not say to me, “I’ve googled him and some crazy quotes came up eventually, so maybe you should avoid mentioning his name altogether.” This was a much bigger problem for me than it sounds. I had to do all sorts of workarounds to use Eliezer’s ideas as if someone else had said them, because I was advised not to cite him (and the main, often the only, argument was in fact the crazy-quotes thing).
There might be some very small level of uncertainty as to whether Alex’s behaviour had a positive or negative overall impact (maybe it made MIRI update slightly in the right direction, somehow). But I can say with near certainty that it made my life objectively worse in very quantifiable ways (e.g. I lost a month or two on the workarounds, and would have continued to lose time on this).
How’s the situation now, with Superintelligence published? Do you think it would be a good idea for someone to publish a bunch of Eliezer’s ideas, passing them off as their own, to solve this problem?
I made my above comment because I knew of at least one clear instance where I had to do the workaround because someone had found Alex’s stuff. But things haven’t improved as much as I anticipated in my field (Applied Ethics). These things take time, even if Alex’s stuff were the only main cause. Looking back, I also think part of the workarounds was due more to having to relate the discussion to someone in my field who wrote about the same issue (Nick Agar) than to having to avoid mentioning Eliezer too much.
I see a big difference in the AI community. For instance, I was able to convince a very intelligent CS grad student, previously a long-time superintelligence sceptic, of superintelligence’s feasibility. But I am not that much involved with AI directly. What is very clear to me (and I am not sure how obvious this already is to everyone) is that Nick’s book has had an enormous impact. Superintelligence scepticism is gradually becoming a clear minority position. This is huge and awesome.
I don’t think simply publishing Eliezer’s ideas as your own would work; there would need to be a lot of refining to turn them into a publishable philosophy paper. I did this refining of the complexity thesis in the second chapter of my thesis. Refining his ideas made them a lot different, and I applied them to a completely different case (moral enhancement). Note that publishing someone else’s ideas as your own is not a good plan, even if the person explicitly grants you permission. But if you are refining them and adding a lot of new material, you can just briefly mention him and move along, and hopefully that won’t do too much reputational damage. I am still pondering how, and which parts of, this chapter to publish. In case you are interested, you can find a presentation summarizing its main arguments here: https://prezi.com/tsxslr_5_36z/deep-moral-enhancement-is-risky/
Your updates to your blog as of this post seem to replace “Less Wrong”, “MIRI”, or “Eliezer Yudkowsky” with the generic term “AI risk advocates”.
This just sounds more insidiously disingenuous.
Agreed, if there are no other indications of a change of mind. No one who reads your blog is going to fail to know who the “AI risk advocates” are. It is perfectly fine if there is some other indication.
I deleted any post that could be perceived as offensive to MIRI, Yudkowsky, or LW, and left only fully general posts pertaining to AI risk. Many of those posts never mentioned MIRI or LW in the first place. What exactly is “insidiously disingenuous” about that?