At least now when I cite Eliezer’s stuff in my doctoral thesis, people who don’t know him—there are a lot of them in philosophy—will not say to me “I’ve googled him and some crazy quotes eventually came up, so maybe you should avoid mentioning his name altogether”. This was a much bigger problem for me than it sounds. I had to do all sorts of workarounds to use Eliezer’s ideas as if someone else had said them, because I was advised not to cite him (and the main, often the only, argument was in fact the crazy-quotes thing).
There might be some very small level of uncertainty as to whether Alex’s behaviour had a positive or negative overall impact (maybe it made MIRI update slightly in the right direction, somehow). But I can say with near certainty that it made my life objectively worse in quite quantifiable ways (i.e., I lost a month or two on the workarounds, and would have continued to lose time on them).
How’s the situation now, with Superintelligence published? Do you think it’d be a good idea for someone to publish a bunch of Eliezer’s ideas, passing them off as their own, to solve this problem?
I made my above comment because I knew of at least one clear instance where the reason I had to do the workaround was that someone had found Alex’s stuff. But things haven’t improved as much as I anticipated in my field (Applied Ethics). These things would take time, even if Alex’s stuff were the only major cause. Looking back, I also think some of the workarounds were due more to having to relate the discussion to someone in my field who wrote about the same issue (Nick Agar) than to having to avoid mentioning Eliezer too much.
I see a big difference in the AI community. For instance, I was able to convince a very intelligent CS grad student, previously a long-time superintelligence sceptic, of superintelligence’s feasibility. But I am not that involved with AI directly. What is very clear to me—and I am not sure how obvious this already is to everyone—is that Nick’s book had an enormous impact. Superintelligence scepticism is gradually but clearly becoming a minority position. This is huge and awesome.
I don’t think simply publishing Eliezer’s ideas as your own would work; they would need a lot of refining to turn them into a publishable philosophy paper. I did this refining of the complexity thesis in my thesis’s second chapter. Refining his ideas changed them considerably, and I applied them to a completely different case (moral enhancement). Note that publishing someone else’s ideas as your own is not a good plan, even if the person explicitly grants you permission. But if you are refining them and adding a lot of new material, you can just briefly mention him and move along—and hopefully that won’t do too much reputational damage. I am still pondering how, and which parts of, this chapter to publish. In case you are interested, you can find a presentation summarizing its main arguments here: https://prezi.com/tsxslr_5_36z/deep-moral-enhancement-is-risky/
Agreed, if there are no other indications of a change of mind. No one who reads your blog will fail to know who the “AI risk advocates” are. Perfectly fine if there’s some other indication.
Your updates to your blog as of this post seem to replace “Less Wrong”, or “MIRI”, or “Eliezer Yudkowsky”, with the generic term “AI risk advocates”.
This just sounds more insidiously disingenuous.
I deleted any post that could be perceived as offensive to MIRI, Yudkowsky, or LW, and only left fully general posts pertaining to AI risk. Many of those posts never mentioned MIRI or LW in the first place. What exactly is “insidiously disingenuous” about that?