I made my above comment because I knew of at least one clear instance where the reason I had to do the workaround was someone who had found Alex’s stuff. But things haven’t improved as much as I anticipated in my field (Applied Ethics). These things would take time, even if Alex’s stuff were the only main cause. Looking back, I also think the workarounds were due more to having to relate the discussion to someone in my field who wrote about the same issue (Nick Agar) than to having to avoid mentioning Eliezer too much.
I see a big difference in the AI community. For instance, I was able to convince a very intelligent CS grad student, previously a long-time superintelligence sceptic, of superintelligence’s feasibility. But I am not that much involved with AI directly. What is very clear to me (and I am not sure how obvious this already is to everyone) is that Nick’s book had an enormous impact. Superintelligence scepticism is gradually but clearly becoming a minority position. This is huge and awesome.
I don’t think simply publishing Eliezer’s ideas as your own would work; it would take a lot of refining to turn them into a publishable philosophy paper. I did this refining of the complexity thesis in the second chapter of my thesis. Refining his ideas changed them substantially, and I applied them to a completely different case (moral enhancement). Note that publishing someone else’s ideas as your own is not a good plan, even if the person explicitly grants you permission. But if you are refining them and adding a lot of new material, you can briefly mention him and move along, and hopefully that won’t do too much reputational damage. I am still pondering how, and which parts of, this chapter to publish. In case you are interested, you can find a presentation summarizing its main arguments here: https://prezi.com/tsxslr_5_36z/deep-moral-enhancement-is-risky/