Would MIRI be interested in hiring a full time staff writer/editor? I feel like I could have produced a good chunk of this if I had thought I should try to, just from having hung around LessWrong since it was just Eliezer Yudkowsky and Robin Hanson blogging on Overcoming Bias, but I thought the basic “no, really, AI is going to kill us” arguments were already written up in other places, like Arbital and the book Superintelligence.
This is sort of still Rob’s job, and it was my job from 2016-2019. If I recall correctly, my first major project was helping out with a document sort of like this one, which tried to explain to OpenPhil some details of the MIRI strategic view. [I don’t think this was ever made public, and it might be an interesting thing to pull out of the archives and publish now?]
If I had tried to produce this document from scratch, I think it would have been substantially worse, though I might have been able to reduce the time from “Eliezer’s initial draft” to “this is published at all”.
From the perspective of persuading an alignment-optimist in the AI world, this document could not possibly have been worse. I don’t know you, Vaniver, but I’m confident you could have done a more persuasive job just by editing out the weird aspersion that EY is the only person capable of writing about alignment.
I think you’re thinking of drafts mainly based on Nate’s thinking rather than Eliezer’s, but yeah, those are on my list of things to maybe release publicly in some form.
Yeah, either that or paying for writing lessons for alignment researchers if they really have to write the post themselves.