Thanks. I’ll dwell more on these. Quick thoughts from a first read:
I generally liked the “further discussion” doc.
I do think it’s important to strongly signal the aspects of cause neutrality that you do intend to pursue (as well as pursuing them). These are unusual and important.
I found the mission statement generally opaque and extremely jargony. I think I could follow what you were saying, but in some cases that took a bit of work, and in others I suspect I managed only because I’d had conversations with you. (The FAQ at the top was relatively clear, but an odd thing to lead with.)
I was bemused by the fact that there didn’t appear to be a clear mission statement highlighted anywhere on the page!
ETA: Added some more in-depth comments in the relevant comment threads: here on “further thoughts”, and here and here on the mission statement.
Oh, sorry, the two new docs are posted now and linked in the new ETA:
http://lesswrong.com/lw/o9h/further_discussion_of_cfars_focus_on_ai_safety/ and http://lesswrong.com/r/discussion/lw/o9j/cfars_new_mission_statement_on_our_website/