I’m encouraged by what you say here. The doubt about the value of Friendliness research that I express above is doubt about researching Friendly AI without a taskification, not doubt about researching what a taskification might look like.
If you haven’t done so, I think it would be worthwhile to ask the SIAI staff whether they might be comfortable with classifying (some of?) the output of the SIAI Visiting Fellows as part of SIAI’s output. As I said in response to a comment by WrongBot, I’ve gathered that the SIAI Visiting Fellows program is a good thing, but there’s been relatively little public documentation of what the Visiting Fellows have been doing. I would guess that a policy of such public documentation would improve SIAI’s credibility.
While I didn’t read your comment the way that cousin_it did, I can see why he did. I’ve gotten a vague impression from talking to a number of people loosely or directly connected with SIAI that SIAI has been keeping its research secret on the grounds that public releases could be dangerous by speeding unfriendly AI research. In view of how primitive the study of AGI looks, the apparent infeasibility of SIAI unilaterally building the first AGI, and the fact that Friendliness research would not seem to significantly speed the creation of unfriendly AI, such a policy seems highly dubious to me. So I was happy to hear that you and your collaborators are planning to put some of what you’ve been doing out in the open in roughly a few months.
Thanks for the link to your blog posts.