If my anecdotal evidence is indicative of reality, the attitude in the ML community is that people concerned about superhuman AI should not even be engaged with seriously. Hopefully that, at least, will change soon.
If you think there is a chance he would accept, could you please tell the person you are referring to that I would love to have him on my podcast? Here is a link to this podcast, and here is me.
This perception problem is a big part of the reason I think we are doomed if superintelligence will soon be feasible to create.
Edited thanks to Douglas_Knight
That’s the wrong link. Your podcast is here.
He might be willing to talk off the record. I’ll ask. Have you had Darklight on? See http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/dqm8