Thanks! It seems like most of your exposure has been through Eliezer? Certainly impressions like “why does everyone think the chance of doom is >90%?” only make sense in that light. Have you seen presentations of AI risk arguments from other people like Rob Miles or Stuart Russell or Holden Karnofsky, and if so do you have different impressions?
I think the relevant point here is that the OP's impressions come from Yudkowsky, and that's evidence that many people's do too. Certainly the majority of public reactions I see emphasize Yudkowsky's explanations, and seem to be reacting to his relatively long-winded and contemptuous style.