I downvoted TAG’s comment because I found it confusing/misleading.
You could have asked for clarification. The point is that Yudkowsky’s early movement was disjoint from actual AI research, and during that period a set of dogmas and approaches solidified that many AI researchers (Russell is an exception) find incomprehensible or misguided. In other words, you can disapprove of amateur AI safety without dismissing AI safety wholesale.
(Responding to the above comment years later...)
It seems like “amateur” AI safety researchers have been the main ones willing to seriously think about AGI and on-the-horizon advanced AI systems from a safety angle though.
However, I do think you’re pointing to a key potential blind spot in the AI safety community. Fortunately, AI safety folks are studying ML more, and I think ML researchers are starting to be more receptive to discussions about AGI and safety. So this may become a moot point.