I downvoted TAG’s comment because I found it confusing/misleading. I can’t tell which of these things TAG’s trying to do:
Assert, in a snarky/indirect way, that people agitating about AI safety have no overlap with AI researchers. This seems doubly weird in a conversation with Stuart Russell.
Suggest that LeCun believes this. (??)
Assert that LeCun doesn’t mean to discourage Russell’s research. (But the whole conversation seems to be about what kind of research people should be doing when in order to get good outcomes from AI.)
I downvoted TAG’s comment because I found it confusing/misleading.
You could have asked for clarification. The point is that Yudkowsky’s early movement was disjoint from actual AI research, and during that period a bunch of dogmas and approaches became solidified, which a lot of AI researchers (Russell is an exception) find incomprehensible or misguided. In other words, you can disapprove of amateur AI safety without dismissing AI safety wholesale.
It seems like “amateur” AI safety researchers have been the main ones willing to seriously think about AGI and on-the-horizon advanced AI systems from a safety angle though.
However, I do think you’re pointing to a key potential blindspot in the AI safety community. Fortunately AI safety folks are studying ML more, and I think ML researchers are starting to be more receptive to discussions about AGI and safety. So this may become a moot point.
No idea why this is heavily downvoted; strong upvoted to compensate.
I’d say he’s discouraging everyone from working on the problems, or at least from considering such work to be important, urgent, high status, etc.