I think Bostrom isn’t necessarily that interested in currently working on AI x-Risk as opposed to other topics (see his recent work here for example). This seems pretty reasonable from my perspective; I think his comparative advantage plausibly lies elsewhere at this point.
As far as Yudkowsky, I think he’s often considered (by people at AI labs and in AI governance) to have incorrect views and to not be a “thought leader” with good views on AI x-Risk. He is also maybe just somewhat annoying to work with and interact with. (A roughly similar story probably applies for Nate Soares.)
Probably part of the reason AI labs don’t pay attention to Yudkowsky is that he is clearly advocating for a total stop to frontier AI development, which might make AI labs (the people doing frontier AI development) less likely to pay attention to him.
(My personal take is that Yudkowsky doesn’t really have very good views on what AI governance or AI labs should do at the margin, and thus the situation seems acceptable. That said, it seems like an obviously bad dynamic if (public) criticism of AI labs makes them not want to interact with you.)
He is also maybe just somewhat annoying to work with and interact with
I have heard that elsewhere as well. Still, I don’t really see that myself, whether in his public posting or in my limited interactions with him. He can be rough and on rare occasion has said things that could be considered personally disrespectful, but I didn’t think that people were that delicate.
… advocating for a total stop… which might make AI lab… less likely to pay attention to him.
True, but I had thought better of those people. I would have thought that they could take criticism, especially from someone who inspired some of them into their career.
Thank you for your answers. Overall, I think you are right, though I don’t quite understand.
He can be rough and on rare occasion has said things that could be considered personally disrespectful, but I didn’t think that people were that delicate.
You may wish to update on this. I’ve only exchanged a few words with one of the people named, but that was enough to make clear he doesn’t bother being respectful. That may work in some non-delicate research environment I don’t want to know about, but most bright academics I know like to have fun at work, and would leave any non-delicate work environment (unless they made it their personal duty to clean the place up).