Upvoted because I think this comment is a reasonable question and shouldn’t be getting this many downvotes. Your latter comment in the thread wasn’t thought-provoking, as it felt like a non-sequitur, though still not really something I’d downvote. I would encourage you to share your model for why a lack of cooperation with labs in three likely-inconsequential-to-AI states and one likely-consequential-to-AI state implies that well-intentioned intellectuals in the West aren’t likely to have control over the future of AI.
After all, a substantial chunk of the most capable AI companies take alignment risks fairly seriously (Deepmind, OpenAI sort of), and I mostly think AGI will arrive in a decade or two. Given that Chinese companies don’t seem interested in building AGI, that they still aren’t producing research as high-quality as the West’s, and that China’s economic growth is slowing, I think it probable the West will play a large role in the creation of AGI.
It’s not a reasonable question, because the premise of the OP is that there currently isn’t any cooperation, regardless of nationality.
It also ignores that the Chinese Communist Party does take action with regard to AI safety, and that matters more in practice than any cooperation with North Korean AI labs.
There’s an odd background framing implying that the Chinese somehow don’t care about the public good while Westerners do. The CCP is perfectly willing to engage in heavy regulation of its tech industry provided it believes that regulation will protect the public good. There’s much more potential for Chinese actors to break from economic imperatives when their government believes following them is a bad idea.
Nor is there likely to ever be such cooperation. Thus, well-intentioned intellectual elites in the West are not in a position to decide the future of AI. I should have just said that.
Are you having any luck finding cooperation with Russian, Chinese, Iranian and North Korean labs?
Are you having any luck finding innovative Russian, Chinese, Iranian, or North Korean labs?
OP writes that there have been no big cooperation wins, so a fortiori, there have been no big cooperation wins with the countries you mention.