In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself. Therefore, safe AGI is a broad collective effort and we should expect most results to be published. In the endgame, it might become likely for a given group to make it all the way to safe AGI. In this case, incentives for secrecy become stronger.
I’m not sure what “safe” means in this context, but it seems to me that publishing safe AGI is not a threat; it’s the unsafe but potentially very capable AGI research we should worry about?
And the statement “In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself” seems a lot more dubious given recent developments with GPT-3, at least according to most of the LessWrong community.