The system that I proposed is simpler: it doesn’t have fine-grained, selective access, and therefore doesn’t demand continuous effort from some people to “connect the dots”. It’s just a single space, basically like the internal Notion + Slack + Google Drive of the AI safety lab that would lead this project. In this space, people can share research and ideas, and have “mildly” infohazardous discussions, such as about the pros and cons of different approaches to building AGI.
I cannot imagine that system would end up unused. At least three people (you, me, and another person) felt enough frustration to commit time to writing on LW about this problem. All three of these posts were well received, with comments like “yes, I agree this is a problem”. Another AI safety researcher told me in private communication that he feels this problem, too. So I suspect that a large fraction of AI safety researchers now stumble into capability ideas regularly and spend a significant portion of their mental cycles trying to manage this while still publishing something in public.
As Nate Soares wrote in his 2018 post announcing the nondisclosure-by-default strategy, “researchers shouldn’t have walls inside their minds”.