True! I just think the specific system I proposed required:
1. significant time investments on the part of organizers (requiring intrinsic interest or funding for individuals with the requisite knowledge and trustworthiness)
2. a critical mass of users (requiring that a nontrivial fraction of people would find some value in the system)
The people who could serve as the higher-level organizers are few and are typically busy with other things, and a poll of a dozen people that came back with zero enthusiastic takers makes (2) seem iffy. My default expectation is that the system as described would just end up unused.
I’m pretty sure there exists some system design that would fare better, so I definitely encourage poking at this type of thing!
The system that I proposed is simpler: it doesn't have fine-grained, selective access, and therefore doesn't require continuous effort on the part of some people to "connect the dots". It's just a single space, basically like the internal Notion + Slack + Google Drive of the AI safety lab that would lead this project. In this space, people can share research and ideas, and have "mildly" infohazardous discussions, such as about the pros and cons of different approaches to building AGI.
I cannot imagine that system would end up unused. At least three people (you, me, and another person) felt enough frustration to commit time to writing on LW about this problem. All three of these posts were well-received, with comments like "yes, I agree this is a problem". Another AI safety researcher told me in private communication that he feels this problem, too. So I suspect a large fraction of AI safety researchers regularly stumble onto capability ideas and spend a significant portion of their mental cycles trying to manage this while still publishing something in public.
As Nate Soares wrote in his 2018 post announcing the nondisclosure-by-default strategy, "researchers shouldn't have walls inside their minds".