Following up on this:
I’ve contacted a number of people in the field regarding this idea (thanks to everyone who responded!).
The general vibe is “this seems like it could be useful, maybe, if it took off,” but it did not appear to actually solve the problems any specific person I contacted was having.
My expectation is that people in large organizations would likely not publish anything in this system that they would not also publish out in the open.
One of the critical pieces of this proposal is having people willing to coordinate across access boundaries. There were zero enthusiastic takers for that kind of burden, which doesn’t surprise me too much. Without a broad base of volunteers for that kind of task, though, this idea seems to require paying a group of highly trusted and well-informed people to manage coordination instead of, say, researching things. That seems questionable.
Overall, I still think there is an important hole in coordination, but I don’t think this proposal fills it well. I’m not yet sure what a better shape would be.
I think it’s important to realise (including for the people whom you spoke to) that we are not in the business of solving specific problems that researchers have individually (or as a small research group), but a collective coordination problem, i.e., a problem with the design of the collective, civilisational project of developing non-catastrophic AI.
I wrote a post about this.
True! I just think the specific system I proposed required:
1. significant time investments on the part of organizers (requiring intrinsic interest or funding for individuals with the requisite knowledge and trustworthiness)
2. a critical mass of users (requiring that a nontrivial fraction of people would find some value in the system)
The people who could serve as the higher-level organizers are few and are typically doing other stuff, and a poll of a dozen people coming back with zero enthusiastic takers makes (2) seem iffy. My default expectation is that the system as described would just end up unused.
I’m pretty sure there exists some system design that would fare better, so I definitely encourage poking at this type of thing!
The system that I proposed is simpler: it doesn’t have fine-grained, selective access, and therefore doesn’t require continuous effort on the part of some people to “connect the dots”. It’s just a single space, basically like the internal Notion + Slack + Google Drive of the AI safety lab that would lead this project. In this space, people can share research and ideas and have “mildly” infohazardous discussions, such as about the pros and cons of different approaches to building AGI.
I cannot imagine that system would end up unused. At least three people (you, me, and one other person) were frustrated enough to commit time to writing about this problem on LW. All three of these posts were well received, with comments like “yes, I agree this is a problem”. Another AI safety researcher told me in private communication that he feels this problem too. So I suspect a large fraction of AI safety researchers now stumble into capability ideas regularly and spend a significant portion of their mental cycles trying to manage this while still publishing something in public.
As Nate Soares wrote in his 2018 post announcing the nondisclosure-by-default strategy, “researchers shouldn’t have walls inside their minds”.