I think more independent AI safety orgs introduce more liabilities and points of failure: infohazard leaks, the unilateralist's curse, accidental capabilities research, mental health spirals, and inter-org conflict. Instead, there should be sub-orgs that sit underneath main orgs both de jure and de facto, with full leadership subordination and limited access to infohazardous information.
This captures my perspective well: not everyone is suited to run organizations. I believe AI safety organizations would benefit from adopting established “AI safety standards,” analogous to existing Engineering or Financial Reporting Standards, which would make such organizations easier to maintain. For the time being, though, the focus should be on independent researchers pursuing diverse projects to identify those standards in the first place.
What about orgs such as ai-plans.com, which aim to be exponentially useful for AI Safety?
They’re talking about technical research orgs/labs, not ancillary orgs/projects.