Another thing that I’d like to lock in for AI research is the idea of AI design as opportunity and obligation to address human safety problems. The alternative “lock-in” that I’d like to avoid is a culture where AI designers think it’s someone else’s job to prevent “misuse” of their AI, and never think about their users as potentially unsafe systems that can be accidentally or intentionally corrupted.
It may be hard to push this against local economic incentives, but if we can get people to at least pay lip service to it, then an AGI project that eventually gains enough slack against those incentives (e.g., because it somehow gets a big lead over rival projects, or a government gets involved and throws resources at the project) might actually put the idea into practice.