Just want to acknowledge that I agree it’s worth working on even if it’s not the only scenario by which AGI might be developed.
The main thing I want to reply to has already been said by Oliver, that I want to see a thoughtful policy proposal first, and expect any such campaign to be net negative before then.
Regarding the amount of goodwill, I’m talking about the goodwill between the regulators and the regulated. When regulatory bodies are incredibly damaging and net negative (e.g. the FDA, IRBs), the people in the relevant fields often act in an actively adversarial way toward those bodies. If a group passes a pointless, costly regulation about military ML systems, AI companies will (/should) respond similarly, correctly anticipating future regulatory bloat and overreach.