The main problem is that tech companies are much, much better at steering you than you are at steering them. So in the AI policy space, people mostly work on explaining AI risk to decision-makers in an honest and persuasive way, not on relabelling tech companies (which can be read, or misread, as finger-pointing).
(I originally posted this reply to the wrong thread)
Another very serious problem is that tech companies are not the friendly, peaceful, technocratic behemoths they appear to be to many of their own employees and engineers. Autonomous weapons are now a central pillar of nuclear deterrence, and AI production is clearly recognized as critical to national security.
I highly recommend working with people who are already integrated into the international policy space, since they know the lay of the land and the pitfalls; anyone capable of reinventing the wheel is capable of improving the existing wheel much further.
I agree with this generally.