The main takeaway from the Paris AI anti-safety summit is that, for people with reasonably short timelines (say 5-10 years, though this applies much more strongly in the 5-year case), and perhaps even longer ones, we cannot assume meaningful AI governance is likely. The AI governance theory of change will have to pivot toward being prepared for when the vibe shifts back toward AI regulation. In the meantime, safety plans should assume the US government does ~nothing of importance by default until very late in the game.
We might get some AI regulation, but it will not be strong enough to slow AI development significantly until AIs have made humans obsolete at a wide range of jobs, which is likely to come very late in the process.