It sounds like the key cluster of evidence for your view here is roughly this:
Everything is a chaotic jumble and there’s lots of variation from politician to politician and from government agency to government agency. Within that jumble, new good ideas (and normal smart well-intentioned actions, etc.) sometimes pop up, and periodically make a big difference.
I.e. things look chaotic, and you do sometimes see good ideas adopted within that chaos; therefore adding more good ideas sometimes matters.
Here’s a different model for the same phenomenon. There is a “policy equilibrium”, determined by the incentives faced by policymakers, and policy is generally near-equilibrium. An example relevant to current events: policymakers at the FDA and CDC are incentivized largely to avoid blame. So long as they aren’t blamed for any major problem, they face a pretty stable career trajectory without much room for other career incentives. So, this model would predict that at any given time FDA/CDC policies are approximately-optimal for blame avoidance. (This is oversimplified; a proper discussion would include both other incentives and the distinction between optimization-via-intentional-planning vs optimization-via-selection.)
New good ideas (and new bad ideas) sometimes pop up mainly because the environment sometimes shifts external incentives. For instance, if the FDA/CDC perceive themselves to be at serious risk of blame for blocking pandemic response, then they’ll adopt policies less likely to block pandemic response (or at least less likely to be perceived that way).