Also, caution/safety is a matter of degree, and it seems hard to define what "unsafe" means for the purpose of imposing a penalty on all unsafe projects.
The idea is just to make the relative cost of safety as low as possible. In the linked post I gave a quantification of safety, so let's consider some 99.9% safe project and the level of oversight it requires.
If this oversight is expensive because it involves using some resource (like involving human overseers, periodically pausing while you wait on an overseer, or whatever), then it would be sufficient to require each project to use that resource, or to provide that resource for free to any project using tax dollars, and so on.
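To make the arithmetic explicit (the symbols here are mine, not from the linked post): suppose oversight consumes a resource costing $k$ per project, and a project's gross output is $V$. Then

\[
\text{net}_{\text{safe}} = V - k, \qquad \text{net}_{\text{unsafe}} = V,
\]

so the relative cost of safety is $k/V$. If every project is required to spend $k$ (or the resource is provided for free out of tax dollars), the two nets coincide and the relative cost of safety drops to zero.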
Alternatively, if oversight takes the form of monitoring (e.g. periodic high-intensity audits, AI oversight of AI projects, or mandated close involvement of human auditors), then the goal would just be to ensure that the price of evading detection eats up the efficiency benefits of unsafety. This looks pretty plausible to me, but you may be more skeptical about the feasibility of oversight.
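A rough way to state the target condition (again in my own notation, as one assumed formalization): if unsafety confers an efficiency multiplier $1 + r$ and evading detection consumes a fraction $d$ of a project's output, then evasion stops paying once

\[
(1 + r)(1 - d) \le 1 \quad\Longleftrightarrow\quad d \ge \frac{r}{1 + r}.
\]

For example, a 10% efficiency gain from cutting corners ($r = 0.1$) is neutralized once evading the auditors eats up roughly 9% of output.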