You could imagine a law: “we will not build AI systems that use >X amount of compute unless they are mechanistically transparent”. Research on mechanistic transparency then reduces the cost of complying with such a law, making it more palatable to implement.
If mechanistic transparency barely works and/or is extremely expensive, then presumably this law doesn’t look very good compared to other potential laws that prevent the building of powerful AI. In that case, you’d expect marginal improvements in mechanistic transparency to accomplish basically nothing (unless you’re hoping to ‘cross the threshold’ to the point where this becomes the most viable such law).
The other bullet points make sense, though.