Thanks very much for your feedback, though I confess I’m not entirely sure where to go with it. My interpretation is that you have basically two concerns:
This policy doesn’t directly regulate algorithmic progress, e.g. progress that happens on smaller amounts of compute.
Algorithmic theft/leakage is easy.
The first one is true, as I alluded to in the problems section. Part of my perspective here comes from skepticism about regulatory competence: I basically believe we can get regulators to control total compute usage, and to evaluate specific models against pre-established evals, but I’m not sure I’d trust them to be able to determine “this is a new algorithmic advance, we need to evaluate it.” If you had less libertarian priors, you could try to apply something like the above scheme to algorithms as well, but I wouldn’t expect it to work nearly as well, since algorithms lack the cardinal structure of compute size: training FLOP gives you a single ordered number a regulator can threshold, whereas there is no comparable scalar for “how significant is this algorithmic advance.”
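To make the cardinal-structure point concrete, here’s a minimal sketch (the function names and the threshold value are hypothetical, chosen purely for illustration, not taken from any actual regulation): a compute-based trigger reduces to a mechanical numeric comparison, while an algorithm-based trigger has no analogous scalar to compare against.

```python
# Hypothetical sketch: a compute trigger is mechanically administrable,
# while an algorithmic trigger is not. The threshold is illustrative only.

TRAINING_FLOP_THRESHOLD = 1e26  # illustrative cutoff, in total training FLOP


def compute_trigger(training_flop: float) -> bool:
    """Compute has cardinal structure: a single ordered number, so
    'does this run require evaluation?' is a simple comparison."""
    return training_flop >= TRAINING_FLOP_THRESHOLD


def algorithm_trigger(change_description: str) -> bool:
    """Algorithms have no analogous scalar. Deciding whether a change is
    'a new algorithmic advance' requires expert judgment about an
    unstructured artifact, which is exactly the kind of call I wouldn't
    trust regulators to make reliably."""
    raise NotImplementedError(
        "No cardinal measure of 'algorithmic advance' to threshold on."
    )


if __name__ == "__main__":
    print(compute_trigger(3e26))  # True: mechanical to check
    print(compute_trigger(1e24))  # False: mechanical to check
```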
In terms of theft/leakage, you’re right that this plan doesn’t discuss it much, and I agree it’s worth working on.
I agree that I have no faith in current governments to implement and enforce policies that are more complex than things on the order of compute governance and chip export controls.
I think the conclusion this points towards is that we need new forms of governance: not to replace existing governments, but to complement them. Something like voluntary mutual inspection contracts, using privacy-respecting technology and AI inspectors.
Here’s some recent evidence that compute thresholds aren’t reliable: https://novasky-ai.github.io/posts/sky-t1/
Here’s another self-link to some of my thoughts on this: https://www.lesswrong.com/posts/tdrK7r4QA3ifbt2Ty/is-ai-alignment-enough?commentId=An6L68WETg3zCQrHT