I agree that it’s drastic and clumsy. It’s not an actual proposal, but a lower bound of what would likely work. More research into this is urgently needed.
Aren’t you afraid that people could easily circumvent the regulation you mention? It would require every researcher and hacker, everywhere, forever, to comply. Also, many researchers are probably unaware that their models could start self-improving. And I’d say the security safeguards you mention amount to AI Safety, which is of course currently an unsolved problem.
But honestly, I’m interested in regulation proposals that would be sufficiently robust while minimizing damage. If you have those, I’m all ears.