Addresses only theoretical harms (e.g., AI could be used for WMDs)
That’s the whole point of the bill! It’s not trying to address present harms, it’s trying to address future harms, which are the important ones. Suggesting that you instead address present harms is like responding to a bill that tries to price in environmental externalities by saying “but wouldn’t it be better if you instead spent more money on education?” You can think education is more important than climate change, but that suggestion has basically nothing to do with the aims of the original bill.
I don’t want to address “real existing harm by existing actors”; I want to prevent future AI systems from killing literally everyone.
It’s not trying to address present harms, it’s trying to address future harms, which are the important ones.
A real AI system that kills literally everyone will do so by gaining power/resources over a period of time. Most likely it will do so the same way existing bad actors accumulate power and resources.
Unless you’re explicitly committing to the Diamondoid bacteria thing, stopping hacking is stopping AI from taking over the world.
I am not sure I understand this comment. Are you saying you think there are autonomous AI systems that right now are trying to accumulate power? And that present regulation should be optimized to stop those?