That’s definitely progress. I think the best thing AI regulation can do right now is look to the future, and in particular prepare draft plans for AI regulation, so that if or when the next crisis hits, we won’t be fumbling for solutions but will instead have good AI regulations back in the running.
Agree that those drafts are very important. I also think technical research will be required to figure out which regulations would actually be sufficient (at present, I think we have no idea). I disagree, however, that waiting for a crisis (a warning shot) is a good plan. There might not be one. If there is one, though, I agree we should at least be ready.
True that we probably shouldn’t wait for a crisis, but one thing that does stand out to me is that the biggest issue wasn’t political will, but rather that AI governance was pretty unprepared for this moment (though they improvised surprisingly effectively).