I agree that regulation is harder to do before you know all the details of the technology, but it doesn’t seem obviously doomed, and it seems especially-not-doomed to productively think about what regulations would be good (which is the vast majority of current AI governance work by longtermists).
As a canonical example I’d think of the Asilomar conference, which I think happened well before the details of the technology were known. There are a few more examples, but overall not many. I think that’s primarily because we rarely try to foresee problems, being too caught up in current ones, so I don’t see the scarcity of examples as a very strong update against thinking about governance in advance.
Perhaps I was unclear. I object to the idea that you should get attached to any ideas now, not that you shouldn’t think about them. People being people, they are much more prone to getting attached to their ideas than is wise. Understand before risking attachment.
The problem with AI governance is that AI is a mix of completely novel abilities and things humans have been doing for as long as there have been humans. The latter don’t need special ‘AI governance’, and the former are not understood.
(It should be noted that I am absolutely certain that AI will not take off quickly, if it ever does take off beyond human limits.)
The Asilomar conference isn’t something I’m particularly familiar with, but it sounds like people actually had significant hands-on experience with the technology, and understood it already. They stopped the experiments because they needed the clarity, not because someone else made rules earlier. There are no details as to whether they did a good job, and the recommendations seem very generic. Of course, it is Wikipedia. We are not at this point with nontrivial AI. Needless to say, I don’t think this counts against my point.