I suppose one question I have to ask, in the context of “slowing down” the development of AI: how? The only pathway I can think of is government regulation. But such an action would need to be global, as any regulation passed in one nation would undoubtedly be bypassed by another, no?
I don’t see any realistic pathway to actually slowing down the development of AGI, so I think it’s the wrong question. The better question is: what can we do to prepare for its emergence? I imagine there are very tangible actions we can take on that front.
I found this a very lucid write-up of the case for slowing down and how realistic/unrealistic it is:
https://www.lesswrong.com/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai
If, say, the US government were to regulate OpenAI and Big Tech in general to slow them down significantly, this might buy a few years. In the longer term you’d need to get China etc. on board, but that is not completely unrealistic, and it should be significantly easier to negotiate if you’re not racing ahead at full steam yourself.