I very strongly agree with this post. Thank you very much for writing it!
I think that to reach a general agreement on not doing certain stupid things, we need to better understand and define exactly what those things are that we shouldn’t do. For example, instead of talking about slowing down the development of AGI, which is a fairly fuzzy term, we could talk about preventing uncontrollable AI. A superintelligent, self-improving AGI would very likely be uncontrollable, but lesser forms of AI could be uncontrollable, and thus very dangerous, as well. It should also be easier to reach agreement that uncontrollable AI is a bad thing. At the very least, I don’t think any leader of an AI lab would proudly announce to the public that they’re trying to develop uncontrollable AI.
Of course, it isn’t clear yet what exactly makes an AI uncontrollable. I would love to see more research in that direction.