I’ve rewritten my comment and posted it as a standalone article. I’ve somewhat improved it, so you may want to read the other one.
I am not aware of any articles on the problem of how we should approach self-improving AGI; I was just hacking my personal thoughts on the matter into the keyboard. If you are referring to the potentially disastrous effects of public rage over AGI, then Hugo de Garis comes to mind—though personally I find his downright apocalyptic scenarios of societal upheaval and wars over AI somewhat ridiculous and hyper-pessimistic, since as far as I know he lacks any substantial arguments to support such a disastrous scenario.
EDIT: I have revised my opinion of Hugo's views after watching all parts of his interview on the following YouTube channel: http://www.youtube.com/user/TheRationalFuture. He makes a lot of valid points, and I would advise everyone to take a look in order to broaden their perspective.