Ok, how close we think we are to AGI is a prior I do not care to argue about, but don’t you think AGI is a concern? What do you mean by a weak link?
The part where development of AGI fooms immediately into superintelligence and destroys the world. The evidence for it is not even circumstantial; it is fictional.
Ok, of course it’s fictional—hasn’t happened yet!
Still, when I imagine something that is smarter than the humans who created it, it seems it would be able to improve itself. I would bet on that; I do not see a strong reason why this would not happen. What about you? Are you with Hanson on this one?