Well, no one has built an AGI yet, and if your plan is to wait until we have years of experience with unaligned AGIs before it’s OK to start worrying about the problem, that’s a bad plan.
Also, there are things that are not AGI but are similar to it in various ways (software, deep neural nets, rocket navigation systems, prisons, childrearing strategies, tiger-training strategies), and these provide ample examples of unseen errors.
Also, as I said, there ARE plenty of proofs of concept (POCs) for AGI risk.