DeepMind is very definitely AGI in the sense of the domain of problems its learners can learn and its agents can solve. If DeepMind is easily controlled and not very dangerous, that’s not evidence for AGI being further away than we thought before we looked at DeepMind, it’s evidence for AGI being more easily controlled than we thought before we looked at DeepMind.
Real AGI was never going to look like a magic genie, so we should never fault real-life AI work for failing to be a genie.