Holden explicitly said that he was talking about AGI in his dialogue with Jaan Tallinn:
Jaan: so GMAGI would—effectively—still be a narrow AI that’s designed to augment human capabilities in particularly strategic domains, while not being able to perform tasks such as programming. also, importantly, such GMAGI would not be able to make non-statistical (ie, individual) predictions about the behaviour of human beings, since it is unable to predict their actions in domains where it is inferior.
Holden: [...] I don’t think of the GMAGI I’m describing as necessarily narrow—just as being such that assigning it to improve its own prediction algorithm is less productive than assigning it directly to figuring out the questions the programmer wants (like “how do I develop superweapons”). There are many ways this could be the case.
Jaan: [...] i stand corrected re the GMAGI definition—from now on let’s assume that it is a full blown AGI in the sense that it can perform every intellectual task better than the best of human teams, including programming itself.
It’s not clear to me that everyone involved has the same understanding of AGI, unless in his next statement Holden agrees to the sense in which Jaan uses the term.