Can we evaluate the “tool versus agent” AGI prediction?

In 2012, Holden Karnofsky[1] critiqued MIRI (then SI) by saying “SI appears to neglect the potentially important distinction between ‘tool’ and ‘agent’ AI.” He particularly claimed:
Is a tool-AGI possible? I believe that it is, and furthermore that it ought to be our default picture of how AGI will work
I understand this to be the first introduction of the “tool versus agent” ontology, and it is a helpful (relatively) concrete prediction. Eliezer replied here, making the following summarized points (among others):
Tool AI is nontrivial
Tool AI is not obviously the way AGI should or will be developed
Gwern more directly replied by saying:

AIs limited to pure computation (Tool AIs) supporting humans, will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and meta-learn, because all problems are reinforcement-learning problems.
11 years later, can we evaluate the accuracy of these predictions?
Some Bayes points go to LW commenter shminux for saying that this Holden kid seems like he’s going places.