I don’t think Eliezer addressed Holden’s point about tool AI. My interpretation of Holden’s point was, “SIAI should spend some time investigating the concept of Tool AI, see what can be done in that area to make something that is useful and safer than agentive AI, and promote the idea that AI should be pursued in that manner.”
My interpretation of Eliezer’s response (paraphrased, because he won’t like it if I use quotes) is:
A.
This is completely irrelevant.
EDIT July 2: Eliezer’s response would make sense only if Holden had been suggesting that SIAI should warn AI researchers against tool AI, as it warns them against autonomous AI. That was not what he was saying. He was saying that SIAI should consider tool AI as a possibly safer kind of AI, just as it considers FAI a possibly safer kind of AI. If one rejects investigating tool AI because not many AI researchers use it, one must also reject investigating FAI, for the same reason.
ORIGINAL TEXT: Far more AI researchers have found tool AI to be an obvious approach (e.g., Winograd, Schank, Lenat) than have found FAI to be an obvious approach. SIAI finds it worthwhile to investigate FAI, find some reasonable approach using it, and encourage other researchers to consider adopting that approach. Holden suggested that, in exactly the same way, SIAI could investigate tool AI, find some reasonable way of using it, and encourage other researchers to do that.
B.
Holden said […], and Eliezer replied […]. There are lots of very useful tool AIs that would not model the user. Google Maps takes two endpoints and produces a route between them. It relies on the user to have picked two endpoints that are useful. If Eliezer’s objection were valid, he should have been able to come up with a scenario in which the algorithm Google Maps used to choose a route could pose a threat. It doesn’t matter what else he says, if he can’t show that.
That is not an argument against investigating Holden’s idea. It is an explanation of why SIAI had not investigated Holden’s idea before Holden presented it (you can tell because it’s in the section titled “Why we haven’t already discussed Holden’s suggestion”). This explanation was given in response to Holden presenting the idea in the course of criticizing SIAI for not having investigated it.
It’s still irrelevant. Other researchers did not find FAI to be an obvious approach, either. Holden is suggesting that SIAI could investigate tool AI as a possible safer approach. Eliezer’s response would make sense only if Holden had been suggesting SIAI should investigate the dangers of tool AI, in order to warn people against it—which is not what Holden was doing.
If Eliezer’s objection were valid, he should have been able to come up with a scenario in which the algorithm Google Maps used to choose a route could pose a threat. It doesn’t matter what else he says, if he can’t show that.
The discussion was about AGI. The algorithm the real Google Maps actually uses is irrelevant, since it is not an AGI. “Tool AI” does not simply mean “Narrow AI”.
The point is not what algorithm Google Maps uses. The point is that Google Maps does not model the user or try to manipulate the user. Google Maps is asked for a short way to get between two points, and it finds such a route and reports it. It is invulnerable to all the objections Eliezer makes, even though it is the example Eliezer began with when making his objections!
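To make the distinction concrete, here is a minimal sketch of the tool pattern being described, in Python (purely illustrative; the function names and the graph are made up, and this is of course not Google’s actual routing code): the program takes two endpoints, runs a fixed shortest-path search, and reports the route. There is no model of the user anywhere in it.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: given two endpoints, return a short route.

    `graph` maps each node to a list of (neighbor, distance) pairs.
    The function has no model of who asked or why; it just reports a route.
    """
    frontier = [(0, start, [start])]   # (cost so far, node, path taken)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path          # report the route; nothing more
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + dist, neighbor, path + [neighbor]))
    return None                        # no route exists between the endpoints

# The user supplies the endpoints; the tool does not second-guess them.
roads = {"A": [("B", 5), ("C", 2)], "C": [("B", 1)], "B": []}
print(shortest_route(roads, "A", "B"))  # (3, ['A', 'C', 'B'])
```

The search is deliberately indifferent to who asked and why; all it can do with a badly chosen pair of endpoints is return a useless route.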
[EY] My particular conception of an extraordinarily powerful tool AI, which would be vastly more powerful than any other conception of tool AI that anyone has considered, would secretly be an agentive AI because the difference between trying to inform the user and trying to manipulate the user is only semantic.
This is not a valid response. Holden is saying, “Here’s this vast space of possible kinds of AIs, subsumed under the term ‘tool AI’, that you should investigate.” And Eliezer is saying, “AIs within a small subset of that space would be dangerous; therefore I’m not interested in that space.”
How do you know it is a small subset? Or a subset at all? If every interestingly powerful tool AI is secretly an agent AI, that’s bad, right?
If every interestingly powerful tool AI is secretly an agent AI, that’s bad, right?
Sure. And that’s what Eliezer would have had to argue for his response to be valid. And doing so would have required, at the very least, showing that Google Maps is secretly an agent AI.
The key sentence in Eliezer’s response is, “If a planning Oracle is going to produce better solutions than humanity has yet managed to the Rubik’s Cube, it needs to be capable of doing original computer science research and writing its own code.” Eliezer’s response is only relevant to “tool AIs” of this level. Google Maps is not on this level. This argument completely fails to apply to Google Maps (which supposedly motivated the response), as proven by the fact that Google Maps EXISTS and does not do anything like this.
Seems to me that there’s rather a large gap between “interestingly powerful” and superhuman in Eliezer’s sense. We like Google Maps because it can come up with fast, general, usually-good-enough solutions to route-planning problems, but I’m nowhere near convinced that Google Maps generates solutions that suitably trained human beings couldn’t if given the same data in a human-understandable format. Particularly not solutions that’re interesting because of their cleverness or originality or other qualities that we generally associate with organic intelligence.
On the other hand, automated theorem provers do exist, and they’ve generated some results that humans haven’t. It’s not inconceivable to me that similar systems could be applied to Rubik’s Cube (or similar) and come up with interesting results, all without doing humanlike research or rewriting their own code. Not that this is a particularly devastating argument within the narrower context of AGI.
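A hedged sketch of that “fixed search, no self-modification” point (a toy permutation puzzle standing in for the Rubik’s Cube; every name here is made up for illustration): iterative-deepening enumeration over a fixed move set can turn up solutions nobody has written down, yet the program never rewrites its own code or does anything resembling original computer science research.

```python
from itertools import product

# A toy permutation puzzle standing in for the Rubik's Cube: states are
# orderings of four tokens, and the two legal moves are fixed permutations.
MOVES = {
    "swap01": lambda s: (s[1], s[0], s[2], s[3]),
    "rotate": lambda s: (s[1], s[2], s[3], s[0]),
}
SOLVED = (0, 1, 2, 3)

def solve(state, max_depth=10):
    """Iterative-deepening search with a fixed move set.

    The algorithm never inspects or rewrites its own code; it just
    enumerates move sequences until one reaches the solved state.
    """
    for depth in range(max_depth + 1):
        for seq in product(MOVES, repeat=depth):
            s = state
            for name in seq:
                s = MOVES[name](s)
            if s == SOLVED:
                return list(seq)   # first (and therefore shortest) solution found
    return None

print(solve((3, 0, 1, 2)))  # ['rotate'] brings this scrambled state home
```

Real theorem provers and cube solvers are far more sophisticated, but they share this shape: a fixed search procedure over a fixed set of moves or inference rules, which is how they can produce results humans haven’t without any self-rewriting.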
ETA: Odd. I really didn’t expect this to be downvoted. If I’m making some obvious mistake, I’d appreciate knowing what it is.