Let’s be a bit more specific—that is one important point of the article, that as soon as the “Tool AI” definition becomes more specific, the problems start to appear.
Sure, but part of my point is that there are multiple options for a Tool AI definition. The one I prefer is narrow AIs that can each answer a particular kind of question well. To answer an arbitrary question, you then need a Tool that decides which Tools to call on the question, each of those Tools themselves, and finally a Tool that selects which answers to present to the user.
What would be awesome is if we could write an AI that would write those Tools itself. But that requires general intelligence, because it needs to understand the questions in order to write the Tools. (This is what the Oracle in a box looks like to me.) That, however, is also really difficult and dangerous, for reasons we don’t need to go over again. Notice that Holden’s claim, that his Tools don’t need to gather data because they’ve already been supplied with a dataset, couldn’t be a reasonable limitation for an Oracle in a box (unless it’s a really big box).
I think the discussion would be improved by making more distinctions like that, and trying to identify the risk and reward of particular features. That would be demonstrating what FAI thinkers are good at.