I’m deeply confused. How can you even define the difference between tool AI and FAI?
I assume that even a tool AI is supposed to be able to opine on relatively long sequences of input. In particular, to be useful it must be able to accumulate information over essentially unbounded time periods. Say you want advice about where to position your air defenses: you must be able to go back to the AI system each day, hand it updates on enemy activity, and expect it to integrate that information with what it received during previous sessions. Whether you re-upload this information each time you ask a question or not, in effect the AI has periods in which it is loaded with a significant amount of information about past events.
But now you face the problem that self-modification is indistinguishable from the simple storage of data. The existence of universal Turing machines demonstrates that much: simply by loading information into memory, one can generate behavior corresponding to any kind of (software) self-modification.
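To make that concrete, here is a minimal sketch (the mini instruction set is invented for illustration): a fixed interpreter whose own code never changes, yet whose behavior is entirely determined by the data loaded into its memory. Swapping in new program-as-data is behaviorally equivalent to self-modification.

```python
def interpreter(memory):
    """A tiny fixed universal machine: memory is a list of instructions."""
    output, pc = [], 0
    while pc < len(memory):
        op, arg = memory[pc]
        if op == "emit":       # append arg to the output
            output.append(arg)
        elif op == "jump":     # transfer control to instruction arg
            pc = arg
            continue
        pc += 1
    return output

# The interpreter itself is never edited; only its stored data differs,
# yet its behavior is arbitrarily different.
print(interpreter([("emit", "defend the ridge")]))
print(interpreter([("emit", "flank left"), ("emit", "hold reserves")]))
```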
So perhaps the supposed difference is that this AI won’t actually take direct actions, merely make verbal suggestions. Well, it’s awfully optimistic to suppose that no one will get lazy, or that exigencies won’t drive them to connect a simple script to the machine which takes sentences of the form “I recommend you deploy your troops in this manner” and directly sends the orders. Even failing that, the machine still takes direct action in the form of making statements that influence human behavior.
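The glue script really is trivial. A hedged sketch, with send_orders and the exact recommendation format both hypothetical:

```python
import re

def send_orders(orders: str) -> None:
    # Stand-in for whatever command channel actually issues the orders.
    print(f"ORDERS ISSUED: {orders}")

def autopilot(oracle_output: str) -> None:
    # Match sentences like "I recommend you deploy your troops ..." and
    # execute them directly, with no human left in the loop.
    match = re.search(r"I recommend you (.+)", oracle_output)
    if match:
        send_orders(match.group(1))

autopilot("I recommend you deploy your troops along the eastern ridge.")
```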
You might argue that a tool AI is one whose advice doesn’t require self-reference or consideration of its own future actions, so it is somehow different in kind. However, again simple analysis reveals this can’t be so. Imagine again the basic question: “How should I position my forces to defend against the enemy attack?” Given that the enemy is likely to react in certain ways, correct advice requires the tool AI to consider whether future responses will be orchestrated by itself or by a human, who may be unable to handle certain kinds of complexity or may be inclined toward different sorts of responses. Thus even a purely advisory AI needs the ability to project likely outcomes based on its own likely future behaviors.
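Here is a toy sketch of that dependence (the policies and payoffs are invented): the very same advisor, evaluating the very same plans, recommends differently depending on whether later decision points are handled by its own policy or by an error-prone human one.

```python
import random

random.seed(0)

def advisor_policy(options):
    # The advisor reliably picks the best follow-up it can see.
    return max(options, key=lambda o: o[1])

def human_policy(options):
    # A human sometimes mishandles complexity: here, a 30% chance of
    # picking a follow-up at random instead of the best one.
    if random.random() < 0.3:
        return random.choice(options)
    return max(options, key=lambda o: o[1])

def plan_value(first_move_value, followups, responder, trials=10_000):
    # Expected value of a plan = value of the first move plus the value
    # of whichever follow-up the responder ends up choosing.
    total = 0.0
    for _ in range(trials):
        total += first_move_value + responder(followups)[1]
    return total / trials

# Two candidate first moves: "simple" leaves easy follow-ups; "complex"
# leaves follow-ups that are better on paper but easy to botch.
simple_followups = [("hold", 5.0), ("withdraw", 4.0)]
complex_followups = [("feint", 9.0), ("overextend", -8.0)]

for name, responder in [("advisor", advisor_policy), ("human", human_policy)]:
    simple = plan_value(3.0, simple_followups, responder)
    risky = plan_value(1.0, complex_followups, responder)
    best = "complex" if risky > simple else "simple"
    print(f"responder={name}: simple={simple:.2f}, complex={risky:.2f} "
          f"-> recommend the {best} plan")
```

With the advisor handling follow-ups the complex plan wins; with the human it loses, so correct advice requires modeling who will respond later.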
Now it seems we are again in the realm of ‘FAI’, since one has to ensure that the advice given by the machine, when it is presented with indefinitely long, complex historical records, won’t end up encouraging the outcome where someone connects permanent memory and wires up the ability to take direct action. After all, if the advice is to be of maximum usefulness, the tool AI must be programmed to give advice that best helps the people asking achieve the goals they ask about. Since such goals could quite reasonably be advanced by giving the AI the ability to take direct action, and since the reasons behind the advice can’t ever be entirely explained to humans (even Deep Blue’s play went beyond what could be fully explained to humans), I don’t see how the problem isn’t just as complicated as ‘FAI’.
I guess it comes down to this: if you can’t formulate the notion precisely, I’m skeptical it’s coherent.
An Oracle determines which action would produce higher utility, then outputs it. An “Agent AGI” determines which output will produce higher utility, then outputs it. It’s a question of optimizing the output or merely outputting optimization.
And yes, you can easily turn an Oracle into an Agent.
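A toy rendering of the distinction, with the world model and utilities invented for illustration: the Oracle optimizes over actions and merely describes the winner, the Agent optimizes over its own outputs scored by what they cause, and the last line shows how cheaply the one becomes the other.

```python
ACTIONS = {"hold": 2.0, "advance": 5.0, "retreat": 1.0}

def action_utility(action: str) -> float:
    # Toy world model: how good the world is if this action is taken.
    return ACTIONS[action]

def oracle() -> str:
    # "Outputting optimization": report the best action found.
    best = max(ACTIONS, key=action_utility)
    return f"I recommend you {best}"

def output_utility(utterance: str) -> float:
    # "Optimizing the output": score an utterance by its consequences;
    # in this toy, an order simply causes the named action.
    return action_utility(utterance.removeprefix("order: "))

def agent() -> str:
    return max((f"order: {a}" for a in ACTIONS), key=output_utility)

def execute(order: str) -> None:
    print(f"executing: {order.split(': ', 1)[-1]}")

print(oracle())    # advice only: "I recommend you advance"
execute(agent())   # direct action: "executing: advance"
# The trivial Oracle-to-Agent conversion: pipe the report into an actuator.
execute("order: " + oracle().removeprefix("I recommend you "))
```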