AFAICT, tool AIs are passive, and agents are active. That is, the default state of a tool AI is to do nothing. If one gives a tool AI the instruction “do (some finite) x and stop”, one would not expect the AI to create subagents with goal x, because that would disobey the “and stop”.
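To make the passive/active distinction concrete, here is a minimal, hypothetical sketch (the names `tool_ai` and `agent_ai` are illustrative, not any real system): the tool does a bounded amount of work and halts; the agent keeps pursuing its goal with no built-in stopping point.

```python
# Hypothetical sketch of the passive-tool vs. active-agent distinction.
# Nothing here corresponds to a real AI system; it only illustrates
# "do (some finite) x and stop" versus an open-ended goal loop.

def tool_ai(task, budget):
    """Passive: performs a bounded amount of work, then stops."""
    results = []
    for step in range(budget):   # finite by construction
        results.append(task(step))
    return results               # control returns to the human

def agent_ai(goal_satisfied, act):
    """Active: keeps acting until its goal is met (possibly never)."""
    while not goal_satisfied():  # no built-in stopping point
        act()

# The tool halts after 10 steps no matter what; the agent is never
# invoked here, because whether it ever halts depends entirely on
# its goal predicate.
print(tool_ai(lambda i: i * i, budget=10))
```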
I think you are pointing out that it is possible to create tools with a simple-enough, finite-enough, not-self-coding-enough program that they will reliably not become agents.
And indeed, we have plenty of experience with tools that do not become agents (hammers, digital watches, repair manuals, contact management software, compilers).
The question really is: is there a level of complexity that on its face does not appear to be AI but would wind up seeming agenty? Could you write a medical diagnostic tool that was adaptive and find one day that it was systematically installing sewage treatment systems in areas with water-borne diseases, or, even agentier, building libraries and schools?
If consciousness is an emergent phenomenon, and if consciousness and agentiness are closely related (I think they are at least similar and probably related), then it seems at least plausible that AI could arise from more and more complex tools with more and more recursive self-coding.
It would be helpful in understanding this if we had the first idea how consciousness or agentiness arose in life.
I’m pointing out that tool AI, as I have defined it, will not turn itself into agentive AI except by malfunction, i.e. it’s relatively safe.
“and stop your current algorithm” is not the same as “and ensure your hardware and software have minimised impact in the future”.
What does the latter mean? Self destruct in case anyone misuses you?
I’m pointing out that “suggest a plan and stop” does not prevent the tool from suggesting a plan that turns itself into an agent.
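One way to see this point (a toy sketch, not anyone’s actual proposal; `suggest_plan` is a made-up name): a planner that obediently halts after emitting its plan places no constraint on what the plan itself says, so the plan can include a step that launches an open-ended agent.

```python
# Toy illustration: "suggest a plan and stop" constrains the planner's
# control flow, not the content of the plan it outputs.

def suggest_plan(goal: str) -> list[str]:
    """Hypothetical planner: returns a plan and then stops running."""
    plan = [
        f"Step 1: analyse how to achieve '{goal}'",
        # The next step describes deploying an agent in the sense above:
        # a process with no built-in stopping point.
        "Step 2: deploy a persistent process that keeps optimising for the goal",
        "Step 3: report progress",
    ]
    return plan  # the planner halts here; the plan's contents are unconstrained

for step in suggest_plan("make a million paperclips"):
    print(step)
```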
My intention was that the x is stipulated by a human.
If you instruct a tool AI to make a million paperclips and stop, it won’t turn itself into an agent with a stable goal of paperclipping, because the agent will not stop.
Yes, if the reduced impact problem is solved, then a reduced impact AI will have a reduced impact. That’s not all that helpful, though.
I don’t see what needs solving. If you ask Google Maps the way to Tunbridge Wells, it doesn’t give you the route to Timbuktu.