“and stop your current algorithm” is not the same as “and ensure your hardware and software have minimised impact in the future”.
What does the latter mean? Self-destruct in case anyone misuses you?
I’m pointing out that “suggest a plan and stop” does not prevent the tool from suggesting a plan that turns itself into an agent.
My intention was that the X is stipulated by a human.
If you instruct a tool AI to make a million paperclips and stop, it won’t turn itself into an agent with a stable goal of paperclipping, because such an agent would not stop.
Yes, if the reduced impact problem is solved, then a reduced impact AI will have a reduced impact. That’s not all that helpful, though.
I don’t see what needs solving. If you ask Google Maps the way to Tunbridge Wells, it doesn’t give you the route to Timbuktu.