Minor point from Nick Bostrom: an agent AI may be safer than a tool AI, because if something goes unexpectedly wrong, then an agent with safe goals should turn out to be better than a non-agent whose behaviour would be unpredictable.
I don’t think this makes any sense. A tool AI has no autonomous behavior. It computes a function. Its output has no impact on the world until a human uses it. The phrase “tool AI” implies to me that we are not talking about an AI that you ask, for instance, to “fix the economy”; we are talking about an AI that you ask questions such as, “Find me data showing whether lowering taxes increases tax revenue.”
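To make the distinction concrete, here is a minimal sketch in Python. All names here are hypothetical placeholders rather than any real system's API; the point is only that the tool is a pure function whose output does nothing until a human reads it and acts on it, while the agent acts on the world itself.

```python
# Purely illustrative sketch; every name is a hypothetical placeholder.

def tool_ai(query: str) -> str:
    """Tool AI: a pure function. It returns an answer and has no side
    effects; nothing happens until a human reads the answer and decides
    what to do with it."""
    return f"Here is the requested data for: {query!r}"

class World:
    """Stand-in for the external world an agent can act on."""
    def __init__(self) -> None:
        self.log: list[str] = []

    def apply(self, action: str) -> None:
        self.log.append(action)  # side effect: the world changes

def agent_ai(goal: str, world: World, steps: int = 3) -> None:
    """Agent AI: chooses and executes actions in pursuit of a goal,
    changing the world directly, with no human in the loop."""
    for i in range(steps):
        world.apply(f"step {i} toward {goal!r}")

if __name__ == "__main__":
    # Tool: the human asks a question, then acts (or not) on the answer.
    answer = tool_ai("Find me data showing whether lowering taxes "
                     "increases tax revenue.")
    print(answer)

    # Agent: the system itself acts on the world.
    w = World()
    agent_ai("fix the economy", w)
    print(w.log)
```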
Also, an agent with safer goals than humans have (which is a high bar, but not nearly as high a bar as some alternatives) is safer than humans with equivalently powerful tools.
How is this helpful? This is true by definition of the word “safer”. The problem is knowing whether an agent has safer goals, or what “safer” means.