If you are just trying to communicate risk, an analogy to a virus might be helpful in this respect. A natural virus can be thought of as code that has goals. If it harms humankind, it doesn't 'intend' to; the harm is simply a side effect of pursuing those goals. We might create an artificial virus with a goal that everyone recognizes as beneficial (e.g., ending malaria), but that nevertheless does harm, either through unintended consequences or because the artificial virus evolves and self-modifies its original goal. Note that once a virus is released into the environment, it is nontrivial to 'delete' or 'turn off'. AI will operate in an environment that is many times more complex: "mindspace".