Even with a limited AGI given a very specific goal (build 1,000 cars), the problem is not automatically solved.
The AI might deduce that as long as humans exist, there is a nonzero probability that a human will prevent it from finishing the task, so to be completely safe, all humans must be eliminated.
Or it might deduce that there is an even higher probability that it will either (1) fail to eliminate the humans and be shut down itself, or (2) encounter problems for which it needs, or would greatly benefit from, human cooperation.