To the contrary, the danger arises because the AI will interact with us in the interim between input and output, with requests for clarification, resources, and assistance. That is, it will realize that manipulating the outside world is a permitted method of achieving its mission.
Except this is not the case for the AI I describe in my post.
The AI I describe in my post cannot make requests for anything. It doesn't need clarification because we don't ask it questions in natural language at all! So I don't think your criticism applies to this specific model.