I’m remembering an Asimov story featuring robots that can’t kill humans but can allow harm to come to them. At Time A they put humans into a deadly situation, knowing that they are able to save them, so no real threat yet exists; then at Time B they simply don’t bother to save them after all.
You are thinking of the short story “Little Lost Robot”, in which a character (robopsychologist Susan Calvin) speculates that a robot built with a weakened First Law (one omitting “or, through inaction, allow a human being to come to harm”) might harm a human in exactly that way, though this never actually occurs in the story.