Think of the situation in the last round of an iterated Prisoner’s Dilemma with a known, finite length. Because of the variety of agents you might be dealing with, the payoffs there aren’t strictly Newcomblike, but they’re closely related. Assuming reasonably bright agents with some insight into your behavior (e.g. you are a software agent and your opponent has access to your source code), there’s a large class of opposing strategies that will cooperate if they model you as likely to cooperate (but, perhaps, don’t model you as a CooperateBot) and will defect otherwise. If you know you’re dealing with an agent like that, then defecting is analogous to two-boxing in Newcomb’s Problem.
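As a toy illustration of that strategy class, here’s a minimal Python sketch. Everything in it is an assumption for illustration, not part of the original setup: agents are plain functions, “access to your source code” is modeled as being able to call your opponent on a hypothetical counterpart, and prediction is bounded simulation. The names (`conditional_bot`, `cooperate_bot`, `defect_bot`, `MAX_DEPTH`) are all hypothetical.

```python
MAX_DEPTH = 3  # recursion cutoff for the simulated prediction


def cooperate_bot(opponent, depth=0):
    """Unconditional cooperator -- the kind of agent the conditional
    strategy below declines to reward."""
    return "C"


def defect_bot(opponent, depth=0):
    """Unconditional defector."""
    return "D"


def conditional_bot(opponent, depth=0):
    """Cooperate iff the opponent (a) is predicted to cooperate with us
    and (b) isn't a CooperateBot; otherwise defect.

    Prediction is simulated by running the opponent against ourselves at
    an incremented depth. Past MAX_DEPTH we optimistically assume
    cooperation -- a crude stand-in (an assumption of this sketch) for
    the reasoning a real agent would need to ground the mutual recursion."""
    if depth > MAX_DEPTH:
        return "C"
    if opponent is cooperate_bot:  # "don't model you as a CooperateBot"
        return "D"
    predicted = opponent(conditional_bot, depth + 1)
    return "C" if predicted == "C" else "D"


if __name__ == "__main__":
    for name, foe in [("cooperate_bot", cooperate_bot),
                      ("defect_bot", defect_bot),
                      ("conditional_bot", conditional_bot)]:
        print(f"conditional_bot vs {name}: {conditional_bot(foe)}")
    # -> D against cooperate_bot, D against defect_bot, C against itself
```

Note that the optimistic base case is doing real work here: if the cutoff assumed defection instead, the simulated prediction would bottom out in “D” and propagate it back up the recursion, which is exactly the backward-induction unraveling that a known last round invites.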