There are setups where each agent is using an unphysically large but finite amount of compute.
In a situation where you are asking a question about an ideal reasoner, making the agents finite means you are no longer asking about an ideal reasoner. If you put an ideal reasoner in a Newcomb problem, he may very well think “I’ll simulate Omega and act according to what I find” (or, more likely, run some more complicated algorithm that indirectly amounts to that). If the agent can’t do this, he may not be able to solve the problem. Of course, real humans can’t, but this may just mean that real humans, being finite, are unable to solve some problems.
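A toy sketch (my own illustration, not from the original argument) of why finite compute blocks the naive “simulate Omega” strategy: the agent simulates Omega, but Omega predicts by simulating the agent, so the recursion never bottoms out and any finite budget runs dry. The names `agent`, `omega`, and the `fuel` parameter are all illustrative assumptions.

```python
def omega(fuel):
    # Omega predicts the agent's choice by simulating the agent.
    if fuel <= 0:
        raise RuntimeError("out of compute")
    return agent(fuel - 1)

def agent(fuel):
    # The agent tries to act on what Omega will predict, by simulating Omega.
    if fuel <= 0:
        raise RuntimeError("out of compute")
    return omega(fuel - 1)

try:
    agent(100)  # any finite compute budget
except RuntimeError as e:
    print(e)  # the mutual simulation exhausts the budget before terminating
```

An ideal reasoner, with no bound on `fuel`, is not troubled by this regress (or escapes it by reasoning about Omega abstractly rather than step-by-step); a finite one is.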