There is a name for ‘a finite analogy to a Turing machine’ - it’s called a finite state machine (FSM). You are correct that a sufficiently large finite state machine can simulate a smaller one.
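A minimal sketch of what I mean by ‘simulate’ here (the `FSM` class and the parity example are my own toy constructions, not anything from the original discussion): the larger machine simply embeds the smaller machine’s states and transition table, plus some extra states it never enters, so on any input sequence it traces out the same state sequence.

```python
class FSM:
    """A deterministic finite state machine over a finite input alphabet."""
    def __init__(self, transition, start):
        self.transition = transition  # dict mapping (state, symbol) -> next state
        self.state = start

    def step(self, symbol):
        self.state = self.transition[(self.state, symbol)]
        return self.state

# A 2-state machine over {0, 1}: it tracks the parity of the 1s it has seen.
small = FSM(
    transition={("even", 0): "even", ("even", 1): "odd",
                ("odd", 0): "odd", ("odd", 1): "even"},
    start="even",
)

# A "bigger" machine: it copies the small machine's transition table verbatim
# and carries two extra states ("x", "y") that it simply never enters.
big = FSM(
    transition={("even", 0): "even", ("even", 1): "odd",
                ("odd", 0): "odd", ("odd", 1): "even",
                ("x", 0): "x", ("x", 1): "y",
                ("y", 0): "y", ("y", 1): "x"},
    start="even",
)

inputs = [1, 0, 1, 1, 0]
assert [small.step(s) for s in inputs] == [big.step(s) for s in inputs]
```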
If agents must be FSMs, case a) applies. Your agents can of course get suboptimal outcomes, and many standard axioms of game theory do not apply[1]. (For instance: a game’s expected value can be lowered by adding another option[2] or by raising the payoff of an option[3]; there is a toy sketch of both failures after the footnotes.)
[1] Or apply in modified forms.
[2] Once the FSM is no longer large enough to scan through all outcomes.
[3] Once the FSM is no longer large enough to do the necessary arithmetic on the output payoffs.
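To make footnotes [2] and [3] concrete, here is a toy model (the `SCAN_BUDGET` window and the 4-bit payoff register are assumptions I am introducing purely for illustration, not anything in the original setup): an agent that can only examine a fixed number of options, and one whose payoff arithmetic wraps around.

```python
SCAN_BUDGET = 2  # stand-in for how many outcomes the agent's states can cover

def bounded_choice(payoffs):
    """Pick the best payoff among the first SCAN_BUDGET options only ([2])."""
    return max(payoffs[:SCAN_BUDGET])

# Adding an option pushes the best payoff outside the window the agent can
# scan, so the value it actually obtains drops from 10 to 5.
print(bounded_choice([5, 10]))      # 10
print(bounded_choice([3, 5, 10]))   # 5

def stored_payoff(x, bits=4):
    """A payoff held in a fixed-width register wraps around on overflow ([3])."""
    return x % (1 << bits)

# Raising an option's payoff from 15 to 16 overflows the 4-bit register, so
# the agent now ranks that option below one worth 7.
print(stored_payoff(15), stored_payoff(16))  # 15 0
```

Both failures disappear once the machine is large enough to hold all the options and payoffs, which is the sense in which the standard axioms only break down past those thresholds.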
I’m not very shocked by the fact that realistically finite agents can make suboptimal decisions.
Neither am I, which is why I am surprised and confused by people seemingly attaching a fair amount of importance to Newcomb’s problems.