Not super important, but maybe worth mentioning in the context of generalizing Pavlov: the Pavlov strategy for the iterated Prisoner's Dilemma (PD) can be seen as an extremely shortsighted version of the law of effect (LoE), which basically says: repeat actions that have worked well in the past (in similar situations). Of course, the LoE can be applied in a wide range of settings. For example, in their reinforcement learning textbook, Sutton and Barto write that the LoE underlies all of (model-free) RL.
Somewhat true, but without further bells and whistles, RL does not replicate the Pavlov strategy in the iterated Prisoner's Dilemma, so I think looking at it that way misses something important about what's going on.
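To make the comparison concrete, here's a minimal sketch (my own illustration, not from either comment) of Pavlov written as a one-step law-of-effect rule: keep your last move if it earned a good payoff, switch otherwise. The payoff numbers (T=5, R=3, P=1, S=0) and the `pavlov`/`play` helpers are assumptions chosen just for this example.

```python
# Pavlov ("win-stay, lose-shift") for the iterated PD, expressed as a
# one-step law of effect: repeat the last action iff it "worked".
# Payoff values are the usual T=5, R=3, P=1, S=0 convention (an assumption,
# not taken from the comments above).

COOPERATE, DEFECT = "C", "D"

PAYOFF = {  # payoff to the row player for (my_move, their_move)
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def pavlov(my_last, their_last):
    """Win-stay, lose-shift: keep the previous move if it paid at least R=3,
    otherwise switch. Cooperate on the first round by convention."""
    if my_last is None:
        return COOPERATE
    if PAYOFF[(my_last, their_last)] >= 3:                 # "win": stay
        return my_last
    return DEFECT if my_last == COOPERATE else COOPERATE   # "lose": shift

def play(strategy_a, strategy_b, rounds=10):
    """Play two memory-one strategies against each other; return the move history."""
    a_last = b_last = None
    history = []
    for _ in range(rounds):
        a = strategy_a(a_last, b_last)
        b = strategy_b(b_last, a_last)
        history.append((a, b))
        a_last, b_last = a, b
    return history

if __name__ == "__main__":
    # Two Pavlov players lock into mutual cooperation from the first round.
    print(play(pavlov, pavlov))
```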