sometimes there would be multiple possible self-consistent models
I’m not sure what you’re getting at here; you may have a different conception of predictive-processing-like decision theory than I do. I would say “I will get up and go to the store” is a self-consistent model, “I will sit down and read the news” is a self-consistent model, etc. etc. There are always multiple possible self-consistent models—at least one for each possible action you might take.
Oh, maybe you’re taking the perspective where, if you’re hungry, you put a high prior on “I will eat soon”. Yeah, I just don’t think that’s right, or if there’s a sensible way to think about it, I haven’t managed to get it despite some effort. I think if you’re hungry, you want to eat because eating leads to a predicted reward, not because you have a prior expectation that you will eat. After all, if you’re stuck on a lifeboat in the middle of the ocean, you’re hungry but you don’t expect to eat. This is an obvious objection, frequently brought up, and Friston & colleagues hold firm that it’s not a problem for their theory, but I can’t make heads or tails of their counterargument. I discussed my version (where rewards are also involved) here, and then here I went into more depth on a specific example.
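To make the lifeboat point concrete, here’s a toy sketch (my own framing, with made-up numbers—not Friston’s actual formulation or anyone’s real model) contrasting the two decision rules: picking the action with the highest expected reward versus picking the action that minimizes the mismatch with a hunger-induced prior that “I will eat soon”.

```python
# Toy sketch of the lifeboat example (my own framing with made-up numbers,
# not Friston's formulation): contrast two ways an agent might pick an action
# when it's hungry but no action actually leads to food.

# Hypothetical actions and the probability that each one results in eating.
# On the lifeboat, that probability is zero either way.
actions = {"search_the_boat": 0.0, "wait": 0.0}

REWARD_IF_EAT = 10.0  # hunger makes eating valuable

def expected_reward(p_eat):
    """Reward-based view: hunger raises the *value* of eating,
    not the *belief* that eating will happen."""
    return p_eat * REWARD_IF_EAT

# Prior-based view (as I understand the objection): hunger sets a high prior
# P("I will eat soon"), and action selection minimizes the mismatch between
# that prior and what the agent actually predicts will happen.
PRIOR_P_EAT_WHEN_HUNGRY = 0.9

def prediction_error(p_eat):
    return abs(PRIOR_P_EAT_WHEN_HUNGRY - p_eat)

for name, p_eat in actions.items():
    print(f"{name}: expected reward = {expected_reward(p_eat):.1f}, "
          f"prediction error = {prediction_error(p_eat):.1f}")
```

On the reward-based story nothing odd happens: both actions have zero expected reward, the agent still wants food, and it correctly predicts it won’t get any. On the prior-based story the hungry castaway is stuck with a large prediction error no matter what they do, and I don’t see how the counterargument makes that go away.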