I read you to be asking “what decision theory is implied by predictive processing as it’s implemented in human brains”. My understanding is that while there are attempts to derive something like a “decision theory formulated entirely in PP terms”, there are also serious arguments that the brain actually has systems implementing conventional decision theories which are not directly derivable from PP.
Let’s say you try, as some PP theorists do, to explain all behavior as free energy minimization rather than expected utility maximization. Ransom et al. (2020) note that this makes it hard to explain cases where the mind acts according to a prediction that has a low probability of being true but a high cost if true.
For example, the sound of rustling grass might indicate either the wind or a lion; if wind is more likely, then predictive processing says that wind should become the predominant prediction. But for your own safety it can be better to predict that it’s a lion, just in case. “Predict a lion” is also what standard Bayesian decision theory would recommend, and it seems like the correct solution… but to get that correct solution, you need to import Bayesian decision theory as an extra ingredient; it doesn’t fall naturally out of the predictive processing framework.
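To make the asymmetry concrete, here is a minimal sketch (my own illustration, with made-up probabilities and losses rather than anything taken from Ransom et al.) of how “pick the most probable hypothesis” and “pick the action with the lowest expected loss” can come apart:

```python
# Illustrative numbers only: posterior over causes of the rustling,
# and a loss table loss[action][true state].
p = {"wind": 0.95, "lion": 0.05}
loss = {
    "relax": {"wind": 0, "lion": 100},   # relaxing when it's a lion is very costly
    "flee":  {"wind": 1, "lion": 0},     # fleeing from mere wind costs only a little
}

# Naive "predict the most probable cause": take the MAP hypothesis.
map_hypothesis = max(p, key=p.get)       # -> "wind"

# Bayesian decision theory: minimize expected loss over actions.
expected_loss = {a: sum(p[s] * loss[a][s] for s in p) for a in loss}
best_action = min(expected_loss, key=expected_loss.get)  # -> "flee"

print(map_hypothesis, best_action, expected_loss)
# expected loss: relax ≈ 5.0, flee ≈ 0.95, so "flee" wins despite lion being unlikely
```

The most probable cause is still wind, but the expected-loss calculation says to act as if there might be a lion; the loss function doing that work is exactly the extra ingredient that PP by itself doesn’t supply.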
That sounds to me like PP, or at least PP as it currently exists, is something that’s compatible with implementing many different decision theories, rather than something that implies a specific decision theory by itself.
I generally agree with this. Specifically, I tend to imagine that PP is trying to make our behavior match a model in which we behave like an agent (at least sometimes). Thus, for instance, the tendency for humans to do things which “look like” or “feel like” optimizing for X without actually optimizing for X.
In that case, PP would be consistent with many decision theories, depending on the decision theory used by the model it’s trying to match.
I don’t buy the lottery example. You never encoded the fact that you know tomorrow’s numbers. Shouldn’t the prior be that you win a million guaranteed if you buy the ticket?
Did you post this comment in the right place?
No. I wrote it on mobile. I noticed that my comment dialog was at the wrong parent, scrolled back up to the comment I wanted to reply to, found the text I had already written when I pressed reply, and finished it without much further thought on the matter. Perhaps the reply button sent me to the wrong comment both times?