What is this supposed to be estimating or predicting with Bayes here? The thing you’ll end up doing? Something like this:
Each of the 3 processes has a general prior about how often it “wins” (the priors add up to 100%, or maybe the basal ganglia normalizes them), plus a Bayes factor given the “sensory” inputs specific to its own process, while remaining agnostic about what the other processes are seeing. For example, the reinforcer would be thinking: “I get my way 30% of the time. Also, this level of desire to play the game is 2 times more frequent when I end up getting my way than when I don’t (regardless of which of the other 2 won, let’s assume; otherwise I don’t know how to keep this modular).” Similarly, the first process would be looking at the level of laziness, and the last one at the strength of the arguments or something.
Then the basal ganglia applies Bayes’ rule to update the priors given the 3 pieces of evidence, and arrives at a posterior probability distribution over the 3 options.
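If it helps, here’s a minimal numerical sketch of the update I’m imagining; the process names, priors, and Bayes factors are all made up for illustration. (One thing the sketch suggests: under the “regardless of which of the other 2 won” assumption, each process’s P(its evidence | it loses) term factors out as a constant shared across the 3 hypotheses, so posterior ∝ prior × Bayes factor actually is modular.)

```python
import numpy as np

# Hypothetical setup; names and numbers are illustrative only.
processes = ["laziness-tracker", "reinforcer", "reasoner"]

# General priors on winning (sum to 1, or the basal ganglia normalizes).
priors = np.array([0.5, 0.3, 0.2])

# Each process's Bayes factor from its own private evidence, e.g. the
# reinforcer's P(this level of desire | I win) / P(same level | I lose) = 2.
bayes_factors = np.array([0.8, 2.0, 1.5])

# If each process's evidence distribution given losing doesn't depend on
# which rival won, the P(evidence_j | j loses) terms are a shared constant
# across the 3 hypotheses, so: posterior ∝ prior × Bayes factor.
unnormalized = priors * bayes_factors
posterior = unnormalized / unnormalized.sum()

for name, p in zip(processes, posterior):
    print(f"{name}: {p:.3f}")
# the reinforcer ends up with 0.6 / (0.4 + 0.6 + 0.3) ≈ 0.462, etc.
```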
And finally you’ll end up doing what was predicted because, well, the brain does what minimizes the prediction error. Is this the weird sense in which the info is mixed with Bayes and this is all Bayesian stuff?
I must be missing something. If this interpretation were correct, what would increasing the dopamine in, e.g., the frontal cortex be doing? Increasing the “unnormalized” prior for that process (like it falsely thinks it wins more often than it does, regardless of the evidence)? Or falsely biasing the Bayes factor (like it thinks it almost never happens that it feels this convinced of what should happen in the cases when it doesn’t end up winning)?
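To make the two hypotheses concrete in the same toy setup (again purely illustrative; modeling dopamine as a constant multiplicative gain on the favored process is my assumption):

```python
import numpy as np

priors = np.array([0.5, 0.3, 0.2])
bayes_factors = np.array([0.8, 2.0, 1.5])

def posterior(priors, bfs):
    unnorm = priors * bfs
    return unnorm / unnorm.sum()

# Hypothetical dopamine boost to the 3rd (frontal) process.
gain = np.array([1.0, 1.0, 3.0])

# Hypothesis A: dopamine inflates that process's unnormalized prior.
print("boosted prior:", posterior(priors * gain, bayes_factors))

# Hypothesis B: dopamine inflates that process's Bayes factor instead
# (it underestimates how often it feels this convinced and still loses).
print("boosted BF:   ", posterior(priors, bayes_factors * gain))
```

Amusingly, with a constant multiplicative gain the two hypotheses give the identical posterior (both just scale the same unnormalized weight before normalization), so in this toy they’d only come apart if the boost interacted with evidence strength, or through what gets learned afterwards.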