Another reason not to integrate is that integration is actually just bad in some circumstances. You don’t want all your heuristics to propagate to all possible domains at once, since many of them wouldn’t be applicable there, and having too many options would likely make your decision-making worse. Some kinds of drug experiences demonstrate this.
I’m not sure if I’d treat “different heuristics in different domains” as an example of non-integration. At least it feels different from the inside. If someone points out to me that I’m not applying a programming heuristic when dealing with humans, I’m likely to react with “well, that’s because I’m dealing with humans, not code”, rather than noticing something that feels like a contradiction.
A contradiction feels more like having the heuristics (when X, do A) and (when Y, do not-A), and it then being pointed out to me that actually, in this situation, X and Y both apply.
I’m reminded of this excerpt from a recent paper, Holistic Reinforcement Learning: The Role of Structure and Attention (Trends in Cognitive Sciences, Angela Radulescu, Yael Niv & Ian Ballard 2019; Sci-hub version):

Bayesian nonparametric models group perceptual observations into unobserved ‘latent causes’ (or clusters) [52–55]. For example, consider a serial reversal learning task in which the identity of the high-reward option sporadically alternates. In such tasks, animals initially learn slowly but eventually learn to respond rapidly to contingency changes [56]. Bayesian nonparametric models learn this task by grouping reward outcomes into two latent causes: one in which the first option is better and one in which the second option is better. Once this structure is learned, the model displays one-shot reversals after contingency changes because it infers that the latent cause has changed.

This inference about latent causes in the environment has also shed light on several puzzling conditioning effects. When presented with a neutral stimulus such as a tone followed by a shock, animals eventually display a fear response to the tone. The learned fear response gradually diminishes when the tone is later presented by itself (i.e., in extinction) but often returns after some time has passed. This phenomenon is known as spontaneous recovery. Bayesian nonparametric models attribute spontaneous recovery to the inference that extinction signals a new environmental state, which prevents old associations from being updated [57]. Bayesian nonparametric models also predict that gradual extinction will prevent spontaneous recovery, a finding borne out by empirical data [57]. In gradual extinction, the model infers a single latent state and gradually weakens the association between that state and the aversive outcome, thereby abolishing the fear memory.
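To make the mechanism in the excerpt concrete, here’s a minimal sketch (my own, not from the paper) of latent-cause inference on a serial reversal task. It assumes two known causes with made-up reward probabilities and a sticky transition prior over causes; full Bayesian nonparametric models instead infer the number of causes from the data (e.g., with a Chinese restaurant process prior). The qualitative point carries over: after a reversal, a couple of surprising outcomes flip the posterior over causes, so behavior switches almost immediately instead of slowly relearning values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical latent causes, each a fixed reward profile over two options.
# Cause 0: option 0 pays off with p = 0.9, option 1 with p = 0.1; cause 1 is
# the mirror image. These numbers are illustrative assumptions, not values
# from the paper.
reward_prob = np.array([[0.9, 0.1],   # cause 0
                        [0.1, 0.9]])  # cause 1

# Sticky transition prior over causes (assumed 5% chance of switching per trial).
trans = np.array([[0.95, 0.05],
                  [0.05, 0.95]])

belief = np.array([0.5, 0.5])  # posterior over which cause is currently active
true_cause = 0

for trial in range(100):
    if trial == 50:
        true_cause = 1  # contingency reversal

    # Greedy choice: pick the option with the higher expected reward
    # under the current belief over causes.
    choice = int(np.argmax(belief @ reward_prob))
    reward = rng.random() < reward_prob[true_cause, choice]

    # Bayesian update: outcome likelihood under each cause, times the
    # transition prior applied to the previous belief.
    lik = reward_prob[:, choice] if reward else 1 - reward_prob[:, choice]
    belief = lik * (trans.T @ belief)
    belief /= belief.sum()

    if 48 <= trial <= 53:
        print(f"trial {trial}: chose {choice}, P(cause 1) = {belief[1]:.2f}")
```

Running this, the printed posterior over causes flips from near 0 to above 0.8 within a trial or two of the reversal at trial 50, which is the rapid reversal behavior the excerpt describes: the model reinterprets which cause is active rather than incrementally updating the old associations.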