I’m skeptical of any clear divide between the systems. Of course, there are more abstract and more primitive information paths, but they talk to each other, and I don’t buy that they can be cleanly separated.
Plans can be more or less complicated, and can involve “I don’t know how this part works, but it worked last time, so let’s do this”—and what worked last time can be very pleasurable and rewarding—so it doesn’t seem to break down cleanly into any one category.
I’d also argue that, to the extent that abstract planning is successful, it is because it propagates top-down and affects the lower Pavlovian systems. If your thoughts about your project aren’t associated with motivation and wanting to actually do something, then your abstract plans aren’t of much use. It just isn’t salient that this is happening unless the process is disrupted and you find yourself not doing what you “want” to do.
Another point that is worth stating explicitly is that algorithms for maximizing utility are not utility functions. In theory, you could have 3 different optimizers that all maximize the same utility function, or 3 different utility functions that all use the same optimizer—or any other combination.
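To make that distinction concrete, here’s a minimal sketch (my own illustration, with made-up names like `utility`, `random_search`, and `hill_climb`): one utility function and two unrelated optimizers that both maximize it. Swapping in a different utility changes what they aim at, not how they search.

```python
import random

def utility(x):
    """The utility function: just a mapping from states to values (peaks at x = 3)."""
    return -(x - 3.0) ** 2

def random_search(u, n=1000):
    """Optimizer 1: sample candidates at random and keep the best one."""
    return max((random.uniform(-10, 10) for _ in range(n)), key=u)

def hill_climb(u, x=0.0, step=0.1, iters=1000):
    """Optimizer 2: greedy local search from a starting point."""
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if u(candidate) > u(x):
                x = candidate
    return x

# Both optimizers approximate the same argmax (~3.0) while sharing no machinery,
# so "the optimizer" and "the utility function" are genuinely separate pieces.
print(random_search(utility), hill_climb(utility))
```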
I don’t think this is a purely academic distinction either—I think that we have conflicts at the same level all the time (multiple personality disorder being an extreme case). Conflicts between systems with no cross-talk between levels look like someone saying they want one thing and then doing another without looking bothered at all. When someone is obviously pained by the conflict, then both systems are clearly operating on an emotional level, even if the signals originated in different places. Or I could create a Pavlovian conflict in my dog by throwing a steak on the wet tile and watching as his conditioned fear of wet tile fights his conditioned desire for the steak.