I’m not at all convinced by the claim that valence is a roughly linear function over the concepts included in a thought, if I may paraphrase. After laying out a counterexample, you seem to be constructing a separate family of concepts that better fits a linear model. But (a) this is post-hoc and potentially ad-hoc, and (b) you’ve given us little reason to expect that such a family of concepts will always exist. It would help if you could outline how a privileged set of concepts arises for a given person, one that explains their valences.
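To make sure I’m objecting to the right thing, here is the shape of the claim as I read it (my notation, purely illustrative, not yours):

$$V(T) \approx \sum_{c \in T} w_c,$$

where $T$ is the set of concepts active in a thought, $w_c$ is a fixed per-concept valence weight, and $V(T)$ is the valence of the thought as a whole. My worry is that, for a given person, there may be no fixed assignment of weights $w_c$ that fits their actual valences, unless the concepts $c$ are chosen after the fact to make the fit work.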
Also, your definition of “innate drives” works for the purpose of collecting all valences into a single category explained by one basket of root causes. But it’s a diverse basket. I think you’re missing the opportunity to draw a distinction (“Wanting vs. Liking Revisited”) that is useful for understanding human motivations.
Thanks for your comment! I explicitly did not explain or justify the §2.4.1 (and especially §2.4.1.1) claim, so you have every right to be skeptical of it. :) Suppose, for the sake of argument, that it’s wrong; the rest of the series doesn’t rely on it much. The exception is a discussion of brainstorming coming up in the next post. If the §2.4.1 claim is wrong, then that brainstorming-related text would need editing, but I would bet that there’s a weaker version of §2.4.1, something like “it can be approximately linear under certain circumstances…”, that is still adequate to explain the things I want to explain.
If that’s not good enough to satisfy you, well, my plan for now is to wait until somebody else publishes a correct model with all the gory details of “concepts” in the cortex. After that happens, I will be happy to freely chat about that topic. It’s bound to happen sooner or later! I don’t want to help that process along, because I would prefer “later” over “sooner”. But most neuro-AI researchers aren’t asking me for my opinion.
I talked about wanting vs. liking in §1.5.2. I have a little theory with some more details about wanting-vs-liking, but it involves a lot of background that I didn’t want to get into, and nothing else I care about right now seems to depend on it, so I have declared it out-of-scope for this series, beyond the brief intuitive discussion in §1.5.2. UPDATE: I wound up writing an appendix with much more detail on wanting-vs-liking.
I strongly agree that innate drives are diverse and complex, and well worth understanding in great detail. That is a major research interest of mine. It’s true that this particular series mostly treats them as an undifferentiated category—but as a general policy, I think it’s good and healthy to talk about narrow topics, which inevitably entails declaring many important and interesting things to be out-of-scope. :)