“The “subjective system” evolved from something like a basic reinforcement learning architecture, and it models subjective expectation and this organism’s immediate rewards, and isn’t too strongly swayed by abstract theories and claims.”
I think this overestimates the degree to which a) (primitive) subjective systems are reward-seeking and b) “personal identities” are really definable, non-volatile, static entities rather than folk-psychological dualistic concepts in a Cartesian theater (cf. Dennett). For sufficiently complex adaptive systems (organisms), there is no sufficiently good correlation between the reward signal and the organism’s actual intended long-term goals. This non-linear relationship between the present reward signal and the actual long-term/terminal goals held in sensory and declarative memory creates selective pressure for multiple senses of personal identity over time.

This is precisely why high-level abstract models and a rich integration of all kinds of timestamped and labeled instances of sensory data start to emerge inside the unitary phenomenal world-simulations of these organisms once they face social dilemmas in which the Nash equilibrium is not Pareto-efficient and noncooperative self-interest is disadvantageous: we have forever-changing episodic simulations of possible identities over time, and some of these simulations are deeply hardwired into our sense of fairness (e.g. the Ultimatum Game) and empathetic understanding (putting yourself in another organism’s shoes). Organisms started to encode abstractions (memories) about their strategies, goals, and the rewards associated with different changing identities when competing against opponent organisms that use the same level of memory to condition their play on the past (whether for punishment, for helping parents, or for the indirect reciprocity of other identities).

So I don’t think it’s right to say that “we” don’t base our decisions on our abstract world-model. I think “we” (the personal identities that are possibly encoded in my organism) do base decisions on the abstract world-model that the organism that is “us” is capable of maintaining coherently. Or vice versa: the organism that encodes “us” bases its decisions on top of several potential first-person entities that exist over time. Yes, the subjective expectations are/were important, but to whom?
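To make the “Nash equilibrium is not Pareto-efficient” point concrete, here is a minimal sketch using the standard one-shot Prisoner’s Dilemma; the payoff numbers are illustrative assumptions of mine, not anything from the original discussion. Mutual defection is the only equilibrium, yet mutual cooperation makes both players better off.

```python
# Illustrative sketch: a social dilemma where the unique Nash equilibrium
# ("D", "D") is Pareto-dominated by mutual cooperation ("C", "C").
from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff); numbers are assumed.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(profile):
    """No player can gain by unilaterally deviating."""
    r, c = profile
    return (all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions) and
            all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions))

def pareto_dominated(profile):
    """Some other outcome is at least as good for both players and better for one."""
    p = payoffs[profile]
    return any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in payoffs.values())

for profile in product(actions, actions):
    print(profile, payoffs[profile],
          "Nash" if is_nash(profile) else "",
          "Pareto-dominated" if pareto_dominated(profile) else "")
# Only ("D", "D") is a Nash equilibrium, and it is Pareto-dominated by ("C", "C").
```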
This conflict between several potential, self-identifiable, volatile identities is what creates most social dilemmas, paradoxes, problems of collective action, and problems of protecting the commons (the tragedy of the commons). The point is not that we have a suboptimal-but-passable evolutionary solution of “apparently one fuzzy personal identity”; rather, we have a solution of several personal identities over time, and that solution is plagued by intertemporal, hyperbolically discounted myopia and by unsatisfactory models of decision theory.
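A minimal sketch of what I mean by hyperbolically discounted myopia, with parameters and reward values that are my own illustrative assumptions: a hyperbolic discounter reverses its preference between a smaller-sooner and a larger-later reward as the choice approaches, which an exponential discounter with a fixed rate never does. The “same” organism’s successive identities end up disagreeing with each other.

```python
# Illustrative sketch of intertemporal preference reversal under hyperbolic
# discounting; all parameter values are assumptions chosen for the example.

def hyperbolic(value, delay, k=1.0):
    """Hyperbolic discounting: V / (1 + k * delay)."""
    return value / (1 + k * delay)

def exponential(value, delay, rate=0.2):
    """Exponential discounting: V * (1 - rate) ** delay."""
    return value * (1 - rate) ** delay

smaller_sooner = (5.0, 1)    # (reward, delay in days once the choice arrives)
larger_later   = (10.0, 4)

for days_until_choice in (10, 0):   # seen far in advance vs. at the last moment
    for name, disc in (("hyperbolic", hyperbolic), ("exponential", exponential)):
        ss = disc(smaller_sooner[0], smaller_sooner[1] + days_until_choice)
        ll = disc(larger_later[0],   larger_later[1]   + days_until_choice)
        choice = "larger-later" if ll > ss else "smaller-sooner"
        print(f"{days_until_choice:2d} days out, {name:11s}: prefers {choice}")
# The hyperbolic discounter prefers larger-later from afar but flips to
# smaller-sooner up close; the exponential discounter never reverses.
```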
So I agree with you, but it seems that I’m not thinking about learning in terms of rationally utility-maximizing organisms with one personal identity over time. This position seems more related to the notion of Empty Individualism: http://goo.gl/0h3I0