(Note: this might be difficult to follow. Discussing different ways that different people relate to themselves across time is tricky. Feel free to ask for clarifications.)
1.
I’m reading Galen Strawson’s paper Against Narrativity, a piece of analytic philosophy that examines Narrativity in a few forms:
Psychological Narrativity—the idea that “people see or live or experience their lives as a narrative or story of some sort, or at least as a collection of stories.”
Ethical Narrativity—the normative thesis that “experiencing or conceiving one’s life as a narrative is a good thing; a richly [psychologically] Narrative outlook is essential to a well-lived life, to true or full personhood.”
It also names two kinds of self-experience that it takes to be diametrically opposed:
Diachronic—considers the self as something that was there in the further past, and will be there in the further future
Episodic—does not consider the self as something that was there in the further past and something that will be there in the further future
Wow, these seem pretty confusing. It sounds a lot like the two camps just disagree on the definition of the word “self”. I think there is more to it than that; some weak evidence is that I discussed this concept at length with a friend (Diachronic) who had a very different take on Narrativity than I do (Episodic).
I’ll try to sketch what I think “self” means. For almost all nontrivial cognition, intelligent agents seem to have separate concepts of (or at least a concept of a separation between) the “agent” and the “environment”. In Vervaeke’s work this is called the Agent-Arena Relationship.
You might say “my body is my self and the rest is the environment,” but is that really how you think of the distinction? Do you not see the clothes you’re currently wearing as part of your “agent”? Tools come to mind as similar extensions of our self. If I’m raking leaves for a long time, I start to sense the agent as the whole “person + rake” system, rather than as a person whose environment includes a rake that happens to be held.
(In general I think there’s something interesting here in proto-human history about how tool use interacts with our concept of self, and about our ability to quickly adapt to thinking of a tool as part of our “self” being a critical proto-cognitive skill.)
Getting back to Diachronic/Episodic: I think one of the things that’s going on in this divide is that this felt sense of “self” extends forwards and backwards in time differently.
2.
I often feel very uncertain in my understanding or prediction of the moral and ethical natures of my decisions and actions. This probably needs a whole lot more writing on its own, but I’ll sum it up as two ideas having a disproportionate effect on me:
The veil of ignorance, a thought experiment that leads people to favor policies that support populations more broadly (skipping a lot of detail and my thoughts on it for now).
The categorical imperative, which I’ll reduce here to the principle of universalizability—a policy for action in a given context is moral if it is one you would endorse universalizing (this is huge and complex, and there are a lot of finicky details in how context is defined, etc.; skipping that for now).
Both of these prompt me to take the perspective of someone else, potentially everyone else, in reasoning through my decisions. I think the way I relate to them is very Non-Narrative/Episodic in nature.
(Separately, the more I think about the development of early cognition, the more the ability to take the perspective of someone else seems like a magical superpower.)
I think they are not fundamentally or necessarily Non-Narrative/Episodic—I can imagine both of them being considered by someone who is strongly Narrative, and even that person imagining a world made up of a mixture of Diachronic and Episodic people.
3.
Priors are hard. Relatedly, choosing between similar explanations of the same evidence is hard.
I really like the concept of the Solomonoff prior, even if the math of it doesn’t apply directly here. Instead I’ll take away just this piece of it:
“Prefer explanations/policies that are simpler-to-execute programs”
A program may be simpler if it has fewer inputs, or fewer outputs. It might be simpler if it requires less memory or less processing.
This works well for choosing policies that are easier to implement or execute, especially as a person with bounded memory/processing/etc.
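As a toy sketch of what that weighting rule looks like in practice (the candidate “explanations” and their complexity scores below are invented for illustration, and the 2^-complexity weighting is only loosely inspired by the Solomonoff prior):

```python
# Toy simplicity-weighted prior: weight each candidate explanation/policy by
# 2^-(complexity), then normalize. The candidates and their complexity proxies
# (e.g., number of inputs, steps, or bits of state) are made up for this sketch.

candidates = {
    "simple_rule": 4,
    "rule_with_recent_history": 9,
    "rule_with_full_backstory": 17,
}

weights = {name: 2.0 ** -c for name, c in candidates.items()}
total = sum(weights.values())
prior = {name: w / total for name, w in weights.items()}

for name, p in sorted(prior.items(), key=lambda kv: -kv[1]):
    print(f"{name}: prior ~ {p:.4f}")
```

The point is just that the simplest candidate ends up with most of the weight, without having to rule the others out entirely.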
4.
A simplifying assumption that works very well for dynamic systems is the Markov property.
This property states that all of the information needed to predict the system’s future is present in the current state of the system.
One way to look at this is to imagine a bunch of atoms at a moment in time—all of the information in the system is contained in the current positions and velocities of the atoms. (We can ignore or forget the trajectories that individual atoms took to get to their current locations.)
In practice we usually apply this to systems where it isn’t literally true but is close enough for practical purposes, and combine it with stuffing some extra information into what counts as the “present” state.
(For example, we might define the “present” state of a natural system to include “the past two days of observations”—this still has the Markov property, because this information is finite and fixed in size as the system proceeds dynamically into the future.)
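Here is a toy sketch of that trick (the system, its dynamics, and the two-day window are all invented for illustration): the update rule reads only the current state, even though that state was deliberately padded with the last two days of observations.

```python
import random
from collections import deque

# Toy "natural system" whose present state is defined to include the past two
# days of observations. The step function reads only the current state (the
# Markov property), even though that state carries recent history.

WINDOW = 2  # days of past observations folded into the "present" state

def step(state: deque) -> deque:
    """Produce the next state from the current state alone."""
    # Invented dynamics: tomorrow drifts from the average of the window.
    tomorrow = sum(state) / len(state) + random.gauss(0, 1)
    new_state = deque(state, maxlen=WINDOW)
    new_state.append(tomorrow)  # the oldest observation falls out automatically
    return new_state

state = deque([20.0, 21.5], maxlen=WINDOW)  # e.g., two days of temperature readings
for day in range(5):
    state = step(state)
    print(f"day {day}: state = {[round(x, 2) for x in state]}")
```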
5.
I think that these pieces, when assembled, steer me towards becoming Episodic.
When choosing between policies that have the same actions, I prefer the policies that are simpler. (This feels related to the process of distilling principles.)
When considering good policies, I think I weight most strongly those that I would endorse many people enacting. This is aided by these policies being simpler to imagine.
Policies that are not path-dependent (ones that, for example, take into account fewer things from a person’s past) are simpler, and therefore easier to imagine.
Path-independent policies are more Episodic, in that they don’t rely heavily on a person’s place in their current Narratives.
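To make “path-independent policies are simpler” concrete, here is a toy comparison (the situation fields, histories, and decision rules are invented for illustration): one policy reads only the current situation, while the other also consumes the agent’s history, which means more inputs to specify and more for others to simulate.

```python
from typing import Dict, List

# Two toy policies that usually produce the same action.

def path_independent_policy(situation: Dict[str, bool]) -> str:
    """Reads only the present situation; short enough to state as a principle."""
    return "help" if situation["someone_needs_help"] else "continue"

def path_dependent_policy(situation: Dict[str, bool], history: List[Dict]) -> str:
    """Also consults the agent's past; more inputs, harder for others to simulate."""
    helped_recently = any(event.get("action") == "help" for event in history[-10:])
    if situation["someone_needs_help"] and not helped_recently:
        return "help"
    return "continue"

situation = {"someone_needs_help": True}
print(path_independent_policy(situation))            # -> help
print(path_dependent_policy(situation, history=[]))  # -> help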
6.
I don’t know what to do with all of this.
I think one thing that’s going on here is self-fulfilling: I don’t strongly experience psychological Narratives, so it’s more costly for me to simulate people who do, which via the above mechanism leads me to choose Episodic policies.
I don’t strongly want to recruit everyone to this method of reasoning. It is an admitted irony of this system that I don’t wish for everyone to use the same mechanism of reasoning as me; maybe just let that signal how uncertain I feel about my ability to come to philosophical conclusions on my own.
I expect to write more about this stuff in the near future, including experiments I’ve been doing in my writing to try to move my experience in the Diachronic direction. I’d be happy to hear comments about what folks are interested in.
Fin.
When choosing between policies that have the same actions, I prefer the policies that are simpler.
Could you elaborate on this? I feel like there’s a tension between “which policy is computationally simpler for me to execute in the moment?” and “which policy is more easily predicted by the agents around me?”, and it’s not obvious which one you should be optimizing for. [Like, predictions about other diachronic people seem more durable / easier to make, and so are easier to calculate and plan around.] Or maybe the ‘simple’ approaches for one metric are generally simple on the other metric.
My feeling is that there isn’t a strong difference between them for me. In general, simpler policies are both easier to execute in the moment and easier for others to simulate.
The clearest version of this is, when faced with a decision, to pick an existing principle to apply before acting, or else to define a new principle and act on that.
Principles are examples of short policies: they are largely path-independent, which makes them non-Narrative, easy to execute, and straightforward to communicate and for others to simulate.