It seems to me you’re looking for temporal consistency. My difficulty understanding you stems from the fact that I don’t expect my future self to wish I had been any more altruistic than I am right now. I don’t think being conflicted makes much sense without considering temporal differences in preference, and I think Yvain’s descriptions fit this picture.
I guess you could frame it as a temporal inconsistency as well, since it does often lead to regret afterwards, but it’s more of an “I’m doing this thing even though I know it’s wrong” situation: not a conflict between one’s current and future self, but rather a conflict between my own good and the good of others.
Interesting. I wonder if we have some fundamental difference in perceived identity at play here. It makes no sense to me to have a narrative where I do things I don’t actually want to do.
Say I attach my identity to my whole body. There will be no conflict here, since whatever I do is the result of a resolved conflict hidden in the body, and therefore I must want to do whatever I’m doing.
Say I attach my identity to my brain. My brain can want things that my body cannot do, but whatever the brain tells the body to do will be the result of a resolved conflict hidden inside the brain, and I will tell my body to do whatever I want my body to do. Whatever conflict of preferences arises will be a confusion of identity between the brain and the body.
Say I attach my identity to a part of my brain: this consciousness thing that seems to be in charge of some executive functions, probably residing in the frontal cortex. Whatever this part of the brain tells the rest of the brain will be the result of a resolved conflict hidden inside this part, so again, whatever I tell the rest of my brain to do will necessarily be what I want to tell it to do, though I can’t expect the rest of my brain to do something it cannot do. Whatever conflict arises will be a confusion of identity between this part and the rest of the brain.
I can think of several reasons why I’d want to assume a conflicted identity and almost all of them involve signalling and social convenience.
> Say I attach my identity to my brain. My brain can want things that my body cannot do, but whatever the brain tells the body to do will be the result of a resolved conflict hidden inside the brain, and I will tell my body to do whatever I want my body to do.
I think the difference here is that, from the inside, it often doesn’t feel like my actions were the result of a resolved conflict. Well, in a sense they were, since otherwise I’d have been paralyzed with inaction. But when I’m considering some decision that I’m conflicted over, it very literally feels like there’s an actual struggle between different parts of my brain, and when I do reach a decision, the struggle usually isn’t resolved in the sense of one part making a decisive argument and the other part acknowledging that they were wrong. (Though that does happen sometimes.)
Rather it feels like one part managed to get the upper hand and could temporarily force the other part into accepting the decision that was made, but the conflict isn’t really resolved in any sense—if the circumstances were to change and I’d have to make the same decision again, the loser of this “round” might still end up winning the next one. Or the winner might get me started on the action but the loser might then make a comeback and block the action after all.
That’s also why it doesn’t seem right to talk about this as a conflict between current and future selves. That would seem to imply that I wanted thing X at time T, and some other thing Y at T+1. If you equated “wanting” with “the desire of the brain-faction that happens to be the strongest at the time when one’s brain is sampled”, then you could kind of frame it like a temporal conflict… but it feels like that description is losing information, since actually what happens is that I want both X and Y at both times: it’s just the relative strength of those wants that varies.
> when I’m considering some decision that I’m conflicted over, it very literally feels like there’s an actual struggle between different parts of my brain
Ok. To me it most often feels like I’m observing some parts of my brain struggle, and that I’m there to tip the scales, so to speak. This doesn’t necessarily lead to a desirable outcome if my influence isn’t strong enough. I can’t say I feel conflicted about which direction to tip the scales in, but I assume this is just because I’m identifying with a part of my brain that can’t monitor its own inner conflicts. I might have identified with several conflicting parts of my brain at once in the past, but I don’t remember what that felt like, nor could I tell you how this transformation might have happened.
> Rather it feels like one part managed to get the upper hand and could temporarily force the other part into accepting the decision that was made, but the conflict isn’t really resolved in any sense
This sounds like tipping the scales. Are you identifying with several conflicting processes, or are you just expressing yourself in a socially convenient manner? If you’re X, trying to make process A win over process B in your brain, and process B wins in a way that leads to an undesirable action, does it make any sense to say that you did something you didn’t want to do?
Your description of tipping the scale sounds about right, but I think that it only covers two of the three kinds of scenarios that I experience:
1. I can easily or semi-easily tip the scale in some direction, possibly with an expenditure of willpower. I would mostly not classify this as a struggle: I just make a decision.
2. I would like to tip the scale in some direction, but fail (and instead end up procrastinating or whatever), or succeed only by a thin margin. I would classify this as a struggle.
3. I could tip the scale if I just decided which direction I wanted to tip it in, but I’m genuinely unsure of which direction I should tip it in. If scenario #1 feels like an expenditure of willpower to override a short-term impulse in favor of a long-term goal, and #2 like a failed or barely successful attempt to do so, then #3 feels like trying to decide what the long-term goal should be. Put differently, #3 feels like a situation where the set of processes that do the tipping don’t necessarily have any preferences of their own, but rather act as the “carriers” of a set of preferences that multiple competing lower-level systems are trying to install in them. (Actually, that description doesn’t feel quite right, but it’s the best I can manage right now.)
I now realize that I hadn’t previously clearly made the distinction between those different scenarios, and may have been conflating them to some extent. I’ll have to rethink what I’ve said here in light of that.
I think that I identify with each brain-faction that has at some point managed to “install” “its” preferences in the scale-tipping system. So if there is some short-term impulse that all the factions think should be overridden given the chance, then I don’t identify with that short-term impulse; but since e.g. both the negative utilitarian and deontological factions manage to take control at times, I identify with both to some extent.