This might be unfair to deontologists, but I keep getting the feeling that deontology is a kind of “beginner’s ethics”. In other words, deontology is the kind of ethical system you get when you build ethics entirely around injunctions, which is entirely reasonable if you don’t have the computing power to calculate the probable consequences of your actions with a very high degree of confidence. So you resort to what are basically cached rules that seem to work most of the time, and elevate those to axioms instead of treating them as heuristics.
And before I’m accused of missing the difference between consequentialism and deontology: no, I don’t claim that deontologists actually consciously think that this is why they’re deontologists. It does, however, seem like a plausible explanation of the (developmental-psychological or evolutionary) reason why people end up adopting deontology.
Indeed, I get the impression from the article that a deontologist is someone who makes moral choices based on whether they will feel bad about violating a moral injunction, or good for following it… and then either ignorantly or indignantly denies this is the case, treating the feeling as evidence of a moral judgment’s truth, rather than as simply a cached response to prior experience.
Frankly, a big part of the work I do to help people is teaching them to shut off the compelling feelings attached to the explicit and implicit injunctions they picked up in childhood, so I’m definitely inclined to view deontology (at least as described by the article) as a hopelessly naive and tragically confused point of view, well below the sanity waterline… like any other belief in non-physical entities, rooted in mystery worship.
I also seem to recall that previous psychology research showed that that sort of thinking was something people naturally tended to grow out of as they got older (stages of moral reasoning), but then I also seem to recall that there was some more recent dispute about that, and accusations of gender bias in the research.
Nonetheless, it’s evolutionarily plausible that we’d have a simple, injunction-based emotional trigger system used in early life, until our more sophisticated reasoning abilities come online. And my experience working with my own and other people’s brains seems to support this: when broad childhood injunctions are switched off, people’s behavior and judgments in the relevant area immediately become more flexible and sophisticated.
Unfortunately, the deontological view sounds like it’s abusing higher reasoning simply to retroactively justify whatever (cached-feeling) injunctions are already in place, by finding more-sophisticated ways to spell the injunctions so they don’t sound like they have anything to do with one’s own past shames, guilts, fears, and other experiences. (What Robert Fritz refers to as an “ideal-belief-reality conflict”, or what Shakespeare called, “The lady doth protest too much, methinks.” I.e., we create high-sounding ideals and absolute moral injunctions specifically to conceal our personally-experienced failings or conflicts around those issues.)
Of course, I could just be missing the point of deontology entirely. But I can’t seem to even guess at what that point would be, because everything I’m reading here seems to closely resemble something that I had to grow out of… making it really hard for me to take it seriously.
Do you think it is likely that the emotional core of your claim was captured by the statement that “everything I’m reading here seems to closely resemble something that I had to grow out of… making it really hard for me to take it seriously”?
And then, assuming this question finds some measure of ground… how likely do you think it is that you would grow in a rewarding way by applying “your emotional reprogramming techniques” to this emotional reaction to an entry-level exposition on deontological modes of reasoning, so that you could consider the positive and negative applications in a more dispassionate manner?
I haven’t read into your writings super extensively, but from what I read you have quite a lot of practice doing something like “soul dowsing” to find emotional reactions. Then you trace them back to especially vivid “formative memories” which can then be rationally reprocessed using other techniques—the general goal being to allow clearer thinking about retrospectively critical experiences in a more careful manner and in light of subsequent life experiences. (I’m sure there’s a huge amount more, but this is my gloss that’s relevant to your post.)
I’ve never put your specific suggestions along these lines into practice (for various reasons having mostly to do with opportunity costs), but the potential long-term upside seems high, and your post just seemed like a gorgeous opportunity to explore some of the longer-term consequences of your suggested practices.
That’s an interesting question. I don’t think an ideal-belief-reality conflict is involved, though, as an IBRC motivates someone to try to convince the “wrong” others of their error, and I didn’t feel any particular motivation to convince deontologists that they’re wrong! I included the disclaimer because I’m honestly frustrated by my inability to grok the concept of deontological morality except in terms of a feeling-driven injunctions model. (Had I been under the influence of an IBRC, I’d have been motivated to express greater certainty, as has happened occasionally in the past.)
So, if there’s any emotional reaction taking place, I’d have to say it was frustration with an inability to understand something… and the intensity level was pretty low.
In contrast, I had discussions here last year where I definitely felt an inclination to convince people of things, and at a much higher emotional intensity—so I fixed them. This doesn’t feel to me like something in the same category.
It might be interesting to check out the frustration-at-inability-to-understand thing at some point, but at the moment it’s a bit like a hard-to-reproduce bug. I don’t have a specific trigger thought I can use to call up the feeling of frustration, so I would have no way at the moment to know if I actually changed anything.
I’ve never heard the phrase “soul dowsing” before, and Google actually finds your comment as the third-highest-ranking result for it. Is it of your invention?
In any event, I don’t believe I do anything that could be called dowsing. It would be more appropriate to refer to it as a form of behavior modification via memory alteration.
We know that memories are fluid and their interpretations can be altered by suggestively-worded questions—mindhacking can be thought of as a way of using this brain bug, to fix other brain bugs.
Then you trace them back to especially vivid “formative memories” which can then be rationally reprocessed using other techniques—the general goal being to allow clearer thinking
So far, so good, but this bit:
about retrospectively critical experiences in a more careful manner and in light of subsequent life experiences.
is beside the point. The purpose is that once you’ve altered the memory structure involved, your behavior—both in the form of thought patterns and actions—automatically changes to fall in line with the shift in the emotional relevance of what’s stored in your memory. The memory goes from being an unconscious emotional trigger, to an easily forgotten irrelevancy.
Indeed, the only reason I even remember the content of what I changed the other day regarding my mother yelling at me is that I make a deliberate practice of trying to retain such memories. If I don’t write something down about what I change, the specific memories involved fade rapidly. I’ve had clients who within minutes or hours forgot they’d even had a problem in the first place.
Even trying to retain them in memory, the only record I now have of a change I made about two weeks ago is the one I wrote down at the time. I remember remembering it, sure, but I don’t remember it directly—it’s now more like a story I heard than something that actually happened to me.
IOW, amnesia for the original issue or where it came from is a normal and expected side-effect of successfully changing an emotionally-charged memory into a merely factual anecdote about something that happened to you, once upon a time.
The intended outcome is to provide a means of effective self-modification, one that does not require constant vigilance to monitor an ever-increasing number of biases or enforce an ever-increasing number of required behaviors. There are an enormous number of hardware biases that I cannot modify, but on a day-to-day basis, we are far more affected by our acquired, “software” biases anyway.
To give a concrete example, what I do can’t modify the general tendency of humans to identify with ingroups and attack outgroups—but it can remove entries from the “outgroup description table” in an individual’s brain, one at a time!
This isn’t much, but it’s still something. I call it mindhacking, because that’s really what it is: making use of the brain’s bugs (e.g. malleable memory) to patch over some of its other bugs.
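To make that “table” metaphor a bit more concrete, here is a minimal toy sketch (my own illustration; the class and names are invented, and nothing here is meant as a literal model of the brain). The “hardware” tendency is a fixed routine that can’t be edited, while the “software” table it consults is data whose entries can be removed one at a time.

```python
# Toy illustration of the "fixed tendency, editable table" metaphor.
# Invented names; not a model of real cognition.

class SocialBrain:
    def __init__(self):
        # "Software": acquired entries describing who counts as an outgroup.
        self.outgroup_descriptions = {
            "people who argue like my mother",
            "kids who teased me at school",
        }

    def react_to(self, person_description: str) -> str:
        # "Hardware": the general identify-with-ingroup / attack-outgroup
        # tendency. In this toy model it can't be modified, only fed
        # different data.
        if person_description in self.outgroup_descriptions:
            return "defensive / hostile"
        return "neutral / curious"

    def mindhack(self, entry: str) -> None:
        # The claimed intervention: remove one entry at a time from the
        # acquired table, leaving the underlying tendency untouched.
        self.outgroup_descriptions.discard(entry)


brain = SocialBrain()
print(brain.react_to("people who argue like my mother"))  # defensive / hostile
brain.mindhack("people who argue like my mother")
print(brain.react_to("people who argue like my mother"))  # neutral / curious
```

The only point of the sketch is the division of labor: react_to never changes; only the data it consults does.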
Hm. I think I just found a test stimulus that matches the feeling of frustration I had re: the deontology discussion. So I’ll work through it “live” right now.
[edit: the rest was too long to fit, so I’ve split it off into a separate, child comment as a reply to this one]
[split from parent comment due to length]
Hm. I think I just found a test stimulus that matches the feeling of frustration I had re: the deontology discussion. So I’ll work through it “live” right now.
I am frustrated at being unable to find common ground with what seem like abstract thoughts taken to the point of magical and circular thinking… and it seems the emotional memory is of arguing theism and other subjects with my mother at a relatively young age… she would tie me in knots, not with clever rhetoric, but with sheer insanity—logical rudeness writ large.
But I couldn’t just come out and say that to her… not just because of the power differential, but also because I had no handy list of biases and fallacies to point to, and she had no attention span for any logically-built-up arguments.
Huh. No wonder I feel frustrated trying to understand deontology… I get the same, “I can’t even understand this craziness well enough to be able to say it’s wrong” feeling.
Okay, so what abilities did I lose to learned helplessness in this context? I learned that there was nothing I could say or do about logical craziness… which would certainly explain why I started and deleted my deontology comment multiple times before finally posting it… and didn’t really try to achieve any common ground during it… I just took a victim posture and said deontology was nonsense. I also waited until I could “safely” say it in the context of someone else’s comment, rather than directly addressing the post’s author—either to seek the truth or argue a clear position.
So, what do I want to replace that feeling of helplessness with? Would I rather be curious, so that I find out more about someone’s apparently circular reasoning before dismissing it or fighting with it? How about compassionate, so I try to help the person find the flaw in their reasoning, if they’re actually interested in the first place? What about amusement, so that I’m merely entertained and move on?
Just questioning these possibilities and bringing them into mind is already modifying the emotional response, since I’ve now had an (imagined) sensory experience of what it would be like to have those different emotions and behaviors in the circumstance. I can also see that I don’t need to understand or persuade in such a circumstance, which feels like a relief. I can see that I didn’t need to argue with my mother and frustrate myself; I could have just let her be who she was, and gone about my business.
So, this is a good time for a test. How do I feel about arguing theism with my mother? No big deal. How about deontology? Not a big deal either, but then it wasn’t earlier, either, which is why I couldn’t use it as a test directly. So the real test is the thought of “having to explain practical things to people hopelessly stuck in impractical thinking”, which was reliably causing me to wrinkle my brow, hunch slightly, and sigh in frustration.
Now, instead of that, I get a mixed feeling of compassion/patience, felt lightly in the chest area… but there’s still a hint of the old feeling, like a component is still there.
Ah… I see, I’ve dealt with only one need axis: connection/bonding, but not status/significance. A portion of the frustration was not being able to connect, and that portion I’ve resolved, but the other part was frustration with a status differential: the person making the argument is succeeding in lowering my status if I can’t address their (nonsensical) argument.
Ugh. I hate status entanglements. I can’t fix the brain’s need for status, only remove specific entries from the “status threats” table. So let’s see if we can take this one out.
I’m noticing that other memories of kids teasing or insulting me in school are coming up in connection with this—the same fundamental circumstance of being in a conversation with no good answers, silence included. No matter what I do, I will lose face.
Ouch. This is a tough one. The rookie mistake here would be to think I have to be able to come up with better comebacks or something… that is, that I have to solve the problem in the outside world, in order to change my feelings. But if I instead change my feelings first on the inside, then my behavior will change to match.
So, what do I want to feel? Amused? Confident? As with other forms of learned helplessness, I am best off if I can feel the outcome emotions in advance of the outside world conforming to my preference. (That is, if I already feel the self-esteem I want from the interaction before the interaction takes place, it is more likely that I will act in a way that results in a favorable interaction.)
So how would I feel if those kids were praising, instead of teasing or insulting? I would feel honored by the attention...
Boom! The memory just changed, popping into a new interpretation: the kids teasing and insulting me were giving me positive attention. This new interpretation drives a different feeling about it… along with a change to my feelings about certain discussions that have taken place on LW. ;-) Neither seems like a threat any more.
Similarly, thinking about being criticized in other contexts doesn’t seem like a threat… I strangely feel genuinely honored that somebody took the time to tell me how they feel, even if I don’t agree with it. Wow. Weird. ;-) (But then, as I’m constantly telling people, if your change doesn’t surprise you in some way, you probably didn’t really change anything.)
The change also sent me reeling for a moment, as suddenly the sense of loneliness and “outsider”-ness I had as a child begins to feel downright stupid and unnecessary in retrospect.
Wow. Deep stuff. Did not expect anything of this depth from your suggestion, JenniferRM. I think I will take the rest of my processing offline, as it’s been increasingly difficult to type about this while doing it… trying to explain the extra context/purpose stuff has been kind of distracting anyway, while I was in the middle of doing things.
Whew. Anyway, I hope that was helpfully illustrative, nonetheless.
This comment has done more than anything else you’ve written to convince me that you aren’t generally talking nonsense.
Thank you, that’s very kind of you to say.
Overnight, I continued working on that thread of thoughts, and dug up several related issues. One of them was that I’ve also not been nearly as generous with giving positive attention and appreciation as I would’ve liked others to be. So I made a change to fix that this morning, and I actually felt genuine warmth and gratitude in response to your comment… something that I generally haven’t felt, even towards very positive comments here in the past.
So really, thank you, as it was indeed both kind and generous of you to say it.
Thanks for the response.
That was way more than I was hoping to get back, and it went in really interesting directions—the corrections about the way the “reprocessing” works, and about its limits, were helpful. The detail about the way vivid memories can no longer be accessed through the same “index” and become more like stories was totally unexpected and fascinating.
Also, that was very impressive in terms of just… raw emotional openness, I guess. I don’t know about other readers, but it stirred up my emotions just reading about your issues as you worked through them. I have a hard time imagining the courage it would take for me to make similar emotional disclosures in a place like this if they were my own. I’m a little frightened by how much trust you gave me, I think? But I’m very grateful too.
(And yes, “soul dowsing” is a term I made up for the post, for the sake of trying to summarize things I’ve read by you in the past in my own words, to see if I was hearing what you were trying to say.)
Not as much courage as you might think. Bear in mind that by the time anybody reads anything I’ve written about something like that, it’s no longer the least bit emotional for me—it has become an interesting anecdote about something “once upon a time”.
If it were still emotional for me after I made the changes, I would have more trouble sharing it, here or even with my subscribers. In fact, the reason I cut off the post where I did was that there was some stuff I wasn’t yet “done” with and wanted to work on some more.
Likewise, it’s a lot easier to admit to your failures and shortcomings if you are acutely aware that 1) “you” aren’t really responsible, and 2) you can change. It’s easier to face the truth of what you did wrong, if you know that your reaction will be different in the future. It takes out the “feeling of being a bad person” part of the equation.
Yes! Both you and Kaj Sotala seem right on the money here. Deontology falls flat. A friend once observed to me that consequentialism is a more challenging stand to take because one needs to know more about any particular claim to defend an opinion about it.
I know it’s been discussed here on Less Wrong, but Jonathan Haidt’s research is really great, and relevant to this discussion. Professor Haidt’s work has validated David Hume’s assertions that we humans do not reason to our moral conclusions. Instead, we intuit about the morality of an action, and then provide shoddy reasoning as justification one way or the other.
Deciding whether a rule “works” based on whether it usually brings about good consequences, and following the rules that do and calling that “right”, is called rule consequentialism, not deontology.
That’s if you do it consciously, which I wasn’t suggesting. My suggestion was that this would be a mainly unconscious process, similar to the process of picking up any other deeply rooted preference in childhood or early youth.
How about this formulation:
Suppose that humans’ aggregate utility function includes both path-independent (“ends”) terms, and path-dependent (“means”) terms.
A (pseudo) deontologist in this scenario is someone who is concerned that all this talk about “achieving the best possible state of affairs” means that the path-dependent terms may be getting neglected.
If you think about it, any fixed “state of affairs” is undesirable, simply because it is FIXED. I don’t know for sure, but I think almost everything that you value is actually a path unfolding in time—possibilities might include: falling in love, learning something new, freedom/self-determination, growth and change.
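As a minimal sketch of that split (my own notation, not anything established elsewhere in this thread): write a history as a sequence of states $s_0, s_1, \dots, s_T$ and suppose

$$
U(s_0, s_1, \dots, s_T) \;=\; \underbrace{E(s_T)}_{\text{path-independent ("ends") term}} \;+\; \underbrace{\sum_{t=1}^{T} M(s_{t-1}, s_t)}_{\text{path-dependent ("means") terms}} .
$$

Optimizing only for “the best possible state of affairs” amounts to maximizing $E$ alone; the concern above is that the $M$ terms get silently dropped, and things like falling in love or learning something new seem to live almost entirely in the $M$ terms.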
“Deontologists are just elevating intermediate heuristics to terminal values” is true, but also misleading and unfair unless you prepend “Consequentialists and ” first. After all, it seems quite likely that joy, curiosity, love, and all the other things we value are also merely heuristics that evolution found to be useful for its terminal goal of “Make more mans. More mans!” But if our terminal values happen to match some other optimizing process’ instrumental values, so what? That’s an interesting observation, not a devastating criticism.