In the past year or two, I’ve spent a lot of time explicitly trying to taboo “agenty” modelling of people from my thoughts. I didn’t have a word for it before, and I’m still not sure agenty is the right word, but it’s the right idea. One interesting consequence is that I very rarely get angry any more. It just doesn’t make sense to be angry when you think of everyone (including yourself) mechanically. Frustration still happens, but it lacks the sense of blame that comes with anger, and it’s much easier to control. In fact, I often find others’ anger confusing now.
At this point, my efforts to taboo agenty thinking have been successful enough that I misinterpreted the first two paragraphs of this post. I thought it was about the distinction between people I model as full game-theoretic agents (I account for them accounting for my actions) versus people who will execute a fixed script without any reflective reasoning. To me, that’s the difference between PCs and NPCs.
More recently, following this same trajectory, I’ve experimented with tabooing moral value assignments from my thoughts. Whenever I catch myself thinking of what one “should” do, I taboo “should” and replace it with something else. Originally, this amorality-via-taboo was just an experiment, but I was so pleased with it that I kept it around. It really helps you notice what you actually want, and things like “ugh” reactions become more obvious. I highly recommend it, at least as an experiment for a week or two.
Maybe you can write a post detailing your experiences? Sounds quite interesting.
At this point, my efforts to taboo agenty thinking have been successful enough that I misinterpreted the first two paragraphs of this post. I thought it was about the distinction between people I model as full game-theoretic agents (I account for them accounting for my actions) versus people who will execute a fixed script without any reflective reasoning. To me, that’s the difference between PCs and NPCs.

This is exactly the kind of other-people-thinking-differently-than-I-do interestingness that caused me to write this post!
The thing that was most interesting to me, on reflection, is that I do get angry less since I’ve started modelling most people “mechanically”. It’s just that my brain doesn’t automatically extend that to people whom I respect a lot for whatever reason. For them, I will get angry. Which isn’t helpful, but it is informative. I think it might just show that I’m more surprised when people whom I think of as PCs let me down, and that when I get angry it’s because I was relying on them and hadn’t made fallback plans; the anger is really just anxiety about my plans no longer working.
It seems that once you assign specific people to the NPC category, you think of them as belonging to a lesser, inferior kind. That’s why you get less angry at them, and that’s why those you respect don’t get assigned there.
I’ve gone through a change much like this over the past couple of years, although not with explicit effort. I would easily get annoyed when I came across inconsequential stupidity or spite somewhere on the internet (not directed at me), and then be disappointed in myself for letting something like that hang on my thoughts for a few hours.

Switching to a model in which I’m responsible for my own reaction to other people does wonders for self-control and saves some needless frustration.
I can only think of one person (that I know personally) whom I treat as possessing as much agency as I expect of myself, and that results in offering and expecting full honesty. If I view somebody as at all agenty, I generally wouldn’t try to spare their feelings or in any way emotionally manipulate them for my own benefit. I don’t find that to be a sustainable way to act with strangers: I can’t take the time to model why somebody flinging a poorly written insult over a meaningless topic that I happened to skim over is doing so, and I’d gain nothing (and very probably be wrong) in assuming they have a good reason.
As was mentioned with assigning non-agents negligible moral value, it does lead to higher standards, but those standards extend to oneself, potentially to one’s benefit. Once you draw a distinction about what the acts of a non-agent look like, you start more consistently trying to justify everything you say or do yourself. Reminds me a bit of “Would an idiot do that? And if they would, I do not do that thing.”
I can still rather easily choose to view people as agents and assign moral value in any context where I have to make a decision, so I don’t think having a significantly reduced moral value for others is to my detriment: it just removes the pressure to find a justification for their actions.
This will constitute my first comment on Less Wrong, so thank you for the interesting topic, and please inform me of any errors or inconveniences in my writing style.
Welcome!
Would an idiot do that? And if they would, I do not do that thing.

Slightly wrong, but as you’re still breathing I assume you know this.
I was quoting. It would be more accurate to ask “Would this be done exclusively by idiots?”, what with reversed stupidity. Alternatively, if the answer to the default version is yes, that just suggests further consideration is required. Either way, it’s pretty tautological (“Would only smart people do this? If not, am I doing it for a smart reason?”), but having an extra layer of flags for thinking doesn’t hurt.
Being in a situation somewhat similar to yours, I’ve been worrying that my lowered expectations about others’ level of agency (along with elevated expectations as to what constitutes a “good” level of agency) have an influence on those I interact with: if I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency will improve. This would work on me; for instance, I’d be more prone to take initiative if I saw trust in my peers’ eyes.
Well posted. I hope we will hear more from you in the future.
As was mentioned with assigning non-agents negligible moral value, it does lead to higher standards, but those standards extend to oneself, potentially to one’s benefit.

I can’t parse this. Is it a reference to something someone else in the thread said?
From the main post:

In that sense, I don’t know if modelling different people differently is, for me, morally a right or a wrong thing to do. However, I spoke to someone whose default is not to assign people moral value unless he models them as agents. I can see this being problematic, since it’s a high standard.
Thanks. Given that you want to improve your rationality to begin with, though, is believing that your moral worth depends on it really beneficial? Pain and gain motivation seems relevant.
Later, you say:

I can still rather easily choose to view people as agents and assign moral value in any context where I have to make a decision, so I don’t think having a significantly reduced moral value for others is to my detriment
Do you actually value us and temporarily convince yourself otherwise, or is it the other way around?
Given that you want to improve your rationality to begin with, though, is believing that your moral worth depends on it really beneficial?

I’m not sure if you’re asking my moral worth of myself or others, so I’ll answer both.
If you’re referring to my moral worth of myself, I’m assuming that the problem would be that, as I learn about biases, I would consider myself less of an agent, so I wouldn’t be motivated to discover my mistakes. You’ll have to take my word for it that I pat myself on the back whenever I discover an error in thinking and mark it down, but other than that, I don’t have an issue with my self-image being (significantly, long term) tied to how I estimate my efficacy at rationality, one way or another. I just enjoy the process.
If you’re referring to how I value others, then rationality seems inextricably tied to how I think of others. As I learn about how people get to certain views or actions, I consider them either more or less justified in doing so, and more or less “valuable” than others, if I may speak so bluntly of my fellow man. If I don’t think there’s a good reason to vandalize someone’s property, and I do think there’s a good reason to offer food to a homeless man, then, given that isolated knowledge and a choice from Omega of who I wish to save (assuming that I can’t save both), I’ll save the person who commits more justified actions. Learning about difficult-to-lose biases that can lead one to do “bad things”, or about misguided notions that can cause people to do right for the wrong reason, inevitably changes how I view others (however incrementally), even if I don’t offer them agency and see them as “merely” complex machines.
Do you actually value us and temporarily convince yourself otherwise, or is it the other way around?

Considering that I know that saying I value others is the ideal, and that if I don’t believe so, I’d prefer to, it would be difficult to honestly say that I don’t value others. I’m not an empathetic person and don’t tend to find myself worrying about the future of humanity, but I try to think as if I do for the purpose of moral questions.
Seeing as I value valuing you, and am, from the outside, largely indistinguishable from somebody who values you, I think I can safely say that I do value others.
But, I didn’t quite have the confidence to answer that flatly.
My problem with this is that I want people to be agenty. For me, the distinction between agent and complex system is about self-awareness and mindfulness. If you think about yourself, what you are and aren’t capable of, and how you interact with the world, you will have more agency and be a better person. I’m disgusted by people who just live like thoughtless animals.
I guess the obvious solution is to get over it. But I’m not sure I want to. It holds people to a higher standard.
I think you’re confusing being proactive with being a good person.

If a homicidal maniac acquires more agency, that doesn’t make him a better person; it just makes him more dangerous.
I think what the OP meant was the following: given two people with the same positive aims (e.g. be a good parent, do your job well), the agency-driven one will achieve more with the same hard work as the other. Therefore, you would wish for the people around you to be more agenty as a default.
That’s generally described by words like “effective” and “high-productivity”.
Why are you assuming that people around me have positive aims? Moreover, what’s important is not just the aims, but also the costs (and who pays them).