I’ve gone through a change much like this over the past couple of years, although not through explicit effort. I would tend to get easily annoyed when I came across inconsequential stupidity or spite somewhere on the internet (not directed at me), and then proceed to be disappointed in myself for letting something like that hang over my thoughts for a few hours.
Switching to a model in which I’m responsible for my own reaction to other people does wonders for self-control and saves some needless frustration.
I can only think of one person (that I know personally) whom I treat as possessing as much agency as I expect of myself, and that results in offering and expecting full honesty. If I view somebody as at all agenty, I generally wouldn’t try to spare their feelings or in any way emotionally manipulate them for my own benefit. I don’t find that a sustainable way to act with strangers: I can’t take the time to model why somebody would fling a poorly written insult over a meaningless topic I happened to skim, and I’d gain nothing (and would very probably be wrong) by assuming they have a good reason.
As was mentioned with assigning non-agents negligible moral value, it does lead to higher standards, but those standards extend to oneself, potentially to one’s benefit. Once you form a picture of what the acts of a non-agent look like, you start more consistently trying to justify everything you say or do yourself. It reminds me a bit of “Would an idiot do that? And if they would, I do not do that thing.”
I can still rather easily choose to view people as agents and assign them moral value in any context where I have to make a decision, so I don’t think assigning a significantly reduced moral value to others is to my detriment: it just removes the pressure to find a justification for their actions.
This will constitute my first comment on Less Wrong, so thank you for the interesting topic, and please inform me of any errors or awkwardness in my writing style.
Welcome!
Slightly wrong, but as you’re still breathing I assume you know this.
I was quoting. It would be more accurate to ask “Would this be done exclusively by idiots?”, what with reversed stupidity not being intelligence. Alternatively, if the answer to the default version is yes, that just suggests the action requires further consideration. Either way, it’s pretty tautological (“Would only smart people do this? If not, am I doing it for a smart reason?”), but having an extra layer of flags for thinking doesn’t hurt.
Being in a situation somewhat similar to yours, I’ve been worrying that my lowered expectations about others’ level of agency (with elevated expectations as to what constitutes a “good” level of agency) have an influence on those I interact with: if I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency will improve. This would work on me: for instance, I’d generally be more prone to take initiative if I saw trust in my peers’ eyes.
Well posted. I hope we will hear more from you in the future.
As was mentioned with assigning non-agents negligible moral value, it does lead to higher standards, but those standards extend to oneself, potentially to one’s benefit.
I can’t parse this. Is it a reference to something someone else in the thread said?
In that sense, I don’t know if modelling different people differently is, for me, a morally right or wrong thing to do. However, I spoke to someone whose default is not to assign people moral value unless he models them as agents. I can see this being problematic, since it’s a high standard.
From the main post.
Thanks. Given that you want to improve your rationality to begin with, though, is believing that your moral worth depends on it really beneficial? Pain and gain motivation seems relevant.
Later, you say:
I can still rather easily choose to view people as agents and assign them moral value in any context where I have to make a decision, so I don’t think assigning a significantly reduced moral value to others is to my detriment
Do you actually value us and temporarily convince yourself otherwise, or is it the other way around?
Given that you want to improve your rationality to begin with, though, is believing that your moral worth depends on it really beneficial?
I’m not sure whether you’re asking about the moral worth I assign to myself or to others, so I’ll answer both.
If you’re referring to the moral worth I assign to myself, I’m assuming the problem would be that, as I learn about biases, I would consider myself less of an agent, and so wouldn’t be motivated to discover my mistakes. You’ll have to take my word for it that I pat myself on the back whenever I discover an error in thinking and mark it down, but other than that, I don’t have an issue with my self-image being (significantly, long-term) tied to how I estimate my efficacy at rationality, one way or another. I just enjoy the process.
If you’re referring to how I value others, then rationality seems inextricably tied to how I think of them. As I learn how people arrive at certain views or actions, I consider them either more or less justified in doing so, and more or less “valuable” than others, if I may speak so bluntly of my fellow man. If I don’t think there’s a good reason to vandalize someone’s property, and I do think there’s a good reason to offer food to a homeless man, then, given that isolated knowledge and a choice from Omega as to whom I wish to save (assuming I can’t save both), I’ll save the person who commits more justified actions. Learning about difficult-to-lose biases that can lead one to do “bad things”, or about misguided notions that can cause people to do right for the wrong reason, inevitably changes how I view others (however incrementally), even if I don’t grant them agency and see them as “merely” complex machines.
Do you actually value us and temporarily convince yourself otherwise, or is it the other way around?
Considering that I know that valuing others is the ideal, and that if I didn’t believe I valued them, I’d prefer to believe it, it would be difficult to honestly say that I don’t value others. I’m not an empathetic person and don’t tend to find myself worrying about the future of humanity, but I try to think as if I do for the purpose of moral questions.
Seeing as I value valuing you, and am, from the outside, largely indistinguishable from somebody who values you, I think I can safely say that I do value others.
But I didn’t quite have the confidence to answer that flatly.