In that sense, I don’t know whether modelling different people differently is, for me, a morally right or wrong thing to do. However, I spoke to someone whose default is not to assign people moral value unless he models them as agents. I can see this being problematic, since it’s a high standard.
Thanks. Given that you want to improve your rationality to begin with, though, is believing that your moral worth depends on it really beneficial? Pain and gain motivation seems relevant.
Later, you say:
I can still rather easily choose to view people as agents and assign moral value in any context where I have to make a decision, so I don’t think having a significantly reduced moral value for others is to my detriment.
Do you actually value us and temporarily convince yourself otherwise, or is it the other way around?
Given that you want to improve your rationality to begin with, though, is believing that your moral worth depends on it really beneficial?
I’m not sure whether you’re asking about the moral worth I assign to myself or to others, so I’ll answer both.
If you’re referring to the moral worth I assign to myself, I’m assuming the problem would be that, as I learn about biases, I would consider myself less of an agent, and so wouldn’t be motivated to discover my mistakes. You’ll have to take my word for it that I pat myself on the back whenever I discover an error in thinking and mark it down, but other than that, I don’t have an issue with my self-image being (significantly, long-term) tied to how I estimate my efficacy at rationality, one way or another. I just enjoy the process.
If you’re referring to how I value others, then rationality seems inextricably tied to how I think of them. As I learn how people arrive at certain views or actions, I consider them more or less justified in doing so, and more or less “valuable” than others, if I may speak so bluntly of my fellow man. If I don’t think there’s a good reason to vandalize someone’s property, and I do think there’s a good reason to offer food to a homeless man, then, given only that isolated knowledge and a choice from Omega about whom to save (assuming I can’t save both), I’ll save the person who commits more justified actions. Learning about difficult-to-lose biases that can lead people to do “bad things”, or about misguided notions that can cause people to do right for the wrong reasons, inevitably changes how I view others (however incrementally), even if I don’t grant them agency and see them as “merely” complex machines.
Do you actually value us and temporarily convince yourself otherwise, or is it the other way around?
Considering that I know saying I value others is the ideal, and that if I didn’t believe it I would prefer to, it would be difficult for me to honestly say that I don’t value others. I’m not an empathetic person and don’t tend to find myself worrying about the future of humanity, but I try to think as if I do for the purpose of moral questions.
Seeing as I value valuing you, and am, from the outside, largely indistinguishable from somebody who values you, I think I can safely say that I do value others.
But I didn’t quite have the confidence to answer that flatly.