What I’m saying is that we already ignore the suffering of those who suffer the most.
Probably true, and possibly a tautology.
However, I think it’s the same fallacy as judging societies only by how the lowest status people are treated. It’s ignoring what happens to a large proportion, perhaps the majority of people.
Also, if better treatment can be figured out for some groups, then perhaps the knowledge can be applied to other suffering when it gets noticed. Life with people isn’t entirely zero-sum.
If you see life solely (or even merely primarily) in terms of status, as I believe Konkvistador does, then it is indeed a zero-sum game, since a person’s status is a relative ranking, and not an absolute measure (as contrasted with, say, top running speed).
Even if life is solely a zero-sum game, it would still be possible to narrow the status differences. It’s one thing to have most people think you’re funny-looking, and another to be at risk of being killed on sight.
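The distinction in the last two comments can be made concrete with a toy sketch (hypothetical Python, not anyone's actual model): rank itself is zero-sum, since the set of ranks is fixed, but the consequences attached to each rank can be compressed without changing who outranks whom.

```python
# Status as rank is zero-sum: the ranks {0..n-1} are a fixed set, so one
# agent rising means another falls. But the *payoff* attached to each rank
# can be compressed, narrowing status differences while preserving the
# ordering itself.

def ranks(scores):
    """Rank agents by score (0 = lowest status)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    result = [0] * len(scores)
    for rank, i in enumerate(order):
        result[i] = rank
    return result

def payoffs(rank_list, spread):
    """Map ranks to outcomes; `spread` controls how much rank matters."""
    return [spread * r for r in rank_list]

r = ranks([10, 20, 30])          # [0, 1, 2] -- totals to 3 no matter the scores
harsh = payoffs(r, spread=100)   # [0, 100, 200]: low rank is crushing
mild = payoffs(r, spread=1)      # [0, 1, 2]: same ordering, far smaller stakes

assert sum(r) == 3                                       # total rank is conserved
assert max(harsh) - min(harsh) > max(mild) - min(mild)   # differences narrowed
```

The point of the sketch: "narrowing status differences" need not mean reshuffling the ranking (which is zero-sum); it can mean shrinking `spread`, the gap in outcomes between ranks.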
That is true, but narrowing the status differences would severely penalize anyone whose status is higher than the minimum (or possibly only those with above-average status, depending on the scale you’re using). If we measure quality of life solely in terms of status, then such an action would be undesirable.
Granted, if we include other measures in our calculation, then it all depends on what weights we place on each measure, status included.
It also depends on just how much narrowing we’re doing. I think that eliminating “able to literally get away with murder” wouldn’t be a great loss.
Is there a reason we might want to do this? It feels like your comments in this thread unjustifiably privilege this model.
Again, as far as I understand, Konkvistador believes that humans are driven primarily by their desire to achieve a higher status, and that this is in fact one of our terminal goals. If we assume that this is true, then I believe my comments are correct.
Is that actually true, though? Are humans driven primarily by their desire to achieve a higher status (in addition to the desires directly related to physical survival, of course)? I don’t know, but maybe Konkvistador has some evidence for the proposition—assuming, of course, that I’m not misinterpreting his viewpoint.
This needs to be considered separately as (1) a descriptive statement about actions, (2) a descriptive statement about subjective experience, or (3) a normative statement about the utilitarian good. It seems much more accurate as (1) than (2) or (3), and I think Konkvistador means it as (1); meanwhile, statements about “quality of life” could mean (2) or (3) but not (1).
I don’t understand what (1) means; can you explain?
The three interpretations I mean are:
(1) People’s behavior is accurately predicted by modeling them as status-maximizing agents.
(2) People’s subjective experience of well-being is accurately predicted by modeling it as proportional to status.
(3) A person is well-off, in the sense that an altruist should care about, in proportion to their status.
Is that clearer?
Yes, thank you. As far as I can tell, (1) and (2) are closest to the meaning I inferred. I understand that we can consider them separately, but IMO (2) implies (1).
If an agent seeks to maximize its sense of well-being (as it would be reasonable to assume humans do), then we would expect the agent to take actions which it believes will achieve this effect. Its beliefs could be wrong, of course, but since the agent is descended from a long line of evolutionarily successful agents, we can expect it to be right a lot more often than it’s wrong.
Thus, if the agent’s sense of well-being can be accurately predicted as being proportional to its status (regardless of whether the agent itself is aware of this or not), then it would be reasonable to assume that the agent will take actions that, on average, lead to raising its status.
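The "(2) implies (1)" argument above can be sketched as a toy model (hypothetical Python, with made-up actions and numbers): if felt well-being is proportional to status, then an agent that simply picks the action maximizing expected well-being ends up picking the status-maximizing action, whether or not it ever thinks about status.

```python
# Toy model of "(2) implies (1)": an agent maximizing its expected sense
# of well-being, where well-being is proportional to status, necessarily
# chooses the same action as an explicit status-maximizer would.

# Hypothetical actions and the expected status change of each.
expected_status_gain = {"boast": 2.0, "defer": -1.0, "stay_home": 0.0}

K = 3.0  # well-being per unit of status (any positive constant works)

def wellbeing(action):
    """Felt well-being, assumed proportional to status gained."""
    return K * expected_status_gain[action]

# The agent only ever maximizes its own well-being...
chosen = max(expected_status_gain, key=wellbeing)

# ...but an outside observer sees it maximizing status:
status_maximizer = max(expected_status_gain, key=expected_status_gain.get)

assert chosen == status_maximizer == "boast"
```

Note the implication holds only because the constant `K` is positive: any monotonically increasing function of status would do, so the argument doesn't depend on strict proportionality.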
Consider this explanation, too.