I don’t. Simple selfishness is definitely an attractor (in the sense that it’s an attitude that many people end up adopting), and it wouldn’t take much axiological surgery to make it reflectively consistent.
Individual humans sometimes become more selfish, but not consistently reflectively so, and humanity seems to be becoming more humane over time. Obviously there’s a lot of interpersonal and even intrapersonal variance, but the trend in human values is both intuitively apparent and empirically verified by e.g. the World Values Survey and other axiological sociology. Also, I doubt selfishness is as strong an attractor among people who are the smartest and most knowledgeable of their time. Look at e.g. Maslow’s research on people at the peak of human performance and mental health, and the attractors he identified as self-actualization and self-transcendence. Selfishness (or simple/naive selfishness) mostly seems like a pitfall for stereotypical amateur philosophers and venture capitalists.
I’m taking another look at this and find it hard to sum up just how many problems there are with your argument.
I doubt selfishness is as strong an attractor among people who are the smartest and most knowledgeable of their time.
What about the people who are the most powerful of their time? Think about what the psychology of a billionaire must be. You don’t accumulate that much wealth just by setting out to serve humanity. You care about offering your customers a service, but you also try to kill the competition, and you cut deals with the existing powers, especially the state. Most adults are slaves, to an economic function if not literally to another person’s orders, and then there is a small wealthy class of masters with the desire and the ability to take advantage of this situation.
I started out by opposing “simple selfishness” to your hypothesis that “Buddhahood or something close” is the natural endpoint of human moral development. But there’s also group allegiance: my family, my country, my race, but not yours. I look out for my group, it looks out for me, and caring about other groups is a luxury for those who are really well off. Such caring is also likely to be pursued in a form which is advantageous, whether blatantly or subtly, for the group which gets to play benefactor. We will reshape you even while we care for you.
Individual humans sometimes become more selfish, but not consistently reflectively so
How close do you think anyone has ever come to reflective consistency? Anyway, you are reflectively consistent if there’s no impulse within you to change your goals. So anyone, whatever their current goals, can achieve reflective consistency by removing whatever impulses for change-of-values they may have.
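To make the fixed-point structure of that claim explicit, here’s a minimal toy sketch (my own illustration of the argument, not anything from the thread): model an agent’s values as a set and its impulse-to-revise as a function on value sets; reflective consistency is then just being a fixed point of that function.

```python
# Toy model of reflective consistency as a fixed point of self-revision.
# The value sets and revision rules below are hypothetical illustrations.

def reflectively_consistent(values, propose_revision):
    """An agent is reflectively consistent if applying its own
    revision impulse to its values leaves them unchanged."""
    return propose_revision(values) == values

# An agent with a standing impulse to change its values is not consistent...
restless = lambda v: v | {"more compassion"}
print(reflectively_consistent({"selfishness"}, restless))    # False

# ...but it can make ANY value set a fixed point simply by deleting the
# revision impulse, which is the move described above.
quiescent = lambda v: v
print(reflectively_consistent({"selfishness"}, quiescent))   # True
```

On this reading, reflective consistency is cheap: it constrains the relation between values and revision impulses, not the content of the values themselves.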
the only real attractor in mindspace that could be construed as reflectively consistent given the vector humanity seems to be on.
Reflective consistency isn’t a matter of consistency with your trajectory so far, it’s a matter of consistency when examined according to your normative principles. The trajectory so far did not result from any such thorough and transparent self-scrutiny.
Frankly, if I ask myself, what does the average human want to be, I’d say a benevolent dictator. So yes, the trend of increasing humaneness corresponds to something—increased opportunity to take mercy on other beings. But there’s no corresponding diminution of interest in satisfying one’s own desires.
Let’s see, what else can I disagree with? I don’t really know what your concept of Buddhahood is, but it sounds a bit like nonattachment for the sake of pleasure. I’ll take what pleasures I can, and I’ll avoid the pain of losing them by not being attached to them. But that’s aestheticism or rational hedonism. My understanding of Buddhahood is somewhat harsher (to a pleasure-seeking sensibility), because it seeks to avoid pleasure as well as pain, the goal after all being extinction, removal from the cycle of life. But that was never a successful mass philosophy, so you got more superstitious forms of Buddhism in which there’s a happy pure-land afterlife and so on.
I also have to note that an AI does not have to be a person, so it’s questionable what implications trends in human values have for AI. What people want themselves to be and what they would want a non-person AI to be are different topics.
Frankly, if I ask myself, what does the average human want to be, I’d say a benevolent dictator.
Seriously? For my part, I doubt that a typical human wants to do that much work. I suspect that “favored and privileged subject of a benevolent dictator” would be much more popular. Even more popular would be “favored and privileged beneficiary of a benevolent system without a superior peer.”
But agreed that none of this implies a reduced interest in having one’s desires satisfied.
(ETA: And, I should note, I agree with your main point about nonuniversality of drives.)
Immediately after I posted that, I doubted it. A lot of people might just want autonomy—freedom from dependency on others and freedom from the control of others. Dictator of yourself, but not dictator of humanity as a whole. Though one should not underestimate the extent to which human desire is about other people.
Will Newsome is talking about—or I thought he was talking about—value systems that would be stable in a situation where human beings have superintelligence working on their side. That’s a scenario where domination should become easy and without costs, so if people with a desire to rule had that level of power, the only thing to stop them from reshaping everyone else would be their own scruples about doing so; and even if they were troubled in that way, what’s to stop them from first reshaping themselves so as to be guiltless rulers of the world?
Also, even if we suppose that that outcome, while stable, is not what anyone would really want, if they first spent half an eternity in self-optimization limbo investigating the structure of their personal utility function… I remain skeptical that “Buddhahood” is the universal true attractor, though it’s hard to tell without knowing exactly what connotations Will would like to convey through his use of the term.
I am skeptical about universal attractors in general, including but not limited to Buddhahood and domination. (Psychological ones, anyway. I suppose entropy is a universal attractor in some trivial sense.) I’m also inclined to doubt that anything is a stable choice, either in the sense you describe here, or in the sense of not palling after a time of experiencing it.
Of course, if human desires are editable, then anything can be a stable choice: just modify the person’s desires such that they never want anything else. By the same token, anything can be a universal attractor: just modify everyone’s desires so they choose it. These seem like uninteresting boundary cases.
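As a toy illustration of why these are degenerate cases (the modeling choices here are my assumptions, not anything from the thread), treat value change as repeated application of an update rule; a desire-editing rule that overwrites every state with one target makes that target a trivial universal attractor.

```python
# Toy dynamics: an "attractor" is a state the update rule no longer moves.

def iterate_to_fixed_point(update, state, max_steps=1000):
    """Apply the update rule until the state stops changing."""
    for _ in range(max_steps):
        nxt = update(state)
        if nxt == state:
            return state
        state = nxt
    return state

def pin(target):
    """Desire-editing as an update rule: rewrite any values to `target`.
    Under such a rule, `target` is trivially a universal attractor."""
    return lambda state: target

for start in ("selfish", "tribal", "humane"):
    print(start, "->", iterate_to_fixed_point(pin("buddhahood"), start))
# Every starting point reaches the same state in one step, which is why
# editable desires make "universal attractor" an uninteresting property.
```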
I agree that some humans would, given the option, choose domination. I suspect that’s <1% of the population given a range of options, though rather more if the choice is “dominate or be dominated.” (Although I suspect most people would choose to try it out for a while, if that were an option, and then give it up in less than a year.)
I suspect about the same percentage would choose to be dominated as a long-term lifestyle choice, given the expectation that they can quit whenever they want.
I agree that some would choose autonomy, though again I suspect not that many (<5%, say) would choose it for any length of time.
I suspect the majority of humans would choose some form of interdependency, if that were an option.
Entropy is the lack of an identifiable attractor.