And even if you somehow worked around all these arguments, evolution, again, thwarts you. Even if you don’t agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents. The claim that rational agents are not selfish implies that rational agents are unfit.
This is not how evolution works. Evolution cares about how many of your offspring survive. Selfishness need not be conducive to this. Also, evolution can’t really thwart you. You’re done evolving; you can check it off your to-do list.
It’s entirely plausible that being unselfish is adaptive; from a personal perspective (the non-gene perspective, i.e. the one we actually have), having children is extremely unselfish.
Selfishness and unselfishness are arational. Rationality is about maximizing the output of your utility function (in this context). Selfishness is about what that utility function actually is.
Honestly, isn’t this nitpicking? It’s true that Lord Azathoth stopped selecting for genes in our species ten thousand years ago, but when that game stopped working for him he switched to making our memes compete against each other (in any sane world we’d be having this conversation in Chinese, and my mother’s ‘Scottish’ surname wouldn’t be Nordic).
You’re absolutely right, and he did simplify this portion, but it doesn’t undermine the weight of his argument any more than my saying “I’m not sexist, I’m a fully evolved male!” is rendered irrelevant by the fact that current social mores have little to nothing to do with evolutionary biology.
It’s one thing to correct Phil’s statement, or offer a suggested rewording that would strengthen the point he was trying to make, but it feels as if you’re pinpointing this one poor choice of wording and using it to imply that the entire premise is flawed.
as if you’re pinpointing this one poor choice of wording and using it to imply that the entire premise is flawed.
Argumentum ad evolutionum is both common enough and horribly wrong enough that I would not call it “nitpicking.” The claim that unselfish agents will be outcompeted by selfish agents is complex, context-dependent, and requires support. The idea that there will somehow be an equilibrium in which unselfish agents get crowded out seems absurd, and this is what “evolution” seems intended to evoke, because evolution is (in significant part) about competitively crowding out the sub-optimal.
He also makes a much bigger mistake, and I should have addressed that in greater detail. Utility curves are arational, and the term “selfish” gets confused way more than it should. It seems clear from context that he means it hedonistically, i.e. my own hedonistic experience is my only concern if I’m selfish; I don’t care about what other people want or think. If my actual utility curve involves other people’s utility, or it involves maximizing the number of paper clips in existence, there is absolutely no reason to believe I could better accomplish my goals if I were “selfish” by this definition.
Utility curves are strictly arational. A rational paperclip maximizer is an entirely possible being. Any statement of the kind “Rational agents are/are not selfish” is a type error; selfishness is entirely orthogonal to rationality.
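To make the orthogonality point concrete, here is a minimal sketch of an expected-utility maximizer in which the utility function is just a plug-in parameter; the function names and toy numbers below are illustrative assumptions of mine, not anything from the thread. The same decision procedure is used whether the plugged-in utility cares only about paperclips or weights other people’s welfare.

```python
# Minimal sketch: the decision procedure is fixed; "selfishness" lives entirely
# in which utility function gets plugged in.

def best_action(outcomes, utility):
    """Return the action with the highest expected utility.

    outcomes: dict mapping action -> list of (probability, world_state) pairs
    utility:  function mapping a world_state to a number
    """
    def expected_utility(action):
        return sum(p * utility(state) for p, state in outcomes[action])
    return max(outcomes, key=expected_utility)

# Toy outcomes (illustrative numbers only).
outcomes = {
    "build_paperclip_factory": [(1.0, {"paperclips": 1000, "mine": 1, "theirs": 0})],
    "help_neighbor":           [(0.9, {"paperclips": 0, "mine": 2, "theirs": 5}),
                                (0.1, {"paperclips": 0, "mine": 0, "theirs": 0})],
}

paperclip_utility = lambda s: s["paperclips"]                # cares only about paperclips
other_regarding_utility = lambda s: s["mine"] + s["theirs"]  # weights others' welfare too

print(best_action(outcomes, paperclip_utility))        # build_paperclip_factory
print(best_action(outcomes, other_regarding_utility))  # help_neighbor
```

Both runs use the same maximizer; calling either of the plugged-in utility functions “irrational” would be a type error in exactly the sense above.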
It seems clear from context that he means it hedonistically, i.e. my own hedonistic experience is my only concern if I’m selfish; I don’t care about what other people want or think.
Instead of trying to interpret the context, you should believe that I mean what I say literally. I repeat:
If you still think that you wouldn’t, it’s probably because you’re thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience. It doesn’t. It’s a 1% increase in your utility. If you factor the rest of your universe into your utility function, then it’s already in there.
In fact, I have already explained my usage of the word “selfish” to you in this same context, repeatedly, in a different post.
Psychohistorian wrote:
Utility curves are strictly arational. A rational paperclip maximizer is an entirely possible being. Any statement of the kind “Rational agents are/are not selfish” is a type error; selfishness is entirely orthogonal to rationality.
I quote myself again:
If you act in the interest of others because it’s in your self-interest, you’re selfish. Rational “agents” are “selfish”, by definition, because they try to maximize their utility functions. An “unselfish” agent would be one trying to also maximize someone else’s utility function. That agent would either not be “rational”, because it was not maximizing its utility function; or it would not be an “agent”, because agenthood is found at the level of the utility function.
Rational agents incorporate the benefits to others into their utility functions.
That as a section header may have thrown me off there.
That aside, I do understand what you’re saying, and I did notice the original contrast between the 1% gain and the 1% risk. Though I’d note it doesn’t follow that a rational agent would be willing to take a 1% chance of destroying the universe in exchange for a 1% increase in his utility function; the universe being destroyed would probably have negative utility, i.e. a greater-than-100% loss, so that’s not an even bet.
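To spell out the arithmetic, here is a back-of-the-envelope sketch under a normalization of my own choosing (status quo utility = 1, a “1% increase” = 1.01, post-destruction utility = d); it shows how the acceptable risk shrinks as d goes negative.

```python
# Break-even destruction risk for a 1% utility gain, assuming the status quo is
# normalized to utility 1 and the post-destruction state has utility d.
# Accept a risk p of destruction iff (1 - p) * 1.01 + p * d >= 1,
# i.e. iff p <= 0.01 / (1.01 - d).

def break_even_risk(gain=0.01, d=0.0):
    return gain / (1 + gain - d)

print(break_even_risk(d=0.0))    # ~0.0099: with a neutral null state, a risk just under 1% breaks even
print(break_even_risk(d=-1.0))   # ~0.0050: a mildly negative null state halves the acceptable risk
print(break_even_risk(d=-99.0))  # ~0.0001: a strongly negative null state makes the 1%-for-1% bet fail badly
```

On this framing both sides of the exchange are consistent: with d = 0 an expected-value maximizer accepts a risk just under 1%, and any negative d pushes the threshold below that.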
The whole arational point is my mistake; the whole paragraph:
But maybe they’re just not as rational as you...
reads very much like it is using selfish in the strict rather than holistic utility sense, and that was what I was focusing on in this response. I was focusing specifically on that section and did not reread the whole post, so I got the wrong idea. My point on evolution remains, and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail. But this doesn’t matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.
and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail
That’s why what I wrote in that section was:
it’s not possible that you would not accept a .999% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility.
You wrote:
But this doesn’t matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.
I am supposing that. That’s why it’s in the title of the post. I don’t mean that I am certain that is how things will turn out to be. I mean that this post says that rational behavior leads to these consequences. If that means that the only way to avoid the destruction of life is to cultivate a particular bias, then that’s the implication.
Of course, you have already shown that you choose to pretend I am using the word “selfish” in the colloquial sense which I have repeatedly explicitly said is not the sense I am using it in, in this post and in others, so this isn’t going to help.
If it isn’t working, why don’t you try something different?
I don’t think it’s really a necessary distinction; the idea of an unselfish utility maximizer doesn’t quite make sense, because utility is defined so nebulously that pretty much everyone counts as seeking to maximize their utility.
the idea of an unselfish utility maximizer doesn’t quite make sense
You’re right that it doesn’t make sense, which is why some people assume I mean something else when I say “selfish”. But a lot of commenters do seem to believe in unselfish utility maximizers, which is why I keep using the word.
Evolution cares about how many of your offspring survive. Selfishness need not be conducive to this.
Selection, acting on the individual, selects for those individuals who act in ways that cause their own offspring to survive more. That is what I mean by selfishness. Selfish genes. Selfish memes.
Once people no longer die, selection will have less to do with death and reproduction and more to do with the accumulation of resources. Think about that, and it becomes clearer that this will select directly for selfishness in the conventional sense.
If it isn’t working, why don’t you try something different?
(I deleted that paragraph.)
Do you have an idea for something else to try?
Avoiding morally charged words. If possible, shy far, far away from ANY pattern that people can automatically match against with system 2 so that system 1 stays engaged.
My article here http://www.forbes.com/2009/06/22/singularity-robots-computers-opinions-contributors-artificial-intelligence-09_land.html is an attempt to do this.
Do you mean “system 1 … system 2”?