It seems clear from context that he means it hedonistically, i.e. my own hedonistic experience is my only concern if I’m selfish; I don’t care about what other people want or think.
Instead of trying to interpret the context, you should believe that I mean what I say literally. I repeat:
If you still think that you wouldn’t, it’s probably because you’re thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience. It doesn’t. It’s a 1% increase in your utility. If you factor the rest of your universe into your utility function, then it’s already in there.
In fact, I have already explained my usage of the word “selfish” to you in this same context, repeatedly, in a different post.
Psychohistorian wrote:
Utility curves are strictly arational. A rational paperclip maximizer is an entirely possible being. Any statement of the kind “Rational agents are/are not selfish” is a type error; selfishness is entirely orthogonal to rationality.
I quote myself again:
If you act in the interest of others because it’s in your self-interest, you’re selfish. Rational “agents” are “selfish”, by definition, because they try to maximize their utility functions. An “unselfish” agent would be one trying to also maximize someone else’s utility function. That agent would either not be “rational”, because it was not maximizing its utility function; or it would not be an “agent”, because agenthood is found at the level of the utility function.
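To make that usage concrete, here is a toy example; the names and the 0.5 weighting are invented purely for illustration, not anything from the post:

    # A "selfish" agent in the sense used above: it maximizes its own utility
    # function, but that function can already contain a term for how others fare.
    # The names and the 0.5 weight are made up for illustration.

    def my_utility(own_outcome: float, others_outcome: float, care_weight: float = 0.5) -> float:
        """The one function this agent maximizes; others' welfare is just one of its terms."""
        return own_outcome + care_weight * others_outcome

    # Acting in others' interest can be the maximizing move, yet the agent is still
    # "selfish" by this definition, because the function being maximized is its own.
    print(my_utility(own_outcome=1.0, others_outcome=2.0))  # prints 2.0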
Rational agents incorporate the benefits to others into their utility functions.
as a section header may have thrown me off there.
That aside, I do understand what you’re saying, and I did notice the original 1%-for-1% contrast. Though I’d note it doesn’t follow that a rational agent would be willing to take a 1% chance of destroying the universe in exchange for a 1% increase in his utility function; the universe being destroyed would probably have negative utility, i.e. a greater-than-100% loss, so that’s not an even bet.
The whole arational point is my mistake; the whole paragraph:
But maybe they’re just not as rational as you...
reads very much like it is using “selfish” in the strict sense rather than the holistic-utility sense, and that was what I was focusing on in this response. I was focusing specifically on that section and did not reread the whole post, so I got the wrong idea. My point on evolution remains, and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail. But this doesn’t matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.
and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail
That’s why what I wrote in that section was:
it’s not possible that you would not accept a .999% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility.
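To put the arithmetic in one place, here is a small sketch of the expected-value comparison. The multiplicative reading of “a 1% increase in your utility”, the unit current utility, and the sample null-state utilities are illustrative assumptions, not figures from the post:

    # Break-even risk for "gain 1% utility vs. some chance of destroying everything",
    # as a function of the utility assigned to the post-destruction null state.
    # Assumptions for illustration: current utility is 1.0, and "a 1% increase in
    # your utility" means multiplying it by 1.01.

    def break_even_risk(gain=0.01, null_utility=0.0, current=1.0):
        """Largest destruction probability p at which accepting the wager still has
        expected utility at least as high as the status quo:
            (1 - p) * current * (1 + gain) + p * null_utility >= current
        """
        return gain * current / (current * (1 + gain) - null_utility)

    for null_utility in (0.0, -1.0, -10.0):
        p = break_even_risk(null_utility=null_utility)
        print(f"null state at {null_utility:+5.1f}: break-even risk = {p:.4%}")

    # Prints roughly 0.9901% at null utility 0, 0.4975% at -1, and 0.0908% at -10.
    # Under this reading the break-even sits just under 1% when the null state is
    # worth zero, and assigning the destroyed state any negative utility pushes it
    # lower -- the negative-utility objection being discussed above.

The qualitative point is the one both sides accept: whether a near-1% risk is worth a 1% gain turns entirely on the utility assigned to the destroyed state.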
You wrote:
But this doesn’t matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.
I am supposing that. That’s why it’s in the title of the post. I don’t mean that I am certain that is how things will turn out to be. I mean that this post says that rational behavior leads to these consequences. If that means that the only way to avoid the destruction of life is to cultivate a particular bias, then that’s the implication.
Of course, you have already shown that you choose to pretend I am using the word “selfish” in the colloquial sense which I have repeatedly explicitly said is not the sense I am using it in, in this post and in others, so this isn’t going to help.
If it isn’t working, why don’t you try something different?
(I deleted that paragraph.)
Do you have an idea for something else to try?
I don’t think it’s really a necessary distinction; the idea of an unselfish utility maximizer doesn’t quite make sense, because utility is defined so nebulously that pretty much everyone has to seek to maximize their utility.
You’re right that it doesn’t make sense, which is why some people assume I mean something else when I say “selfish”. But a lot of commenters do seem to believe in unselfish utility maximizers, which is why I keep using the word.
Avoiding morally charged words. If possible, shy far, far away from ANY pattern that people can automatically match against with system 2 so that system 1 stays engaged.
My article here http://www.forbes.com/2009/06/22/singularity-robots-computers-opinions-contributors-artificial-intelligence-09_land.html is an attempt to do this.
Do you mean “system 1 … system 2”?