Imagine an agent that maximizes an altruistic utility function. It still only maximizes its own utility, no one else’s. And its utility function wouldn’t even directly depend on other agents’ utility (your utility function might or might not refer to someone else’s, but a positive dependence on its value could, if reciprocated, cause a runaway feedback loop). But it does value other agents’ health, happiness, freedom, etc. (i.e. most or all of the same inputs that would go into a selfish agent’s utility function, except aggregated).
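The feedback-loop worry can be made concrete with a toy fixed-point iteration. This is a hypothetical sketch (the coupling constant k and the linear form are my assumptions, not anything from the post): suppose each agent’s utility included a positive term proportional to the other agent’s utility *value*.

```python
# Hypothetical sketch: why letting U_A depend on U_B's *value* (and vice
# versa) can run away. Assume U_A = a + k*U_B and U_B = b + k*U_A, where
# a, b are each agent's direct payoffs and k is the mutual coupling weight.

def iterate(a, b, k, steps=200):
    U_A = U_B = 0.0
    for _ in range(steps):
        # Each agent re-evaluates given the other's current utility value.
        U_A, U_B = a + k * U_B, b + k * U_A
    return U_A, U_B

# k < 1: the mutual dependence is damped and settles at the fixed point
# U_A = (a + k*b) / (1 - k**2); here (1 + 0.5) / (1 - 0.25) = 2.0.
U_A_stable, _ = iterate(1.0, 1.0, 0.5)

# k >= 1: each agent's utility feeds the other's without damping,
# so the values grow without bound -- the runaway feedback loop.
U_A_runaway, _ = iterate(1.0, 1.0, 1.1, steps=50)
```

Valuing the other agent’s *inputs* (health, happiness) instead of its utility value breaks this circularity: the inputs are facts about the world, not quantities defined in terms of each other.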
Two such agents don’t have to have exactly the same utility function. As long as A values A’s happiness, and B values A’s happiness, then A and B can agree to take some action that makes A happier, even using ordinary causal decision theory with no precommitment mechanism.
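The cooperation claim above can be sketched in a few lines. All the particular weights and the single “happiness” variable are illustrative assumptions; the point is only that two agents with *different* utility functions, each defined over observable welfare inputs rather than over each other’s utility values, can both strictly prefer the same action under plain causal decision theory.

```python
# Hypothetical sketch: A and B have different utility functions, but both
# assign positive weight to A's happiness (an observable input, not a
# utility value), so both prefer an action that raises it.

# World state: each agent's observable welfare inputs.
state = {"A": {"happiness": 5.0}, "B": {"happiness": 5.0}}

def utility_A(s):
    # A weights its own happiness most, plus B's -- aggregated inputs.
    return 1.0 * s["A"]["happiness"] + 0.5 * s["B"]["happiness"]

def utility_B(s):
    # B's weights differ from A's; it merely values A's happiness positively.
    return 0.3 * s["A"]["happiness"] + 1.0 * s["B"]["happiness"]

def joint_action(s, delta=1.0):
    # A candidate action whose causal effect is to raise A's happiness.
    return {"A": {"happiness": s["A"]["happiness"] + delta},
            "B": dict(s["B"])}

new_state = joint_action(state)

# Ordinary causal decision theory: each agent just compares the outcomes
# of acting vs. not acting. No precommitment mechanism is needed.
assert utility_A(new_state) > utility_A(state)  # A prefers the action
assert utility_B(new_state) > utility_B(state)  # so does B
```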
There is no such thing as an altruistic utility function. By “selfish” I mean exactly that it maximizes its own utility. It doesn’t matter if it values the purring of kittens and the happy smiles of children. It is still selfish. An unselfish agent is one that lets you rewrite its utility function.
You are making exactly the same misinterpretation that almost every commenter here is making, and it is based on reading using pattern-matching instead of parsing. Just forget the word selfish. I have removed it from the original statement. I am sorry that it confused people, and I understand why it could.
By your interpretation, using the word “selfish” will never add any extra information and “selfish utility maximizer” is a tautology.
If this is true, please stop using it. You’re just confusing people since they’re naturally expecting you to use the normal, non-tautological, more interesting definition of “selfish”.
By your interpretation, using the word “selfish” will never add any extra information and “selfish utility maximizer” is a tautology.
Yes, you are correct. I’m sorry that I used the word selfish. If you had read my post before replying, you would have seen this sentence:
You may argue that pragmatics argue against this use of the word “selfish” because it thus adds no meaning. Fine. I have removed the word “selfish”.
But, jeez, folks—can’t any of you get past the use of the word ‘selfish’ and read the post? You are all off chasing red herrings. This is not an argument about whether rational agents are selfish or not. It does not make a difference to the argument I am presenting whether you believe rational agents are selfish or cooperative.