By this I mean it’s entirely in our self-interest to act in the interest of some others. That was to partially address your “unselfish agents will be out-competed by selfish agents” claim. False dichotomy.
It’s not a false dichotomy. If you act in the interest of others because it’s in your self-interest, you’re selfish. Rational “agents” are “selfish”, by definition, because they try to maximize their utility functions. An “unselfish” agent would be one trying to also maximize someone else’s utility function. That agent would either not be “rational”, because it was not maximizing its utility function; or it would not be an “agent”, because agenthood is found at the level of the utility function. I tried to make this point in another thread, and lost something like 20 karma points doing so. But it’s still right. I ask anyone down-voting this comment to provide some alternative interpretation under which a rational agent is not selfish.
EDIT: A great example of what I mean by “agenthood is found at the level of the utility function” is that you shouldn’t consider an ant an agent: the individual ant isn’t maximizing a utility function of its own; to the extent anything is, it’s the colony.
The whole point of the essay is to try to find some way for it to be in everyone’s self-interest to act in ways that will prevent us from taking small risks of exterminating life. And I failed to find any such way. So you see, the entire essay is predicated on the point that you’re making.
You haven’t given a convincing argument that people will stop having Life-And-Death conflicts.
Do you mean, I haven’t given a convincing argument that people will not stop having Life-And-Death conflicts?
On the LHC, it sounds like you’re arguing for a more precautionary approach to science.
Not exactly. The next hundred thousand years are a special case.
I think we’re agreeing on the first point—any rational agent is selfish. But then there’s no such thing as an unselfish agent, right? Also, no need to use the term selfish, if it’s implicit in rational agent. If unselfish agents don’t exist, it’s easy to out-compete them!
“trying to also maximize someone else’s utility function… would not be an ‘agent’, because agenthood is found at the level of the utility function.” What do you mean by this? I read this as saying that a utility function which is directly dependent on another’s utility is not a utility function. In other words, anyone who cares about another, and takes direct pleasure from another’s wellbeing, is not an agent. If that’s what you mean, then most humans aren’t agents. Otherwise, I’m not understanding.
On Life-And-Death conflicts, yes, that’s what I meant. You haven’t given any such argument!
On the LHC, why are the next hundred thousand years a special case? And again, under what conditions should the LHC run?
Also, no need to use the term selfish, if it’s implicit in rational agent.
Right—now I remember, we’ve gone over this before. I think it is implicit in rational agent; but a lot of people forget this, as evidenced by the many responses that say something like, “But it’s often in an agent’s self-interest to act in the interest of others!”
If you think about why they’re saying this in protest against my claim that a rational agent is selfish, it can’t be because they are legitimately trying to point out that a selfish agent will sometimes act in ways that benefit others. That would be an uninteresting and uninformative point. No, I think the only thing they can mean is that they believe decision theory is something like the Invisible Hand, and will magically result in an equilibrium where everybody is nice to each other, so the agents really aren’t selfish at all.
So I use the word “selfish” to emphasize that, yes, these agents really pursue their own utility.
(Well, “we” haven’t—I’m pretty new on these forums, and missed that disagreement!)
You still haven’t addressed any of my complaints with your argument. I never mentioned anything about time-discounting—it looked like you saw your second-to-last proposition as the only one with merit, so I was addressing the two that you dismissed.
In my first point, now that we are clear on definitions, I meant that you implied a dichotomy between agents whose utility functions are entirely independent of other people’s and those whose utility functions are very heavily dependent (Scrooges and Saints). You then made the statement “unselfish agents will be out-competed by selfish agents.” Since we agree that there’s no such thing as an unselfish agent, you probably meant “people who care a lot about everyone will be out-competed by people who care about nobody but themselves” (selfish rational agents with highly dependent vs. highly independent utility functions). This is a false dichotomy because most people don’t fall into either extreme: they have a utility function that depends on some others, but not on everyone and not to an equal degree.
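To make that continuum concrete, here is a minimal sketch (the payoffs and the “care weight” are made up purely for illustration, not a model of anyone’s actual values): a single weight parameter slides an agent smoothly from Scrooge to Saint, with most people somewhere in between.

```python
# Illustrative sketch: one utility function with an adjustable weight on
# other people's payoffs.  "Scrooge" and "Saint" are just the endpoints of
# the weight parameter, not two different kinds of agent.

def utility(own_payoff, others_payoffs, care_weight):
    """care_weight = 0 -> cares only about self; care_weight = 1 -> weighs
    everyone's payoff equally with its own.  Most people sit in between."""
    return own_payoff + care_weight * sum(others_payoffs)

others = [5.0, 2.0, 1.0]        # hypothetical payoffs to three other people
for w in (0.0, 0.3, 1.0):       # Scrooge, typical person, Saint
    print(w, utility(10.0, others, w))
```

In every case there is still exactly one function being maximized, which is all the word “selfish” is being used to mean here.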
And my two questions still stand, on conflict and the LHC.
(Interesting post, by the way!)
you probably meant “people who care a lot about everyone will be out-competed by people who care about nobody but themselves.”
No, I didn’t mean that. This is, I think, the fifth time I’ve denied saying this on Less Wrong. I’ve got to find a way of saying this more clearly. I was arguing against people who think that rational agents are not “selfish” in the sense that I’ve described elsewhere in these comments. If it helps, I’m using the word “selfish” in such a way that an agent could consciously and strongly desire to help other people, and still be “selfish”.
On life-and-death conflicts, I did give such an argument, but very briefly:
Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.
I realize this isn’t enough for someone who isn’t already familiar with the full argument, but it’s after midnight and I’m going to bed.
On the LHC, why are the next hundred thousand years a special case? And again, under what conditions should the LHC run?
The next 100,000 years are a special case because we may learn most of what we will learn over the next billion years in the next 100,000 years. During this period, the risks of something like running the LHC are probably outweighed by how much the knowledge acquired as a result will help us estimate future risks, and figure out a solution to the problem.
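A toy expected-value sketch of that trade-off, with every number invented purely for illustration; whether the conclusion actually holds depends on magnitudes that nobody here claims to know:

```python
# Toy comparison (all numbers hypothetical): a tiny immediate risk from an
# experiment versus a reduction in long-run extinction risk bought by the
# knowledge the experiment produces.

P_DISASTER_NOW = 1e-9    # assumed chance the experiment itself is catastrophic
RISK_REDUCTION = 1e-6    # assumed drop in long-run extinction probability
                         # due to what the experiment teaches us
VALUE_AT_STAKE = 1.0     # normalize "everything we care about" to 1

expected_cost    = P_DISASTER_NOW * VALUE_AT_STAKE
expected_benefit = RISK_REDUCTION * VALUE_AT_STAKE

print(expected_benefit > expected_cost)  # True under these assumed numbers
```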
My confusion isn’t coming from the term selfish, but from the term unselfish agent. You clearly suggested that such a thing exists in the quoted statement, and I have no idea what this creature is.
On life-and-death conflicts, sorry if I’m asking about something widely known by everyone else, but I wouldn’t mind a link or elaboration if you find the time. I agree that people will have conflicts, both as a result of human nature and of finite resources, but I don’t see why conflicts must always be deadly.
During this period, the risks of something like running the LHC are probably outweighed by how much the knowledge acquired as a result will help us estimate future risks, and figure out a solution to the problem.
You just said the opposite of what you said in your original post here, that the LHC was turned on for no practical advantage.
My confusion isn’t coming from the term selfish, but from the term unselfish agent. You clearly suggested that such a thing exists in the quoted statement, and I have no idea what this creature is.
I wrote, “Even if you don’t agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents.” The “unselfish agent” is a hypothetical that I don’t believe in, but that the imaginary person I’m arguing with believes in; and I’m saying, “Even if there were such an agent, it wouldn’t be competitive.”
My argument was not very clear. I wouldn’t worry too much over that point.
You just said the opposite of what you said in your original post here, that the LHC was turned on for no practical advantage.
No; I said, “no practical advantage that I’ve heard of yet.” First, the word “practical” means “put into practice”, so that learning more theory doesn’t count as practical. Second, “that I’ve heard of yet” was a qualifier because I suppose that some practical advantage might result from the LHC, but we might not know yet what that will be.
If “selfish” (as you use it) is a word that applies to every agent without significant exception, why would you ever need to use the word? Why not just say “agent”? It seems redundant, like saying “warm-blooded mammal” or something.
Yes, it’s redundant. I explained why I used it nonetheless in the great-great-great-grandparent of the comment you just made. Summary: You might say “warm-blooded mammal” if you were talking with people who believed in cold-blooded mammals.
Someone who believes in cold-blooded mammals is either misusing the term “mammal” or the term “cold-blooded” or both, and I don’t think I’d refer to “cold-blooded mammals” without addressing the question of where that misunderstanding lies. If people don’t understand you when you say “selfish” (because, if nothing else, I think you are using an unpopular definition), why don’t you leave it out or try another word? If I were talking to someone who insisted that mammals were cold-blooded because they thought “warm” was synonymous with “water boils at this temperature” or something, I’d probably first try to correct them—which you seem to have attempted for “selfish”, with mixed results—and then give up and switch to “endothermic”.
Sounds like good advice.
I read this as saying that a utility function which is directly dependent on another’s utility is not a utility function.
No; I meant that each agent has a utility function, and tries to maximize that utility function.
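A minimal sketch of what that means, with hypothetical payoffs and a made-up “care” weight: the utility function below depends directly on someone else’s welfare, yet there is still one function and one agent maximizing it.

```python
# Illustrative sketch: an agent picks the action that maximizes its own
# utility function, and that function happens to include a term for a
# friend's payoff.  Caring about the friend doesn't create a second agent.

actions = {
    "keep":  {"me": 10.0, "friend": 0.0},
    "share": {"me": 8.0,  "friend": 6.0},
}

CARE = 0.5  # hypothetical weight the agent puts on the friend's payoff

def my_utility(outcome):
    return outcome["me"] + CARE * outcome["friend"]

best = max(actions, key=lambda a: my_utility(actions[a]))
print(best)  # "share": helping the friend is what maximizes *my* function
```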
If we can find an evolutionarily-stable cognitive makeup for an agent that allows it to have a utility function that weighs the consequences in the distant future equally with the consequences to the present, then we may be saved. In other words, we need to eliminate time-discounting.
One thing I didn’t explain clearly is that uncertainty alone may provide enough time-discounting to make universe-death inevitable. Because you’re more and more uncertain what the impact of a decision will be the farther you look into the future, you weigh that impact less and less the farther into the future you go.
But maybe this is not inevitably the right thing to do, if you can find a way to predict future impacts that is uncertain, but also unbiased!
EDIT: No. Unbiased doesn’t cut it.
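To make the uncertainty-as-discounting point above concrete, here is a minimal sketch under one assumption of mine: a fixed, independent per-year chance that a long-range forecast simply stops being applicable.

```python
# Illustrative sketch: if each year there is an independent chance q that
# your model of the far future no longer applies, the weight you can put on
# a consequence t years out shrinks like (1 - q) ** t.  Uncertainty alone
# then behaves like an exponential time-discount.

q = 0.001  # hypothetical per-year chance the forecast becomes irrelevant

def effective_weight(years_out: int) -> float:
    return (1.0 - q) ** years_out

for t in (10, 1_000, 100_000, 1_000_000_000):
    print(t, effective_weight(t))   # ~0.99, ~0.37, ~0, 0.0
```

Even with a very small q, consequences a billion years out get essentially zero weight.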