you probably meant “people who care a lot about everyone will be out-competed by people who care about nobody but themselves.”
No, I didn’t mean that. This is, I think, the fifth time I’ve denied saying this on Less Wrong; I’ve got to find a way of saying it more clearly. I was arguing against people who think that rational agents are not “selfish” in the sense I’ve described elsewhere in these comments. If it helps, I’m using the word “selfish” in such a way that an agent could consciously and strongly desire to help other people and still be “selfish”.
On life-and-death conflicts, I did give such an argument, but very briefly:
Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.
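To make the direction of that pressure concrete, here is a minimal toy replicator sketch; the payoff numbers are made up, and it assumes a contested resource is worth more than a fight costs (V > C):

```python
# Toy replicator-dynamics sketch (illustrative only): two strategies compete
# for a resource of value V. "Fighters" contest it; "avoiders" back down.
# The key assumption is V > C, i.e. the resource is worth more than a fight
# costs; all the numbers are made up for illustration.

V, C = 2.0, 1.0      # assumed resource value and cost of a fight (V > C)
p_avoid = 0.99       # initial share of conflict-avoiders in the population
generations = 200

for _ in range(generations):
    p_fight = 1.0 - p_avoid
    # Expected payoff of each strategy against the current population mix
    # (standard Hawk-Dove payoffs: two fighters split V minus the cost of
    # the fight; a fighter takes V from an avoider; two avoiders split V).
    payoff_fight = p_fight * (V - C) / 2 + p_avoid * V
    payoff_avoid = p_fight * 0.0 + p_avoid * V / 2
    mean_payoff = p_fight * payoff_fight + p_avoid * payoff_avoid
    # Discrete replicator update: each strategy's share grows in proportion
    # to how its payoff compares with the population average.
    p_avoid *= payoff_avoid / mean_payoff

print(f"avoider share after {generations} generations: {p_avoid:.4f}")
```

Under the opposite assumption (a fight costs more than the prize is worth), avoiders are not driven out, so that parameter choice is doing real work in this sketch.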
I realize this isn’t enough for someone who isn’t already familiar with the full argument, but it’s after midnight and I’m going to bed.
On the LHC, why are the next hundred thousand years a special case? And again, under what conditions should the LHC run?
The next 100,000 years are a special case because we may learn most of what we will learn over the next billion years in the next 100,000 years. During this period, the risks of something like running the LHC are probably outweighed by how much the knowledge acquired as a result will help us estimate future risks, and figure out a solution to the problem.
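To show only the form of that comparison (every number below is invented for illustration and is not an estimate of any actual risk):

```python
# Purely illustrative sketch of the tradeoff's form; every number below is
# invented for the example and is NOT an estimate of any actual risk.
p_catastrophe_from_experiment = 1e-9   # assumed chance the experiment itself is catastrophic
baseline_future_risk          = 1e-3   # assumed long-run risk if we stay ignorant
fraction_of_risk_removed      = 0.10   # assumed share of that risk the knowledge lets us avoid

expected_cost    = p_catastrophe_from_experiment
expected_benefit = baseline_future_risk * fraction_of_risk_removed

print(f"expected cost   : {expected_cost:.1e}")
print(f"expected benefit: {expected_benefit:.1e}")
print("worth running" if expected_benefit > expected_cost else "not worth running")
```

Nothing here argues that the numbers actually come out that way; it only shows the shape of the tradeoff I mean.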
My confusion isn’t coming from the term “selfish”, but from the term “unselfish agent”. In the quoted statement you clearly suggested that such a thing exists, and I have no idea what this creature is.
On life-and-death conflicts: sorry if I’m asking about something everyone else here already knows, but I wouldn’t mind a link or an elaboration if you find the time. I agree that people will have conflicts, both because of human nature and because of finite resources, but I don’t see why those conflicts must always be deadly.
During this period, the risks of something like running the LHC are probably outweighed by how much the knowledge acquired as a result will help us estimate future risks, and figure out a solution to the problem.
You just said the opposite of what you said in your original post here, that the LHC was turned on for no practical advantage.
My confusion isn’t coming from the term “selfish”, but from the term “unselfish agent”. In the quoted statement you clearly suggested that such a thing exists, and I have no idea what this creature is.
I wrote, “Even if you don’t agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents.” The “unselfish agent” is a hypothetical that I don’t believe in, but that the imaginary person I’m arguing with believes in; and I’m saying, “Even if there were such an agent, it wouldn’t be competitive.”
My argument was not very clear. I wouldn’t worry too much over that point.
You just said the opposite of what you said in your original post here, that the LHC was turned on for no practical advantage.
No; I said, “no practical advantage that I’ve heard of yet.” First, the word “practical” means “put into practice”, so that learning more theory doesn’t count as practical. Second, “that I’ve heard of yet” was a qualifier because I suppose that some practical advantage might result from the LHC, but we might not know yet what that will be.
If “selfish” (as you use it) is a word that applies to every agent without significant exception, why would you ever need to use the word? Why not just say “agent”? It seems redundant, like saying “warm-blooded mammal” or something.
Yes, it’s redundant. I explained why I used it nonetheless in the great-great-great-grandparent of the comment you just made. Summary: You might say “warm-blooded mammal” if you were talking with people who believed in cold-blooded mammals.
Someone who believes in cold-blooded mammals is misusing the term “mammal”, the term “cold-blooded”, or both, and I don’t think I’d refer to “cold-blooded mammals” without addressing where that misunderstanding lies. If people don’t understand you when you say “selfish” (because, if nothing else, I think you are using an unpopular definition), why not leave it out or try another word? If I were talking to someone who insisted that mammals were cold-blooded because they thought “warm” was synonymous with “water boils at this temperature” or something, I’d probably first try to correct them (which you seem to have attempted for “selfish”, with mixed results) and then give up and switch to “endothermic”.
Sounds like good advice.