Nonsense. Hunter A kills hunter B, takes his wives, his meat, and his cave and lives in it happily thereafter.
“Overall” means “Combining the utility-analog of both parties”, not “More utility-analog for a given party”. With only one hunter, there are fewer kills/less meat overall, at the least.
Nope, I strongly disagree. To take a trivial example, Alice refrains from stealing Bob’s car because she thinks she’ll be caught and sent to prison. Alice is NOT “cooperating” with Bob; she is reacting to incentives (in this case, the threat of imprisonment) which have nothing to do with the prisoner’s dilemma.
The incentives are the product of breaking the prisoner’s dilemma—the “government altered the payoff matrix” and all that. Etiquette, ethics, and law are escalating tiers of rules, and of punishments for breaking those rules, whose core purpose is to alter the payoffs for defection: from something as subtle as the placement of utensils at a dinner table to discourage veiled threats against the other guests, with less desirable seating as the punishment for failing to live up to the standards of etiquette, up to shooting somebody for escalating a police encounter one time too many in an attempt to escape punishment.
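To make the “government altered the payoff matrix” idea concrete, here is a minimal sketch in Python. The payoff numbers and the size of the fine are assumptions chosen purely for illustration; the point is that a large enough penalty attached to defection turns cooperation into the dominant choice.

```python
# Illustrative only: the payoff numbers and the fine are assumed, not taken from the thread.
# Payoffs are (row player, column player) for a standard one-shot prisoner's dilemma.
PD = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # mutual defection
}

def with_punishment(payoffs, fine):
    """Return a new game in which every defection costs the defector `fine`."""
    adjusted = {}
    for (a, b), (pa, pb) in payoffs.items():
        adjusted[(a, b)] = (pa - (fine if a == "D" else 0),
                            pb - (fine if b == "D" else 0))
    return adjusted

def best_reply(payoffs, other_choice):
    """The row player's best reply, given the column player's choice."""
    return max(("C", "D"), key=lambda mine: payoffs[(mine, other_choice)][0])

print(best_reply(PD, "C"), best_reply(PD, "D"))              # D D: defection dominates in the bare game
governed = with_punishment(PD, fine=3)
print(best_reply(governed, "C"), best_reply(governed, "D"))  # C C: cooperation now dominates
```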
I am not a utilitarian. I don’t understand how you are going to combine the utils of both parties.
With one hunter less, there are fewer kills but fewer mouths to feed as well.
The incentives are the product of breaking the prisoner’s dilemma
If it’s broken, it’s not a prisoner’s dilemma situation any more. If you want to argue that it exists as a counterfactual I’ll agree and point out that a great variety of things (including ravenous pink unicorns with piranha teeth) exist as a counterfactual.
I am not a utilitarian. I don’t understand how you are going to combine the utils of both parties.
I’m also not a utilitarian, and at this point you’re just quibbling over semantics rather than making any kind of coherent point. Of course you can’t combine the utils, that’s the -point- of the problem. Arguing that a cooperate/defect outcome results in the most gain for the defector is just repeating part of the problem statement of the prisoner’s dilemma.
If it’s broken, it’s not a prisoner’s dilemma situation any more. If you want to argue that it exists as a counterfactual I’ll agree and point out that a great variety of things (including ravenous pink unicorns with piranha teeth) exist as a counterfactual.
Please, if you would, maintain the context of the conversation taking place. This gets very tedious when I have to repeat everything that was said in every previous comment. http://lesswrong.com/lw/m6b/thoughts_on_minimizing_designer_baby_drama/cdaa ← This is where this chain of conversation began. If this is your response, you’re doing nothing but conceding the point in a hostile and argumentative way.
Then I have no idea what you meant by “Cooperation always pays more overall, defection pays the defector better”—what is the “more overall” bit?
This is where this chain of conversation began
Yes, and I still don’t get LW’s obsession with it. You are just providing supporting examples by claiming that everything is PD and only the government’s hand saves us from an endless cycle of defections.
I will repeat my assertion that in real life, the great majority of choices people make are NOT in the PD context. This might or might not be different in the counterfactual anarchy case where there is no government, but in reality I claim that PD is rare and unusual.
So Lumifer, I appreciate the time you’ve taken to engage on this thread. I think the topic is an important one and it’s great to see more people discussing it. But...
I agree with OrphanWilde that you would be more pleasant to engage with if you tried to meet people halfway during discussions. Have you read Paul Graham on disagreement? The highest form of disagreement is to improve your opponent’s argument, then refute it. If we’re collaborating to figure out the truth, it’s possible for me to skip spelling out a particular point I’m making in full detail and trust that you’re a smart person and you can figure out that part of the argument. (That’s not to say that there isn’t a flaw in that part of the argument. If you understand the thrust of the argument and also notice a flaw, pointing out the flaw is appreciated.) Being forced to spell things out, especially repeatedly, can be very tedious. Assume good faith, principle of charity, construct steel men instead of straw men, etc. I wrote more on this.
You seem like a smart guy, and I appreciate the cynical perspective you have to offer. But I think I could get even more out of talking to you if you helped me make my arguments for me, e.g. the way I tried to do for you here and here. Let’s collaborate and figure out what’s true!
I value speaking plainly and clearly.
In real life (aka meatspace) I usually have to control my speech for nuances, implications, connotations, etc. It is not often that you can actually tell a fucking idiot that he is a fucking idiot.
One of the advantages of LW is that I can call a “digging implement named without any disrespect for oppressed people of color” a “spade” and be done with it. I value this advantage and use it. Clarity of speech leads to clarity of thought.
If I may make a recommendation about speaking to me, it would be useful to assume I am not stupid (most of the time, that is :-/). If I’m forcing you to “spell things out” that’s because there is a point to it which you should be able to discover after a bit of thought and just shortcut to the end. If I’m arguing with you this means I already disagree with some issue and the reason for the arguments is to figure out whether it’s a real (usually value-based) disagreement, a definition problem, or just a misunderstanding. A lot of my probing is aimed at firming up and sharpening your argument so that we can see where in that amorphous mass the kernel of contention is. I do steelman the opponents’ position, but if the steelman succeeds, I usually just agree and move to the parts where there is still disagreement or explicitly list the conditions under which the steelman works.
In arguments I mostly aim to define, isolate, and maximally sharpen the point of disagreement—because only then can you really figure out what the disagreement is about and whether it’s real or imaginary. I make no apologies for that—I think it’s good practice.
Cool, it sounds like we’re mostly on the same page about how disagreements should proceed, in theory at least. I’m a bit surprised when you say that your disagreements are usually values-based. It seems like in a lot of cases when I disagree with people it’s because we have different information, and over the course of our conversation, we share information and often converge on the same conclusion.
If I’m forcing you to “spell things out” that’s because there is a point to it which you should be able to discover after a bit of thought and just shortcut to the end.
So maybe this is what frustrated me about our previous discussion… I think I would have appreciated a stronger pointer from you as to where our actual point of disagreement might lie. I’d rather you explain the weakness you perceive in my argument than force me to discover it for myself. (Having arguments is frustrating enough without adding a puzzle-solving aspect.) For example, if you had said something like “communism was a movement founded by people with genes for altruism, and look where that went” earlier in our discussion, I think I would have appreciated that.
If you want, try predicting how I feel about communism, then rot13 the rest of this paragraph. V guvax pbzzhavfz vf n snyfvsvrq ulcbgurfvf ng orfg. Fbpvrgl qrfvta vf n gevpxl ceboyrz, fb rzcvevpvfz vf xrl. Rzcvevpnyyl, pbzzhavfg fbpvrgvrf (bapr gurl fpnyr cnfg ivyyntr-fvmrq) qba’g frrz irel shapgvbany, juvpu vf fgebat rivqrapr gung pbzzhavfz vf n onq zbqry. V qba’g guvax jr unir n inyhrf qvfnterrzrag urer—jr frrz gb or va nterrzrag gung pbzzhavfz naq eryngrq snvyher zbqrf ner onq bhgpbzrf. Engure, V guvax jr unq na vasb qvfpercnapl, jvgu lbh univat gur vafvtug gung nygehvfz trarf zvtug yrnq gb pbzzhavfz naq zr ynpxvat vg. Gur vyyhfvba bs genafcnerapl zvtug unir orra va bcrengvba urer.
I’m a bit surprised when you say that your disagreements are usually values-based.
I don’t know if they are “usually” value-based, but those are the serious, unresolvable ones. If the disagreement is due to miscommunication (e.g. a definitions issue), it’s easy to figure out once you get precise. If the disagreement is about empirical reality, well, you should stop arguing and go get a look at the empirical reality. But if it’s value-based, there is not much you can do.
Besides, a lot of value-based disagreements masquerade as arguments about definitions or data.
I think I would have appreciated a stronger pointer from you as to where our actual point of disagreement might lie.
Mea culpa. I do have a tendency to argue by questions—which I’m generally fine with—but sometimes it gets… excessive :-) I know it can be a problem.
how I feel about communism
Well, it’s 2015 and you’re an American, I think, so it’s highly unlikely you have (or are willing to admit) a liking for communism :-)
But the issue here is this: some people argue that communism failed, yes, but it was a noble and righteous dream which was doomed by imperfect, selfish, nasty people. If only the people were better (higher level of consciousness and all that), communism would work and be just about perfect.
Now, if you can genetically engineer people to be suitable for communism...
Then I have no idea what you meant by “Cooperation always pays more overall, defection pays the defector better”—what is the “more overall” bit?
The total payoff—the combined benefits both players receive—is better. This -matters-, because it’s possible to -bribe- cooperation. So one hunter pays the other hunter meat -not- to kill him and take his wife, or whatever. Extortionate behavior is itself another level of PD that I don’t care to get into.
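As a toy illustration of both points (the combined payoff and the bribe), here is a sketch with assumed payoff numbers; none of the figures come from the thread. The surplus from mutual cooperation is large enough that one player can pay the other not to defect and both still end up ahead of mutual defection.

```python
# Assumed payoffs: R = reward for mutual cooperation, T = temptation to defect,
# S = sucker's payoff, P = punishment for mutual defection.
R, T, S, P = 4, 5, 0, 1            # satisfies T > R > P > S and 2R > T + S

# Combined payoff of each outcome: mutual cooperation is best "overall".
totals = {"CC": 2 * R, "CD": T + S, "DC": T + S, "DD": 2 * P}
print(totals)                       # {'CC': 8, 'CD': 5, 'DC': 5, 'DD': 2}

# Because that cooperative surplus exists, player 1 can offer player 2 a side payment
# ("meat not to kill him") that makes cooperating at least as good as defecting,
# while player 1 still does better than under mutual defection.
bribe = T - R                       # smallest transfer that compensates player 2 for not defecting
p2_if_cooperating = R + bribe       # 5: matches what defection would have paid player 2
p1_after_paying = R - bribe         # 3: still above player 1's mutual-defection payoff of 1
print(bribe, p2_if_cooperating, p1_after_paying)
```

With these conventions, a transfer in that range exists whenever 2R > T + P, i.e. whenever the cooperative surplus is big enough to cover the cost of buying off the would-be defector.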
Yes, and I still don’t get LW’s obsession with it. You are just providing supporting examples by claiming that everything is PD and only the government’s hand saves us from an endless cycle of defections.
Okay. This conversation? This is a PD. You’re defecting while I’m cooperating. You’re moving the goalposts and changing the conversational topic in an attempt to be right about something, violating the implicit rules of a conversation, while I’ve been polite and haven’t called you out on it; since this is an iterated Prisoner’s Dilemma, I can react to your defection by defecting myself. The karma system? It’s the government. It changes the payoffs. So what’s the relevance? It helps us construct better rules and plan for behaviors.
Do you also show up to parties uninvited? Yell at managers until they give in to your demands? Make shit up about people so you have something to add to conversations? Refuse to tip waitstaff, or leave subpar tips? These are all defections in variations on the Prisoner’s Dilemma, usually asymmetric variations.
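On the iterated-PD point (answering a defection with a defection), here is a minimal sketch; the payoff table and round count are assumptions, and tit-for-tat stands in for the “I can defect back” strategy.

```python
# Illustrative only: the payoff table and number of rounds are assumed.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)   # each sees only the other's past moves
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (30, 30): cooperation is sustained
print(play(tit_for_tat, always_defect))   # (9, 14): one exploitation, then mutual defection
```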
I will repeat my assertion that in real life, the great majority of choices people make are NOT in the PD context. This might or might not be different in the counterfactual anarchy case where there is no government, but in reality I claim that PD is rare and unusual.
And I will repeat my assertion that in this conversation, we aren’t having that discussion. It -might- matter in a counterfactual case where we were talking about whether or not PD payoff matrices are a good model for a society with a government, but your actual claim was that PD didn’t apply in the first place, not that it doesn’t apply now.
The total payoff—the combined benefits both players receive
Sigh. So you’re looking at combined benefits, aka “utility-analog of both parties”, aka utils, about which you just said “of course you can’t combine the utils”.
Okay. This conversation? This is a PD.
Bullshit.
Instead of handwaving at each other, let’s define PD and then see what qualifies. I can start.
I’ll generalize PD—since we’re talking about social issues—to multiple agents (and call it GPD).
So, a prisoner’s dilemma is a particular situation that is characterized by the following:
Multiple agents (2 or more) have to make a particular choice after which they receive the payoffs.
All agents know they are in the GPD. There are no marks, patsies, or innocent bystanders.
All agents have to make a choice between the two alternatives, conventionally called cooperate (C) or defect (D). They have to make a choice—not making a choice is not an option, and neither is picking E. In some situations it doesn’t matter (when D is defined as not-C), in some it does.
All agents make their choice without knowing what other agents chose and before anyone receives the payoff.
For each agent the payoff from choosing D is known and fixed: decisions of other agents do not change it. In other words, if any agent chooses D, he is guaranteed to receive the D payoff known to him.
For each agent the payoff from choosing C varies depending on the decisions of other agents. If many other agents also chose C, the C payoff is high, more than D. If only a few other agents chose C, the C payoff is low, less than D (this is the generalization to multiple agents).
Given this definition (which is, more or less, the basis on which I’ve been arguing in this subthread), this conversation (or any single comment in it) is nowhere near a PD. Nor are the great majority of real-life situations calling for a choice.
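For concreteness, here is a minimal sketch of the payoff rule in the last two points of the definition above. The specific D payoff and the shape of the cooperator payoff are assumptions, chosen only so that a lone cooperator does worse than a defector while near-universal cooperation does better.

```python
# A sketch of the GPD payoff rule described above; the numbers are assumed for illustration.
def gpd_payoff(my_choice, n_cooperators, n_agents, d_payoff=3.0):
    """Defectors get a fixed, known payoff regardless of what anyone else does.
    A cooperator's payoff rises with the fraction of agents who also cooperated,
    from below the D payoff (few cooperators) to above it (many cooperators)."""
    if my_choice == "D":
        return d_payoff
    fraction_cooperating = n_cooperators / n_agents
    return 1.0 + 4.0 * fraction_cooperating

# With 10 agents: a lone cooperator does worse than a guaranteed defector,
# but if everyone cooperates, each cooperator does better than the D payoff.
print(gpd_payoff("C", n_cooperators=1, n_agents=10))    # 1.4 < 3.0
print(gpd_payoff("C", n_cooperators=10, n_agents=10))   # 5.0 > 3.0
print(gpd_payoff("D", n_cooperators=9, n_agents=10))    # 3.0, independent of the others
```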
Judging by the reactions of some people in this thread, for a lot of LWers, their knowledge of game theory starts and ends with PD.