This discussion is triggering an interesting thought. To learn how willpower works in individuals, we should study how groups come to decisions and stick by them.
(Because we’re modeling within-individual conflict as between-subagent conflict, and thus making no relevant distinction between individuals and their subagents. It’s subagents all the way down.)

So what do we know about the latter?
People get along best when their interactions are non-zero-sum.
Which is why, as I said in the comment above, “you have to actually surface your true desires and objections in order to resolve them.”
This need, incidentally, comes up quite often in books on sales, negotiation, etc.: in order to succeed, you need to find out what the other person really wants/needs (not just what they say they want), and then find a way to give them that, in exchange for what you really want/need (not just what you’d like to get).
In some cases, it may be easier to do this with another person than with yourself, because, as Feynman says, “you are the easiest person to fool.” There’s also the additional problem that by self-alienating (i.e., perceiving one of your desires as “other”, “bad”, or “not you”) you can make it virtually impossible to negotiate in good faith.
Actually, scratch that. I hate using “negotiate” as a metaphor for this, precisely because it implies an adversarial, zero-sum interaction. The other pieces of what you want are not alien beings trying to force you to give something up. They are you, even if you pretend they aren’t, and until you see through that, you won’t see any of the possibilities for resolving the conflict that get you more of everything you want.
Also, while the other party in a negotiation may not tell you what they really want, even if you ask, in internal conflict resolution you will get an answer if you sincerely ask… especially if you accept all your desires and needs as being truly your own, even if you don’t always like the consequences of having those desires or needs.
Well, strictly from the theoretical perspective of rational-agent game theory, we know quite a lot.
Subagents need to communicate so as to coordinate. Cooperation works best when there are no secrets.
On the other hand, it is often in the interests of the individual agents to keep some things secret from other agents. There is a fascinating theory of correlated equilibria and mechanism design to enable the sharing of the information you want to share and the hiding of information you wish to keep secret.
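To make the correlated-equilibria point concrete, here is a minimal sketch in Python (the game and payoffs are my illustrative choices, not anything from this thread): a mediator in the game of Chicken privately recommends a move to each player, and neither player can gain by disobeying, even though neither learns the other’s recommendation.

```python
# Minimal sketch of the "traffic light" correlated equilibrium in Chicken:
# a mediator privately recommends a move to each player, and obeying the
# recommendation is optimal for both. Payoffs are illustrative choices.

DARE, CHICKEN = 0, 1
MOVES = [DARE, CHICKEN]

# payoff[player][row_move][col_move]
PAYOFF = [
    [[0, 7],   # row player: (Dare, Dare)=0, (Dare, Chicken)=7
     [2, 6]],  # row player: (Chicken, Dare)=2, (Chicken, Chicken)=6
    [[0, 2],   # column player (mirror image of the row player)
     [7, 6]],
]

# Mediator's joint distribution over recommendations: never (Dare, Dare).
SIGNAL = {(DARE, CHICKEN): 1/3, (CHICKEN, DARE): 1/3, (CHICKEN, CHICKEN): 1/3}

def is_correlated_equilibrium(signal, payoff):
    """Check that obeying the mediator is optimal for each player,
    conditional on each recommendation that player might receive."""
    for player in (0, 1):
        for rec in MOVES:
            # Joint outcomes consistent with this player being told `rec`.
            consistent = {joint: p for joint, p in signal.items()
                          if joint[player] == rec and p > 0}
            total = sum(consistent.values())
            if total == 0:
                continue

            def expected(move):
                # Expected payoff of playing `move`, given the conditional
                # distribution over the other player's recommendation.
                ev = 0.0
                for joint, p in consistent.items():
                    other = joint[1 - player]
                    pair = (move, other) if player == 0 else (other, move)
                    ev += (p / total) * payoff[player][pair[0]][pair[1]]
                return ev

            if any(expected(dev) > expected(rec) + 1e-12 for dev in MOVES):
                return False
    return True

print(is_correlated_equilibrium(SIGNAL, PAYOFF))  # True for this signal
```

The mediator’s signal shares exactly the information each player needs (their own recommendation) while hiding the rest (the other’s recommendation), which is the flavor of selective sharing and hiding alluded to above.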
Punishment of one agent by other agents, and threats of punishment, are important in bargaining and in incentivizing adherence to bargains. There is no known way to dispense with threatened punishment, and probably no way to dispense entirely with real punishment. Rational cooperation (justified by reciprocity) cannot be built on any other basis.
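As a concrete illustration of the role of threatened punishment, here is a small sketch (the payoffs and discount factors are my own illustrative choices): in the repeated Prisoner’s Dilemma, the grim-trigger threat of permanent defection sustains cooperation, but only when players value the future enough. Note that on the equilibrium path the punishment is never carried out; the threat alone does the work.

```python
# Sketch: in the repeated Prisoner's Dilemma, the grim-trigger threat
# ("defect forever if you ever defect") makes cooperation rational, but
# only when the future matters enough (discount factor delta high enough).

T, R, P, S = 5.0, 3.0, 1.0, 0.0  # standard PD payoffs: T > R > P > S

def cooperate_forever(delta):
    """Discounted value of mutual cooperation sustained by the threat."""
    return R / (1 - delta)

def defect_once(delta):
    """Discounted value of defecting now, then being punished forever."""
    return T + delta * P / (1 - delta)

for delta in (0.3, 0.5, 0.7):
    sustained = cooperate_forever(delta) >= defect_once(delta)
    print(f"delta={delta}: cooperation sustained by the threat? {sustained}")

# Threshold: delta >= (T - R) / (T - P) = 0.5 for these payoffs.
```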
To my mind, the idea of modeling the mind as a society of autonomous agents is definitely worth exploring. And I see no reason not to treat at least some of those component agents as rational.
Dennett has been on a competitive neuron kick recently. Which would make game theory (or variants of it with applicable assumptions) a central part of understanding how the brain works.

I’m curious what he will come up with.
Rational cooperation (justified by reciprocity) cannot be built on any other basis.
You can get cooperation through kin selection, though. If you are dealing with your brother, reciprocity can be dispensed with. Thus the interest in things like showing others your source code.
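(The standard formalization here is Hamilton’s rule: kin-directed altruism is favored when rB > C, where r is relatedness (1/2 for a full brother, 1 for an identical twin), B is the benefit to the recipient, and C is the cost to the actor. Only in the r = 1 limit can reciprocity be dispensed with entirely.)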
Yep. Fully agree, assuming you meant twin brother. I originally left the parenthetical qualification out, then added it when I thought of what you just now said.
It seems as though a lot of your third point unravels, though.
If you are a machine, you can—under some circumstances—rationally arrange cooperation with other machines without threats of punishment. The procedure involves exhibiting your actual source code (and there are ways of doing that convincingly). No punishment is needed, and it can work even if agents are unrelated, and have different goals.
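For concreteness, here is a toy Python version of that procedure (my sketch, and it covers only the simplest case, agents running identical source; the unrelated-goals case needs subtler constructions): each program is shown the other’s source and cooperates exactly when it matches its own.

```python
# Sketch of cooperation via exhibited source code: each program receives
# the other's source and cooperates exactly when that source matches its
# own. Mutual inspection yields cooperation with no punishment at all.

import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is verifiably running this same program."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Unconditional defector, for contrast."""
    return "D"

def play(agent_a, agent_b):
    # Each side "exhibits" its source to the other before moving.
    src_a, src_b = inspect.getsource(agent_a), inspect.getsource(agent_b)
    return agent_a(src_b), agent_b(src_a)

print(play(clique_bot, clique_bot))  # ('C', 'C'): cooperation, no threats
print(play(clique_bot, defect_bot))  # ('D', 'D'): no exploitation either
```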
None of my third point unravels. I was talking about bargaining. Bargaining between rational agents with different goals requires threats, if only threats not to make a bargain—threats not to share source.
You talk about cooperation. Certainly cooperation is possible without threats. But what do you cooperate in doing? You need to bargain so as to jointly decide that.
I’m inclined to ask you what you mean by “threat”.
However, rather than do that, please imagine two agents bargaining over the price of something, who are prevented from “threatening” each other by a policeman, applying your preferred definition of the term—whatever that may be.

Do you think that the policeman necessarily prevents a bargain being reached?
I’m inclined to ask you what you mean by “threat”.
I’m inclined to refer you to the standard literature of game theory. I assure you, you will not be harmed by the information you encounter there.
...who are prevented from “threatening” each other by a police man …
I will at least mention that the definition of “threat” is inclusive enough that a constable would not always intervene to prevent a threat.
… Do you think that the police man necessarily prevents a bargain being reached?
No, the constable’s intervention merely alters the bargaining position of the players, thus leading to a different bargain being reached. Very likely, though, one or the other of the players will be harmed by the intervention and the other player helped. Whether this shift in results is or is not a good thing is a value judgment that not even the most ideological laissez-faire advocate would undertake without serious misgivings.
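To see how an intervention merely shifts the bargain rather than preventing it, here is a toy calculation (mine; it assumes the Nash bargaining solution as the model of how agreements get picked, which this thread does not commit to): disallowing a threat worsens one player’s fallback position, and the agreed division moves accordingly.

```python
# Toy sketch: with a fixed surplus, the Nash bargaining solution splits it
# according to the disagreement point, so banning a threat (changing one
# side's fallback) shifts the bargain instead of preventing it.

SURPLUS = 10.0

def nash_split(d1, d2):
    """Maximize (u1 - d1) * (u2 - d2) subject to u1 + u2 = SURPLUS.
    Closed form on that line: u1 = (SURPLUS + d1 - d2) / 2."""
    u1 = (SURPLUS + d1 - d2) / 2
    return u1, SURPLUS - u1

print(nash_split(0, 0))  # (5.0, 5.0): symmetric fallbacks, even split
print(nash_split(2, 0))  # (6.0, 4.0): player 1's credible threat improves
                         # her fallback, and the division tracks it
```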
If rational bargainers fail to reach agreement, this is usually because their information is different, thus leading each to believe the other is being unreasonable; it is not because one or another negotiating tactic is disallowed.
ETA: Only after posting this did I look back and see why you asked these questions. It was my statement to the effect that “bargaining requires threats”. Let me clarify. The subject of bargaining includes the subject of threats. A theory of bargaining which attempts to exclude threats is not a theory of bargaining at all.