The things he is talking about in the book are bluffing and psychological warfare.
That may be, but you said:
If we only learn about how to win competitive games, then we have fewer tools for dealing with cooperative situations
Which, to me, is about more than just bluffing and psychological warfare.
Are these useful skills when trying to get something done cooperatively, or do they harm the process if used?
I don’t know. I could easily imagine the answer being both, depending on circumstance, thus making as simple a characterization of them as you seem to be implying pretty difficult.
Possibly we mean different things by cooperative situations. I’m talking about situations where people have to work together to win; you can’t just wipe out or ignore everyone else. This means balancing your goals with others.
This makes the point that cooperative scenarios are harder than purely competitive scenarios, not that we’re particularly bad at them. “Balancing your goals with others” is in the end just another way of saying that your goals positively correlate with theirs. Most big problems (and yes, even Magic) contain agents with goals both positively and negatively correlated with yours, so “cooperative or competitive” is not, in general, a binary proposition. Do you think we’re particularly bad at planning in the presence of others with positively correlated goals?
You can treat solving global warming as having competitive elements, but then you will be less efficient at actually solving the problem by having to spend resources on competing, which could have been used for solving the problem.
If it has competitive elements, then I certainly want to treat it as though it has competitive elements, regardless of my final strategy. But you also seem to be suggesting that approaching an objective competitively is inherently less efficient than approaching it cooperatively. Surely you don’t mean that.
If we only learn about how to win competitive games, then we have fewer tools for
dealing with cooperative situations
Which, to me, is about more than just bluffing and psychological warfare.
Take my comments in light of the context.
I don’t know. I could easily imagine the answer being both, depending on
circumstance, thus making as simple a characterization of them as you seem to
be implying pretty difficult.
I can’t really get a handle on where you are coming from. Are you saying that it is often useful to bluff the people you are cooperating with, or would it be a once-in-a-blue-moon kind of situation? Can you give an example of it helping?
But you also seem to be suggesting that approaching an objective competitively is
inherently less efficient than approaching it cooperatively. Surely you don’t mean that.
Only if you have total knowledge of the situation… Consider the human body: the places where competition helps it to achieve objectives (the brain, possibly, and the immune system) are the portions trying to gain knowledge about the outside world. Can you tell me how competition would help the human body apart from in these situations?
Are you saying that it is often useful to bluff the people you are cooperating with, or would it be a once-in-a-blue-moon kind of situation? Can you give an example of it helping?
Drivers often slow down or stop far ahead of time for pedestrians, wasting more of their time to do so than it costs the pedestrian to wait for the car. When I’m on foot and anticipate this, I often bluff the driver by looking away or pretending to change direction. It’s minor, but effective and quite frequent.
Only if you have total knowledge of the situation
What about perfect knowledge of a prisoner’s dilemma involving non-cooperative agents?
Drivers often slow down or stop far ahead of time for pedestrians, wasting more of their
time to do so than it costs the pedestrian to wait for the car. When I’m on foot and
anticipate this, I often bluff the driver by looking away or pretending to change direction.
It’s minor, but effective and quite frequent.
Could you do it by signaling openly?
What about perfect knowledge of a prisoner’s dilemma involving non-cooperative agents?
What do you mean by non-cooperative agents: that they always defect, or that they don’t communicate? And do the agents have perfect knowledge, or is there a third party?
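For reference, the standard prisoner’s dilemma being discussed can be sketched as follows. The payoff numbers (3, 0, 5, 1) are the conventional textbook values, not anything fixed by the discussion above; the sketch just shows why, even with perfect knowledge, a non-cooperative (always-defecting) best response leads both agents to a worse outcome than mutual cooperation.

```python
# Conventional prisoner's dilemma payoffs (assumed values, not from the thread).
PAYOFFS = {
    # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),  # I defect, they cooperate
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(their_move):
    """With perfect knowledge of the other agent's move, pick my highest payoff."""
    return max("CD", key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection dominates regardless of what the other agent does...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...yet mutual defection is worse for both than mutual cooperation.
print(PAYOFFS[("D", "D")], "vs", PAYOFFS[("C", "C")])  # prints (1, 1) vs (3, 3)
```

This is the sense in which the dilemma resists a clean cooperative-versus-competitive framing: the competitive move is individually optimal yet collectively inefficient.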