I’m only just learning that (apparently) the standard rival of causal decision theory is “evidential decision theory”. So is that the original acausal decision theory, with TDT and UDT just latecomers local to LW? As you can see I am dangerously underinformed about the preexisting theoretical landscape, but I will nonetheless state my impressions.
If I think about a “decision theory” appropriate for real-world decisions, I think about something like expected-utility maximization. There are a number of problems specific to the adoption of an EUM framework. For example, you have to establish a total order on all possible states of the world, and so you want to be sure that the utility function you construct genuinely represents your preferences. But assuming that this has been accomplished, the problem of actually maximizing expected utility turns into a problem of computation, modeling an uncertain world, and so forth.
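A minimal sketch of what EUM amounts to computationally; the scenario, probabilities, and payoffs below are all invented for illustration:

```python
# Toy expected-utility maximization. Everything here is illustrative,
# not part of any particular decision theory.
def expected_utility(action, outcomes, prob, utility):
    # EU(a) = sum over outcomes o of P(o | a) * U(o, a)
    return sum(prob(o, action) * utility(o, action) for o in outcomes)

def best_action(actions, outcomes, prob, utility):
    # Pick the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(a, outcomes, prob, utility))

# Hypothetical example: carry an umbrella or not, with rain at 30%.
outcomes = ["rain", "dry"]
actions = ["umbrella", "no umbrella"]
prob = lambda o, a: 0.3 if o == "rain" else 0.7  # independent of action here
payoff = {("rain", "umbrella"): 0, ("dry", "umbrella"): -1,
          ("rain", "no umbrella"): -10, ("dry", "no umbrella"): 0}
utility = lambda o, a: payoff[(o, a)]

print(best_action(actions, outcomes, prob, utility))  # umbrella
```

Note that the hard parts the comment describes (building a utility function that really encodes your preferences, and modeling P(o | a) for an uncertain world) are exactly the parts this sketch assumes away.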
The problems showing up in these debates about causal vs evidential and causal vs acausal seem to have a very different character. If I am making a practical decision, I expect both to use causal thinking and to rely on evidence. CDT vs EDT then sounds like a debate about which indispensable thing I can dispense with.
Another thing I notice is that the thought experiments which supposedly create problems for CDT all involve extremes that don’t actually happen. Newcomb’s problem involves a superbeing with a perfect capacity to predict your choice, Parfit’s Hitchhiker is picked up by a mind reader who absolutely knows whether you will keep a promise or not, PD against your copy assumes that you and your copy will knowably make exactly the same choice. (At least this last thought experiment is realizable, in miniature, with simple computer programs.) What happens to these problems if you remove the absolutism?
Suppose Omega or Parfit’s mindreader is right only 99% of the time. Suppose your copy makes the same choice as you do only 99% of the time. It seems like a practically relevant decision theory (whether or not you call it CDT) should be able to deal with such situations, because they are only a variation on the usual situation in reality, where you don’t have paranormally assured 100% knowledge of other agents, and where everything is a little inferential and a little uncertain. It seems that, if you want to think about these matters, first you should see how your decision theory deals with the “99% case”, then “take the limit” to the 100% case which defines the traditional thought experiment, and see whether the recommended decisions vary continuously or discontinuously.
All these thought experiments are realizable as simple computer programs, not only PD. In fact the post I linked to shows how to implement Newcomb’s Problem.
The 99% case is not very different from the 100% case; the behavior is continuous. If you’re facing a 99% Omega (or even a 60% Omega) in Newcomb’s Problem, you’re still better off being a one-boxer. That’s true even if both boxes are transparent and you can see what’s in them before choosing whether to take one or two—a fact that should make any intellectually honest CDT-er stop and scratch their head.
No offense, but I think you should try to understand what’s already been done (and why) before criticizing it.
To get to the conclusion that against a 60% Omega you’re better off to one-box, I think you have to put in a strong independence assumption: that the probability of Omega getting it wrong is independent of the ways of thinking that the player is using to make her choice.
I think that’s really the original problem in disguise (it’s a 100% Omega who rolls dice and sometimes decides to reward two-boxing instead of one-boxing). The analysis would look different if all you know is that Omega is right 60% of the time.
How exactly different?
It would become a mind game: you’d have to explicitly model how you think Omega is making the decision.
The problem you’re facing is to maximise P(Omega rewards you|all your behaviour that Omega can observe). In the classical problem you can substitute the actual choice of one-boxing or two-boxing for the ‘all your behaviour’ part, because Omega is always right. But in the ‘imperfect Omega’ case you can’t.
It’s still not clear to me why playing mind games is a better strategy than just one-boxing, even in the 60% case. But I do understand your point about independence assumptions.
Start at 50% then, with Omega no better than chance. For each thought experiment, start with a null version where there’s nothing unusual and where CDT is supposed to work. Then vary the relevant parameter until there’s a problem, and understand what has changed.
That’s part of what the people who have been exploring this problem have already done, and why some posters are upset that you’re asking this without apparently having tried to get up-to-date on any of this.
I don’t see the bridge from ordinary decision problems to the thought experiments. I see extreme scenarios being constructed, and then complicated solutions being proposed just to deal with those scenarios. I don’t consider this a reliable way to arrive at the correct general form of decision theory.
You say that some people have already gone in the other direction, starting with ordinary decision problems and then slowly changing something until ordinary decision theory breaks. If so, great, and I’m sorry I missed it, but where is it? Is it on this site? Somewhere in the literature?
Ah, so you don’t see the utility of thought experiments about traveling near light speed either then?
The analogy with relativity had occurred to me. But we could use another analogy from high-energy physics: There are a very large number of theories which have the standard model (the empirically validated part of particle physics) as their low-energy limit. We can’t just rely on high-energy thought-experiments to figure out the actual high-energy physics. We need to do some real experiments where we start low, ramp up the energy, and see what happens.
Right. We can only use it to rule out incoherent or otherwise “clearly wrong” high-energy physics. But in this analogy, we’ve shown that CDT seems not to be optimal in this extreme case. If we can define a DT that does better than CDT in this case, and no worse in normal cases, we should use it. I don’t think TDT has been well enough defined yet to subject it to all conceivable tests, but anything that follows the same kinds of principles will reproduce CDT in most cases, and do better in this case.
We need to do some real experiments where we start low, ramp up the energy, and see what happens.

Here’s where the analogy falls down—we only need to start low and ramp up the energy because of the difficulties of doing high-energy experiments. (And theory-wise, we extrapolate down from clear differences between theories at high energies to find signatures of small differences at lower energies.) If the extreme energies are accessible (and not crazily dangerous), we can just go ahead and test in that regime. Game theory is math. In math, unlike physics, there is no difference between thought experiments and real experiments. The question of applicability in everyday life is an applied economics / sociology / psychology one. How close are people or situations that appear to be screwy in this Omega-like way to actually being that way?
See my other reply, or the links any others have given you, or Drescher’s handling of acausal means-end links in chapter 7 of Good and Real, which I think I did a good job summarizing here.
It sounds like I’ll have to work through this in my own fashion. As I said, I want to start with a null version, where CDT works—for example, a situation where Omega has no special knowledge and just guesses what your choice was. Obviously two-boxing is the right thing to do in that situation, CDT says so, and I assume that TDT says so too (though it would be nice to see a worked-out derivation in TDT of that conclusion). Then we give Omega some small but nonzero ability to predict what your choice is going to be. At a guess, the optimal strategy here will be a mixed one, one-boxing with probability p and two-boxing with probability (1-p). I think everyone will tell me that CDT always says p should be zero, but is that really so? I’m just not convinced that I need TDT in order to reach the obvious conclusion.
At a guess, the optimal strategy here will be a mixed one, one-boxing with probability p and two-boxing with probability (1-p).

If Omega’s correctness is independent of your thought process, the optimal strategy will be pure, not mixed. As you make Omega more accurate, at some point you switch from pure two-boxing to pure one-boxing.
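A quick way to see why, under that independence assumption: the expected utility of a mixed strategy is linear in the mixing probability p, so the maximum is always at an endpoint. This sketch assumes the standard Newcomb payoffs ($1,000,000 big box, $1,000 small box):

```python
def eu_mixed(p, r):
    # p: probability of one-boxing; r: Omega's accuracy, assumed
    # independent of how the player deliberates.
    eu_one = r * 1_000_000                    # Omega right: $1M; wrong: $0
    eu_two = r * 1_000 + (1 - r) * 1_001_000  # right: $1k; wrong: $1,001k
    return p * eu_one + (1 - p) * eu_two      # linear in p

# Because eu_mixed is linear in p, the best p is 0 or 1,
# flipping as Omega's accuracy r crosses a threshold:
for r in (0.5, 0.6):
    best_p = max((0.0, 0.5, 1.0), key=lambda p: eu_mixed(p, r))
    print(r, best_p)
```

At r = 0.5 the best endpoint is p = 0 (pure two-boxing); by r = 0.6 it has flipped to p = 1 (pure one-boxing), with no interior optimum in between.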
Are you sure about that? If you’re right, that’s the exact transition point I’ve been looking to scrutinize. But what is the point at which you switch strategies?
cousin_it answered as I would, but I’ll go ahead and give the formal calculation anyway. If you start from an Omega accuracy rate r = 50%, that is equivalent to the case of Omega’s choice and yours being uncorrelated (causally or acausally). In that case, two-boxing is optimal, and TDT and CDT both output that (as a pure strategy). As you increase r, CDT continues to output two-box, since its expected utilities don’t change, while TDT will assign increasing optimality (call it TDTO, though it amounts to the same as EU) to one-boxing and decreasing optimality to two-boxing.
TDT will reason as follows:

One-box: TDTO = r*1,000,000 + (1-r)*0 = 1,000,000r

Two-box: TDTO = r*1,000 + (1-r)*1,001,000 = 1,001,000 - 1,000,000r
Solving for TDTO(one-box) > TDTO(two-box), you get that one-boxing is chosen under TDT (and is optimal) whenever r > 50.05%, or whenever Omega has more than 721 nanobits of information (!!!) about your decision theory. (Note, that’s 0.000000721 bits of information.)
Viewed in this light, it should make more sense—do people never have more than 1 microbit of information about your decision theory? (Note: with less drastic differences between the outcomes, the threshold is higher.)
(I don’t think the inclusion of probabilistic strategies changes the basic point.)
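The threshold and the nanobit figure above can be checked directly; this sketch models Omega as a binary channel with error rate 1 - r, which I take to be what the information-content claim assumes:

```python
from math import log2

# Standard Newcomb payoffs, as in the calculation above.
def eu_one_box(r):
    return r * 1_000_000                    # Omega right: $1M; wrong: $0

def eu_two_box(r):
    return r * 1_000 + (1 - r) * 1_001_000  # right: $1k; wrong: $1,001k

# Accuracy where the two expected utilities cross:
# 1,000,000 r = 1,001,000 - 1,000,000 r  =>  r = 1,001,000 / 2,000,000
r_star = 1_001_000 / 2_000_000

# Mutual information (in bits) Omega has about the choice at accuracy r,
# treating Omega as a binary symmetric channel with error rate 1 - r:
def bits(r):
    h = -r * log2(r) - (1 - r) * log2(1 - r)  # binary entropy
    return 1 - h

print(r_star)        # 0.5005
print(bits(r_star))  # ~7.2e-07 bits, i.e. about 721 nanobits
```

So the crossover is at r = 50.05%, and the information Omega needs at that accuracy is indeed on the order of 721 nanobits.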
I had been thinking that the only way to even approximately realize a Newcomb’s-problem situation was with computer programs. But a threshold so low makes it sound as if even a human being could qualify as a fallible Omega, and that maybe you could somehow test all this experimentally. Though even if we had human players in an experiment who were one-boxing and reaping the rewards, I’d still be very wary of supposing that they were winning because TDT is correct. If the Omega player was successfully anticipating the choices of a player who uses TDT, it suggests that the Omega player knows what TDT is. The success of one-boxing in such a situation might be fundamentally due to coordination arising from common concepts, rather than due to TDT being the right decision theory.
But first let me talk about realizing Newcomb’s problem with computer programs, and then I’ll return to the human scenario.
When I think about doing it with computer programs, two questions arise.
First question: Would an AI that was capable of understanding that it was in a Newcomb situation also be capable of figuring out the right thing to do?
In other words, do we need to include a “TDT special sauce” from the beginning, in the makeup of such a program, in order for it to discover the merits of one-boxing; or is a capacity for ordinary causal reasoning, coupled with the capacity to represent the defining elements of Newcomb’s problem, enough for an independent discovery of these ideas?
Second question: How does Omega get its knowledge of the player’s dispositions, and does this make any difference to the situation? (And we can also ask how the player knows that Omega has the power of prediction!)
If omega() and player() are two agents running in the same computer, the easiest way for omega() to predict player()’s behavior is just to simulate player(). omega() would then enact the game twice. First, it would start a copy of player() running, telling it (falsely) that it had predicted its choice, and then it would see the choice it made under such conditions. Then, omega() would play the game for real with the original(?) player(), now telling it (truthfully) that it has a prediction for its choice (due to the simulation of the game situation that had just been performed).
For certain types of player(), explicit simulation should not be necessary. If player() always does the same thing, completely unaffected by initial conditions and without any cognitive process, omega() can just inspect the source code. If player() has a simple decision procedure, something less than full simulation may also be sufficient. But full simulation of the game, including simulation of the beginning, where player() is introduced to the situation, should surely be sufficient, and for some cases (some complex agents) it will be necessary.
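A minimal sketch of the simulate-then-play scheme described above; the function names, the no-argument player interface, and the payoffs are all illustrative assumptions:

```python
def omega(player):
    # First run: simulate the whole game. player() can't distinguish this
    # run from the real one, so its choice here serves as Omega's prediction.
    predicted = player()
    big_box = 1_000_000 if predicted == "one-box" else 0
    # Second run: the real game, with the boxes filled per the prediction.
    choice = player()
    return big_box if choice == "one-box" else big_box + 1_000

# Two trivial players whose behavior is unaffected by initial conditions,
# so simulation predicts them exactly:
def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

print(omega(one_boxer))  # 1000000
print(omega(two_boxer))  # 1000
```

For these trivial players, source inspection would of course suffice, as noted above; the full-simulation route only becomes necessary for agents whose choice depends on their reasoning about the situation they are told they are in.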
cousin_it’s scenario is a step down this path—world() corresponds to omega(), agent() to player(). But its agents, world() at least, lack the cognitive structure of real decision-makers. world() and agent() are functions whose values mimic the mutual dependency of Newcomb’s Omega and a TDT agent, and agent() has a decision procedure, though it’s just a brute-force search (and it requires access to world()’s source, which is unusual). But to really have confidence that TDT was the right approach in this situation, and that its apparent success was not just an artefact arising (e.g.) from more superficial features of the scenario, I need both omega() and player() to explicitly be agents that reason on the basis of evidence.
If we return now to the scenario of human beings playing this game with each other, with one human player being a “fallible Omega”… we do at least know that humans are agents that reason on the basis of evidence. But here, what we’d want to show is that any success of TDT among human beings actually resulted because of evidence-based cognition, rather than from (e.g.) “coordination due to common concepts”, as I suggested in the first paragraph.
In other words, do we need to include a “TDT special sauce” from the beginning, in the makeup of such a program, in order for it to discover the merits of one-boxing; or is a capacity for ordinary causal reasoning, coupled with the capacity to represent the defining elements of Newcomb’s problem, enough for an independent discovery of these ideas?

This is basically what EY discusses in pp. ~27-37 of the thesis he posted, where he poses it as the difference between optimality on action-determined problems (in which ordinary causal reasoning suffices to win) and optimality on decision-determined problems (on which ordinary causal reasoning loses, and you have to incorporate knowledge of “what kind of being makes this decision”).
Of course, if player() is sentient, doing so would require omega() to create and destroy a sentient being in order to model player().
I don’t think there’s anything especially interesting about that point, it’s just the point where the calculated expected utilities of one-boxing and two-boxing become equal.
Really? People never decide how to treat you based on estimations of your decision theory (aka your “character”)?
They don’t make those decisions with “paranormally assured 100% knowledge” of my decision theory. That’s the “extreme that doesn’t actually happen”. And this is why I won’t be adopting any new paradigm of decision theory unless I can start in the middle, with situations that do happen, and move gradually towards the extremes, and see the desirability or necessity of the new paradigm that way.
As has been said many times (at least by me, definitely by many others), you don’t need 100% accuracy for the argument to hold. If Parfit’s mindreader is only 75% accurate, that still justifies choosing the pay / cooperate / one-box option. One-boxing on Newcomblike problems is simply what you get when you have a decision theory that wins in these reasonable cases and is continuous—and then take the limit as all the parameters go to what they need to be to make it Newcomb’s problem (such as making the predictor 100% accurate).
If it helps, think of the belief in one-boxing as belief in the implied optimal.
It doesn’t matter that you’ll never be in Newcomb’s problem. It doesn’t matter that you’ll never be in an epistemic state where you can justifiably believe that you are. It’s just an implication of having a good decision theory.
Part of my concern is that I’ll end up wasting time, chasing my tail in an attempt to deal with fictitious problems, when I could be working on real problems. I’m still undecided about the merits of acausal decision theories, as a way of dealing with the thought experiments, but I am really skeptical that they are relevant to anything practical, like coordination problems.