You shouldn’t take my claims on argument-from-authority alone, but it might help you have better priors about whether I’m right to know that I’ve published traditional-academic work in the specific field of matching theory.
With respect, I think that’s wrong.
If all parents agree that school A is better than B, but parent 1 cares much more about A>B than parent 2 does, then the sum-of-utilities is different (so, not “zero sum”) depending on whether [ 1→A; 2→B ] or [ 1→B; 2→A ]. Every change in outcomes leads to someone losing (compared to the counterfactual), but the payoffs aren’t zero-sum.
That example is kind of useless on its own, but if you have three parents and three schools (even if the parents agree on the order), and each parent cares about A>B and B>C in a different ratio, then you can use that fact to engineer a lottery where all three parents are better off than if you assigned them to schools uniformly at random. (Sketch of construction: start with equal probabilities, and let parents trade some percentage of “upgrade C to B” for some (possibly different) percentage of “upgrade B to A” with each other. If they have different ratios between their A>B and B>C preferences, positive-sum trades exist.)
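To make that concrete, here’s a minimal numerical check (utilities are invented for illustration, normalized so u(A)=1 and u(C)=0; rows are parents, columns are schools A/B/C):

```python
# A worked instance of the sketch above, with made-up utilities.
# The trades below aren't unique; they're one set that happens to
# leave every parent strictly better off than the uniform lottery.
import numpy as np

u = np.array([
    [1.0, 0.9, 0.0],  # parent 1 values "upgrade C to B" at 0.9
    [1.0, 0.5, 0.0],  # parent 2 values it at 0.5
    [1.0, 0.1, 0.0],  # parent 3 values it at only 0.1
])

uniform = np.full((3, 3), 1 / 3)  # assign schools uniformly at random

# Shift probability mass: parent 1 sells chances of "upgrade B to A"
# and buys "upgrade C to B"; parent 3 does the reverse; parent 2 buys
# a little "upgrade B to A" from parent 1.
trades = np.array([
    [-0.2,  0.3, -0.1],
    [ 0.1, -0.1,  0.0],
    [ 0.1, -0.2,  0.1],
])
P = uniform + trades

# Rows and columns still sum to 1, so (Birkhoff-von Neumann) P is
# implementable as a lottery over one-parent-per-school assignments.
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

print((u * uniform).sum(axis=1))  # [0.633, 0.5,  0.367]
print((u * P).sum(axis=1))        # [0.703, 0.55, 0.447]: all strictly up
```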
Then, in theory, a set of parents cooperating could implement this lottery on their own and agree to apply just to their lottery-assigned school, and if they don’t defect in the prisoners’ dilemma then they all benefit. Not zero-sum.
Of course, it can also be the case that parents value different schools by different amounts and a bad mechanism can lead to an inefficient allocation (where pairs would just be better off switching), and I could construct such an example if this margin weren’t too narrow to contain it.
It is separately the case that if the administrators have meta-preferences over what parents’ preferences get satisfied, then they can make a choice of mechanisms (“play the metagame”, as you put it) that give better / worse / differently-distributed results with respect to their meta-preferences.
While the zero-sum nature is unavoidable
I believe this is false as stated:
Given the mechanism you described, it is not possible to give every parent better outcomes with a change to their schools...
...but it might be the case that the parents whose outcomes improve gain more value than the parents whose outcomes worsen lose, so it’s not constant-sum.
While ‘zero-sum’ is correct in a loose colloquial sense that at least one person has to lose something for any group to improve, I think it’s actually important to realize that there are mechanisms that improve overall welfare—and so the system administrators should be trying to find them!
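To put made-up numbers on the two-parent example above:

```python
# Both parents prefer A to B, but parent 1 cares about the gap much
# more. Utilities are invented for illustration.
u = {1: {"A": 10, "B": 0},
     2: {"A": 1, "B": 0}}

for assignment in ({1: "A", 2: "B"}, {1: "B", 2: "A"}):
    total = sum(u[parent][school] for parent, school in assignment.items())
    print(assignment, "total welfare:", total)  # prints 10, then 1
```

Someone loses in any switch between the two assignments, but total welfare differs, which is all that “not constant-sum” means.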
You cite the Gale-Shapley papers, but are you aware that the school-choice mechanism you described is called the “Boston mechanism” in the field of mechanism design? Because, well, it was also the system in place at Boston Public Schools (until the early 2000s, when they changed to a Gale-Shapley algorithm).
Pathak and Sonmez (2008) is the usual citation on the topic, and they find (as you suggest) that the change makes the most “sophisticated” parent-players worse off, but the least-sophisticated better off.
Drug development costs can range over two orders of magnitude
Think of it as your own little lesson in irreversible consequences of appealing actions, maybe? Rather than a fully-realistic element.
Great if true!
As a Citizen, and without suggesting that you are, I would not endorse anyone else lying about similar topics, even for the good consequences you are trying to achieve.
My guess would be that a commitment to retaliation—including one that you don’t manage to announce to General Logoff before they log off—is positive, not negative, to one’s reputation around these here parts. Sophisticated decision theories have been popular for fifteen years, and “I retaliate to defection even when it’s costly to me and negative-net-welfare” reads to me as sophisticated, not shameworthy.
If a general of mine reads a blind commitment by General Logoff on the other side and does nuke back, I’ll think positively of them-and-all-my-generals. (Note: if they fire without seeing such a commitment, I’ll think negatively of them-and-all-my-generals, and update on whether I want them as collaborators in projects going forward.)
Grumble grumble unilateralists’ curse...
The charger was marked 150kWh, but my understanding is the best the Bolt can do, in ideal conditions with a battery below 50%, is 53kW. And the 23kWh I saw is about typical for a Bolt getting to 75%
I think that this paragraph should say “kW” instead of “kWh” both times? Either that, or I’ve misunderstood what you’re trying to communicate.
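(For anyone else who tripped over the units: kW measures power, a rate, while kWh measures energy, i.e., power integrated over time. A toy calculation, where the 53 kW figure is from the parent comment and the duration is made up:)

```python
# kW is power (a rate); kWh is energy (power x time).
power_kw = 53                   # peak DC charging rate of a Bolt, in kW
hours = 0.75                    # illustrative charging duration
energy_kwh = power_kw * hours   # energy delivered over that time
print(f"{power_kw} kW for {hours} h delivers {energy_kwh} kWh")
```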
I think all of Ben’s and my proposals have assumed (without saying explicitly) that you shuffle within each suit. If you do that, then I think your concerns all go away? Let me know if you don’t think so.
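For concreteness, here’s one way to implement that, assuming “shuffle within each suit” means permuting each suit’s cards among that suit’s positions (the card representation is mine, not anything from the proposals):

```python
import random

def shuffle_within_suits(deck):
    """deck: a list of (suit, rank) pairs. Returns a new deck in which
    the sequence of suits is unchanged but each suit's ranks are
    randomly permuted among that suit's positions."""
    ranks_by_suit = {}
    for suit, rank in deck:
        ranks_by_suit.setdefault(suit, []).append(rank)
    for ranks in ranks_by_suit.values():
        random.shuffle(ranks)
    iters = {suit: iter(ranks) for suit, ranks in ranks_by_suit.items()}
    return [(suit, next(iters[suit])) for suit, _ in deck]
```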
That makes sense; I am generally a big believer in the power of physical tokens in learning exercises. For example, I was pretty opposed to electronic transfers of the internal currency that Atlas Fellowship participants used to bet their beliefs (even though it was significantly more convenient than the physical stones that we also gave them to use).
I do think that the Figgie app has the advantage of taking care of the mechanics of figuring out who trades with who, or what the current markets are (which aren’t core to the parts of the game I find most broadly useful), so I’m still trying to figure out whether I think the game is better taught with the app or with cards.
Good to hear it!
One of the things I find most remarkable about Figgie (cf. poker) is just how educational it can be with only a minimal explanation of the rules; I’m generally pretty interested in the kinds of pedagogy that can scale because the material largely “teaches itself”.
Do you think it was educational even though you were making clearly bad decisions / not at “an acceptable standard” for the first dozen games?
In a slightly different vein, I think the D&D.Sci series is great at training analysis and inference (though I will admit I haven’t sat down to do one properly).
Depending on your exact goals, a simulated trading challenge might be better than that; I have even more thoughts about those (and hopefully, someday, plans for one).
In my personal canon of literature, they never made a movie.
I think I’ve seen it...once? And cached the thought that it wasn’t worth remembering or seeing again. When I wrote those paragraphs, I was thinking not at all about the portrayal in Hood’s film, just what’s in Card’s novels and written works.
But I imagine that the most interesting rationality lessons from poker come from studying other players and exploiting them, rather than memorizing and developing an intuition for the pure game theory of the game.
Strongly agree. I didn’t realize this when I wrote the original post, but I’m now convinced. It has been the most interesting / useful thing that I’ve learned in the working-out of Cunningham’s Law with respect to this post.
And so, there’s a reason that the curriculum for my and Max’s course shifts away from Nash equilibrium as the solution concept to optimizing winnings against an empirical (and non-Nash) field just as soon as we can manage it. For example, Practicum #3 (of 6) is “write a rock-paper-scissors bot that takes advantage of our not-exactly-random players as much as you can” without much further specification.
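For a sense of scale, a minimal submission might look something like the following (this is my illustration, not anything from the course materials):

```python
# A frequency-exploiting rock-paper-scissors bot: predict the
# opponent's next throw from their empirical history and counter it.
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class FrequencyBot:
    def __init__(self):
        self.seen = Counter()  # opponent's historical throws

    def throw(self):
        if not self.seen:
            return random.choice(list(BEATS))
        predicted = self.seen.most_common(1)[0][0]
        return BEATS[predicted]  # counter their most common throw

    def observe(self, opponent_throw):
        self.seen[opponent_throw] += 1
```

Against a uniform-random (Nash) opponent, a bot like this is exactly break-even in expectation, which is rather the point: the winnings come from modeling the empirical field, not from equilibrium play.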
it’s not too hard to come up with a protocol
For example: A moves the piles with B watching and C+D looking away, then C removes 1 / 3 / 3 / 5 cards from random piles and shuffles them together with D watching and A+B looking away.
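A toy simulation of that protocol, with the physical card-handling abstracted away, just to show the information structure:

```python
# Each audited pair sees only half of the information needed to know
# which suit ends up with 12 cards.
from collections import Counter
import random

SUITS = ["spades", "hearts", "diamonds", "clubs"]

# Step 1 (A acts, B audits; C and D look away): A secretly permutes
# which face-down 13-card pile holds which suit.
pile_to_suit = random.sample(SUITS, k=4)

# Step 2 (C acts, D audits; A and B look away): C removes 1/3/3/5
# cards from randomly chosen piles (leaving 12/10/10/8, a legal Figgie
# deck) and shuffles the removed cards together, unexamined.
removals = random.sample([1, 3, 3, 5], k=4)

deck = [suit for pile, suit in enumerate(pile_to_suit)
        for _ in range(13 - removals[pile])]
random.shuffle(deck)

print(Counter(deck))  # 12/10/10/8, but no one knows which is which
```

A and B know which pile is which suit but not the removal counts; C and D know the counts but not the suits; so nobody starts the game knowing which suit has 12 cards.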
Oh, I agree. Sort-of-relatedly, I asked a few poker pros at Manifest why we conventionally play 8-handed when we play socially, and my favorite answer was “because playing heads-up doesn’t give you enough time to relax and chat”. (My second-favorite, which is probably more explanatory, was “it’s more economical for in-person casinos, and everyone else apes that.”) And if you talk to home-game pros, they will absolutely have thoughts about how to win money on average while keeping their benefactors from knowing that they’re reliably losing. The format of the game we play is shaped by social-emotional-economic factors other than pedagogy, but which are real incentives all the same.
Is there some way of making games like Figgie also have some of these properties?
I mean, Figgie itself is not purely skill-testing; you can always blame the cards, or blame A for feeding B (losing themself, but also causing you to lose), or any number of other things.
If you wanted to make it fuzzier on purpose, I think you could do the thing that often gets proposed for dealing it at home, which is to deal 40 cards out of a 52-card deck and call the goal suit the opposite of the longest suit (which might not have 12 cards), with some way to break ties. I think it’s a worse pedagogical game for being less clear, not unrelated to the fact that it will make it harder to figure out why you’re winning or losing. And my guess is that the skill ceiling is higher, also not-unrelatedly.
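In case anyone wants to try it, here’s a sketch of that variant. I’m reading “the opposite of the longest suit” as its same-color partner (as in standard Figgie), and I’m assuming re-deal-on-ties, since the tie-break is left unspecified:

```python
from collections import Counter
import random

SUITS = ["spades", "hearts", "diamonds", "clubs"]
SAME_COLOR = {"spades": "clubs", "clubs": "spades",
              "hearts": "diamonds", "diamonds": "hearts"}

def deal_home_variant():
    """Deal 40 of 52 cards; the goal suit is the same-color partner of
    the (unique) longest suit. Re-deal if the longest suit is tied."""
    while True:
        deck = [suit for suit in SUITS for _ in range(13)]
        random.shuffle(deck)
        hand = deck[:40]
        (longest, n), (_, m) = Counter(hand).most_common(2)
        if n > m:  # unique longest suit; otherwise re-deal
            return hand, SAME_COLOR[longest]

cards, goal = deal_home_variant()
print(Counter(cards), "goal suit:", goal)
```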
It might!
In case it would also help to have two-to-three Harvard and/or MIT professors who work on exactly this topic to write supporting letters or talk with your school board, I’ll bet money at $1:$1 that I could arrange that. Or I’ll give emails and a warm intro for free.