Meta musing:
It looks like the optimal allocation is borderline fraudulent. When I think of in-universe reasons for the TAE to set up Cockatrice Eye rebates the way they did, my best guess is “there’s a bounty on these monsters in particular, and the taxmen figure someone showing up with n Cockatrice Eyes will have killed ceil(n/2) of them”. This makes splitting our four eyes (presumably collected from two monsters) four ways deceptive; my only consolation is that the apparently-standard divide-the-loot-as-evenly-as-possible thing most other adventuring teams seem to be doing also frequently ends up taking advantage of this incentive structure.
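To spell out the conjectured audit arithmetic (this is a hypothetical reconstruction of my guess above; the scenario never states the rebate rule explicitly):

```python
# Conjectured in-universe heuristic (an assumption, not a stated rule):
# someone presenting n Cockatrice Eyes is assumed to have killed
# ceil(n/2) cockatrices, since each monster carries two eyes.
import math

def inferred_kills(n_eyes: int) -> int:
    return math.ceil(n_eyes / 2)

pooled = inferred_kills(4)                        # one carrier: 2 inferred kills (honest)
split = sum(inferred_kills(1) for _ in range(4))  # four carriers: 4 inferred kills (fraudulent-ish)
```

Under that heuristic, splitting four eyes four ways doubles the inferred kill count, which is exactly what makes the optimal allocation feel borderline fraudulent.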
Reflections on my performance:
There’s an interesting sense in which we all failed this one. Most other players used AI to help them accomplish tasks they’d personally picked out; I eschewed AI altogether and constructed my model with brute force and elbow grease; after reaching a perfect solution, I finally went back and used AI correctly, by describing the problem at a high level (manually/meatbrainedly distilled from my initial observations) and asking the machine demiurge what approach would make most sense[1]. From this I learned about the fascinating concept of Symbolic Regression and some associated python libraries, which I eagerly anticipate using to (attempt to) steamroll similarly-shaped problems.
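To gesture at what Symbolic Regression actually does, here's a purely-illustrative toy: brute-force a tiny grammar of candidate expressions plus small integer constants, and keep whichever best fits the data. (Real libraries like PySR search far larger expression spaces with genetic programming; this just shows the shape of the idea, and all data here is made up.)

```python
# Toy "symbolic regression": exhaustively search a small space of
# symbolic forms and constants for the best fit to sample data.
import itertools

xs = list(range(10))
ys = [3 * x + 2 for x in xs]  # hidden rule the search should recover

# Candidate expression templates: (symbolic form, function of x and constants a, b).
candidates = [
    ("a*x + b",   lambda x, a, b: a * x + b),
    ("a*x*x + b", lambda x, a, b: a * x * x + b),
    ("a + b",     lambda x, a, b: a + b),
]

best = None
for (name, f), a, b in itertools.product(candidates, range(-5, 6), range(-5, 6)):
    err = sum((f(x, a, b) - y) ** 2 for x, y in zip(xs, ys))
    if best is None or err < best[0]:
        best = (err, name, a, b)

err, name, a, b = best  # (0, "a*x + b", 3, 2): the hidden rule, recovered
```

The appeal for problems like this one is that the output is a legible formula rather than an opaque fit, which is exactly what you want when reverse-engineering a deterministic in-game rule.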
(There’s a more mundane sense in which I specifically failed this one, since even after building a perfect input-output relation and recognizing the two best archetypes as rebatemaxxing and corpsemaxxing, I still somehow fell at the last hurdle and failed to get a (locally-)optimal corpsemaxxing solution; if the system had followed the original plan, I’d be down a silver coin and up a silver medal. Fortunately for my character’s fortunes and fortune, Fortune chose to smile.)
Reflections on the challenge:
A straightforward scenario, but timed and executed flawlessly. In particular, I found the figuring-things-out gradient (admittedly decoupled from the actually-getting-a-good-answer gradient) blessedly smooth, starting with picking up on the zero-randomness premise[2] and ending with the fun twist that the optimal solution doesn’t involve anything being taxed at the lowest rate[3].
I personally got a lot out of this one: for an evening’s exacting but enjoyable efforts, I learned about an entire new form of model-building, about the utility and limits of modern AI, and about Banker’s Rounding. I vote four-out-of-five for both Quality and Complexity . . . though I recognize that such puzzle-y low-variance games are liable to have higher variance in how they’re received, and I might be towards the upper end of a bell curve here.
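For anyone who hasn't run into Banker's Rounding before, a quick sketch of what it does (Python's built-in round() implements it):

```python
# Banker's Rounding (round-half-to-even): exact halves round to the
# nearest even neighbour rather than always away from zero, which
# keeps the rounding bias near zero over many operations.
halves = [round(0.5), round(1.5), round(2.5), round(3.5)]  # [0, 2, 2, 4]

# The decimal module lets you name the rounding mode explicitly:
from decimal import Decimal, ROUND_HALF_EVEN
rounded = Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)  # Decimal('2')
```

This is the sort of detail that quietly matters when a scenario's payouts are deterministic down to the last copper piece.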
For a lark, I also tried turning on all ChatGPT’s free capabilities and telling it to solve the problem from scratch. It thought for ~30 seconds and then spat out a perfect solution; I spent ~30 further seconds with paperclips dancing before my eyes; I then discovered it hadn’t even managed to download the dataset, and was instead applying the not-unreasonable heuristic “if abstractapplic and simon agree on an answer it’s probably true”.
There’s something fun about how “magic”, “games”, “bureaucracy”, and “magical game bureaucracy” are equally good justifications for a “wait, what paradigm am I even in here?” layer of difficulty.
I know that part wasn’t intentional, but I think rebatemaxxing > corpsemaxxing is nontrivially more compelling than the other way round.