Thanks aphyer. My analysis so far and proposed strategy:
After initial observations that e.g. higher numbers are correlated with winning, I switched to mainly focus on race and class, ignoring the numerical aspects.
I found major class-race interactions.
It seems that for matchups within the same class, Elves are great, tending to beat Dwarves consistently across all classes and to beat Humans even harder, while Humans beat Dwarves pretty hard too in same-class matchups.
Within same-race matchups there are also fairly consistent patterns: Fencers tend to beat Rangers, Monks and Warriors, Knights beat Ninjas, Monks beat Warriors, Rangers and Knights, Ninjas beat Monks, Fencers and Rangers, Rangers beat Knights and Warriors, and Warriors beat Knights.
If the race and class are both different though… things can be different. For example, a same-class Elf will tend to beat a same-class Dwarf. And a same-race Fencer will tend to beat a same-race Warrior. But if an Elf Fencer faces a Dwarf Warrior, the Dwarf Warrior will most likely win. Another example with Fencers and Warriors: same-class Elves tend to beat Humans—but not only will a Human Warrior tend to beat an Elf Fencer, but also a Human Fencer will tend to beat an Elf Warrior by a larger ratio than for a same-race Fencer/Warrior matchup???
If you look at similarities between different classes in terms of combo win rates, there seems to be a chain of similar classes:
Knight—Warrior—Ranger—Monk—Fencer—Ninja
(I expected a cycle underpinned by multiple parameters. But Ninja is not similar to Knight. This led me to consider that perhaps there is only a single underlying parameter, or a tradeoff between two (e.g. strength/agility … or … Speed and Power)).
And going back to the patterns seen before, this seems compatible with races also having speed/power tradeoffs:
Dwarf—Human—Elf
Where speed has a threshold effect but power is more gradual (so something with slightly higher speed beats something with slightly higher power, but something with much higher power beats something with much higher speed).
Putting the Class-race combos on the same spectrum based on similarity/trends in results, I get the following ordering:
Elf Ninja > Elf Fencer > Human Ninja > Elf Monk > Human Fencer > Dwarf Ninja >~ Elf Ranger > Human Monk > Elf Warrior > Dwarf Fencer > Human Ranger > Dwarf Monk >~ Elf Knight > Human Warrior > Dwarf Ranger > Human Knight > Dwarf Warrior > Dwarf Knight
So, it seems a step in the race sequence is about equal to 1.5 steps in the class sequence. On the basis of pretty much just that, I guessed that each race step trades 3 power for 3 speed, each class step trades 2 power for 2 speed, levels give 1 speed and 1 power each, and items give what they say on the label.
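As a quick sanity check on that 3-vs-2 guess, here is a small Julia sketch (my own reconstruction, not something from the scenario) that sorts every race/class combo by its implied speed contribution; it reproduces the ordering above up to how ties are broken.

```julia
# Sketch: rank race/class combos by the guessed speed contribution
# (3 per race step, 2 per class step). Assumes my reconstruction above is right.
races   = ["Dwarf", "Human", "Elf"]                                   # slowest to fastest
classes = ["Knight", "Warrior", "Ranger", "Monk", "Fencer", "Ninja"]  # slowest to fastest

combo_speed = [(3 * (ri - 1) + 2 * (ci - 1), "$r $c")
               for (ri, r) in enumerate(races), (ci, c) in enumerate(classes)]

# Print combos from fastest to slowest; equal numbers correspond to the ">~" ties above.
for (s, name) in sort(vec(combo_speed), rev = true)
    println(s, "  ", name)
end
```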
I have not verified this as much as I would like. (But on the surface it seems to work, e.g. the speed threshold seems to be there.) One thing that concerns me is that it seems that higher speed differences actually reduce success chances holding power differences constant (could be an artifact, e.g., of it not just depending on the differences between stat values; edit: see further edit below). But, for now, assuming that I have it correct, the speed/power of the house champions (with the lowest race and class in a stat assumed to have 0 in that stat):
House Adelon: Level 6 Human Warrior +3 Boots +1 Gauntlets − 14 speed 18 power
House Bauchard: Level 6 Human Knight +3 Boots +2 Gauntlets − 12 speed 21 power
House Cadagal: Level 7 Elf Ninja +2 Boots +3 Gauntlets − 25 speed 10 power
House Deepwrack: Level 6 Dwarf Monk +3 Boots +2 Gauntlets − 15 speed 18 power
Whereas the party’s champions, ignoring items, have:
Uzben Grimblade, a Level 5 Dwarf Ninja − 15 speed 11 power
Varina Dourstone, a Level 5 Dwarf Warrior − 7 speed 19 power
Willow Brown, a Level 5 Human Ranger − 12 speed 14 power
Xerxes III of Calantha, a Level 5 Human Monk − 14 speed 12 power
Yalathinel Leafstrider, a Level 5 Elf Fencer − 19 speed 7 power
Zelaya Sunwalker, a Level 6 Elf Knight − 12 speed 16 power
For my proposed strategy (subject to change as I find new info, or find my assumptions off, e.g. such that my attempts to just barely beat the opponents on speed are disastrously slightly wrong):
I will send Willow Brown, with +3 boots and ~~+1 gauntlets~~ no gauntlets, against House Adelon’s champion (1 speed advantage, ~~3~~ 4 power deficit)
I will send Zelaya Sunwalker, with +1 boots and ~~+2~~ +1 gauntlets, against House Bauchard’s champion (1 speed advantage, ~~3~~ 4 power deficit)
I will send Xerxes III of Calantha, with +2 boots and ~~+3~~ +2 gauntlets, against House Deepwrack’s champion (1 speed advantage, ~~3~~ 4 power deficit)
And I will send Varina Dourstone, with +3 gauntlets ~~no items~~, to overwhelm House Cadagal’s Elf Ninja with sheer power (18 speed deficit, ~~9~~ 12 power advantage).
And in fact, I will gift the +4 boots of speed to House Cadagal’s Elf Ninja in advance of the fight, making it a 20 speed deficit.
Why? Because I noticed that +4 boots of speed are very rare items that have only been worn by Elf Ninjas in the past. So maybe that’s what the bonus objective is talking about. Of course, another interpretation is that sending a character 2 levels lower without any items, and gifting a powerful item in advance, would be itself a grave insult. Someone please decipher the bonus objective to save me from this foolishness!
Edited to add: It occurs to me that I really have no reason to believe the power calculation is accurate, beyond that symmetry is nice. I’d better look into that.
further edit: it turns out that I was leaving out the class contribution to power when calculating the power difference used to determine the effects of power and speed. It looks like this was what caused higher speed differences to seem to reduce win rates. With this fixed the effects look much cleaner (e.g. there’s a hard threshold where, if you have a speed deficit, you must have at least a 3 power advantage to have any chance to win at all), increasing my confidence that treating the effects of power and speed as symmetric is actually correct. This does have the practical effect of making me adjust my item distribution: it looks like a 4 power deficit is still enough for a >90% win rate with a speed advantage, while getting similar win rates with a speed disadvantage would require more than just the 9 power advantage, so I shifted the items to boost Varina’s power advantage. Indeed, with the cleaner effects, it appears that I can reasonably model the effect of a speed advantage/disadvantage as equivalent to a power difference of 8, so with the item shift all characters will have an effective +4 power advantage taking this into account.
Noting that I read this (and that therefore you get partial credit for any solution I come up with from here on out): your model and the strategies it implies are both very interesting. I should be able to investigate them with ML alongside everything else, when/if I get around to doing that.
Regarding the Bonus Objective:
I can’t figure out whether offering to give that guy we unknowingly robbed his shoes back is the best or the worst diplomatic approach our character could take, but yeah, I’m pretty sure we both located the problem and roughly what it implies for the scenario.
On the bonus objective:
I didn’t realize that the level 7 Elf Ninjas were all one person, or that the +4 boots were always with a level 7 (as opposed to any level) Elf Ninja. It seems you are correct, as there are 311 cases, of which the first 299 all have the boots of speed 4 and gauntlets 3, with only the last 12 having boots 2 and gauntlets 3 (likely post-theft). It seems to me that they appear both as red and black, though.
>only the last 12 having boots 2 and gauntlets 3 (likely post-theft)
Didn’t notice that but it confirms my theory, nice.
>It seems to me that they appear both as red and black, though.
Ah, I see where the error in my code was that made me think otherwise. Strange coincidence: I thought “oh yeah a powerful wealthy elf ninja who pointedly wears black when assigned red clothes, what a neat but oddly specific 8-bit theater reference” and then it turned out to be a glitch.
updated model for win chance:
I am currently modeling the win ratio as dependent on a single number, the effective power difference. The effective power difference is the power difference plus 8*sign(speed difference).
Power and speed are calculated as:
Power = level + gauntlet number + race power + class power
Speed = level + boots number + race speed + class speed
where race speed and power contributions are determined by each increment on the spectrum:
Dwarf—Human—Elf
increasing speed by 3 and lowering power by 3
and class speed and power contributions are determined by each increment on the spectrum:
Knight—Warrior—Ranger—Monk—Fencer—Ninja
increasing speed by 2 and lowering power by 2.
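For concreteness, here is a minimal Julia sketch of those formulas (my reconstruction, using the convention above that the lowest race and class in a stat contribute 0 to it); the Varina-vs-Cadagal numbers in the comments match the hand calculations elsewhere in this thread.

```julia
# Sketch of the reconstructed stat model; the "lowest = 0" convention is assumed throughout.
const RACES   = ["Dwarf", "Human", "Elf"]                                   # slowest to fastest
const CLASSES = ["Knight", "Warrior", "Ranger", "Monk", "Fencer", "Ninja"]  # slowest to fastest

race_step(r)  = findfirst(==(r), RACES) - 1    # 0, 1, 2
class_step(c) = findfirst(==(c), CLASSES) - 1  # 0 .. 5

struct Gladiator
    level::Int
    boots::Int
    gauntlets::Int
    race::String
    class::String
end

speed(g::Gladiator) = g.level + g.boots     + 3 * race_step(g.race)       + 2 * class_step(g.class)
power(g::Gladiator) = g.level + g.gauntlets + 3 * (2 - race_step(g.race)) + 2 * (5 - class_step(g.class))

# Effective power difference of a vs b, as used in the win-chance model above.
effective_power_diff(a, b) = (power(a) - power(b)) + 8 * sign(speed(a) - speed(b))

varina  = Gladiator(5, 0, 3, "Dwarf", "Warrior")  # +0 boots, +3 gauntlets
cadagal = Gladiator(7, 2, 3, "Elf", "Ninja")      # House Cadagal's champion
effective_power_diff(varina, cadagal)             # 12 power advantage, 18 speed deficit → 4
```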
So, assuming this is correct, what function of the effective power difference determines the win rate? I don’t have a plausible exact formula yet, but:
If the effective power difference is 6 or greater, victory is guaranteed.
If the effective power difference is low, it seems a not-terrible fit that the odds of winning are about exponential in the effective power difference (each +1 of effective power just under doubles the odds of winning).
It looks like it is trending faster than exponential as the effective power difference increases. At an effective power difference of 4, the odds of the higher effective power character winning are around 17 to 1.
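To have something concrete to plug matchups into, here is a rough Julia sketch of that curve; the base r = 1.9 and the hard cap at ±6 are my guesses from the observations above, not fitted values.

```julia
# Crude win-chance curve implied by the observations above (assumed, not properly fitted):
# odds ≈ r^d with r just under 2 for small effective power differences d,
# and guaranteed victory/defeat once |d| reaches 6.
function rough_win_prob(d; r = 1.9)
    abs(d) >= 6 && return d > 0 ? 1.0 : 0.0
    odds = r^d
    return odds / (1 + odds)
end

rough_win_prob(0)  # 0.5 by symmetry
rough_win_prob(4)  # ≈ 0.93 under this fit; the data looks more like 17:1 ≈ 0.94
```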
edit: it looks like there is a level dependence when holding effective power difference constant at non-zero values (lower/higher level → winrate imbalance lower/higher than implied by effective power difference). Since I don’t see this at 0 effective power difference, it is presumably not due to an error in the effective power calculation, but an interaction with the effective power difference to determine the final winrate. Our fights are likely “high level” for this purpose, implying better odds of winning than the 17 to 1 in each fight mentioned above. Todo: find out more about this effect quantitatively.
edit2: whoops, that wasn’t a real effect, just me doing the wrong test to look for one.
Inspired by abstractapplic’s machine learning and wanting to get some experience in Julia, I got Claude (3.5 Sonnet) to write me an XGBoost implementation in Julia. It took a long time, especially with some bugfixing (it took a long time to find that a feature matrix was the wrong shape—a problem with insufficient type explicitness, I think). Still way, way faster than doing it myself! I’m not sure I’m learning all that much Julia, but I am learning how to get Claude to write it for me, I hope.
Anyway, I used a simple model that
only takes into account 8 * sign(speed difference) + power difference, as in the comment this is a reply to
and a full model that
takes into account all the available features including the base data, the number the simple model uses, and intermediate steps in the calculation of that number (that would be, iirc: power (for each), speed (for each), speed difference, power difference, sign(speed difference))
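For anyone curious what that looks like in Julia, here is a stripped-down sketch of the two-model setup (not the actual Claude-generated code): the random stand-in data, column counts, and hyperparameters are placeholders, and it assumes XGBoost.jl’s `xgboost((X, y); ...)` / `predict` interface.

```julia
using XGBoost, Random

# Stand-in data so the sketch runs on its own; the real features come from the fight dataset.
Random.seed!(1)
n = 1000
full_X   = randn(n, 12)                 # placeholder for base data + engineered columns
simple_X = reshape(full_X[:, 1], :, 1)  # placeholder for 8*sign(speed diff) + power diff
y        = Float64.(rand(n) .< 0.5)     # placeholder for the "red won" labels

# Same objective for both; only the feature set differs (hyperparameters here are arbitrary).
simple_bst = xgboost((simple_X, y); num_round = 100, max_depth = 3, eta = 0.1,
                     objective = "binary:logistic")
full_bst   = xgboost((full_X, y); num_round = 100, max_depth = 6, eta = 0.1,
                     objective = "binary:logistic")

p_red_full = predict(full_bst, full_X)  # predicted P(red wins) for each fight
```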
Results:
Rank 1
Full model scores: Red: 94.0%, Black: 94.9%
Combined full model score: 94.4%
Simple model scores: Red: 94.3%, Black: 94.6%
Combined simple model score: 94.5%
Matchups:
Varina Dourstone (+0 boots, +3 gauntlets) vs House Cadagal Champion
Willow Brown (+3 boots, +0 gauntlets) vs House Adelon Champion
Xerxes III of Calantha (+2 boots, +2 gauntlets) vs House Deepwrack Champion
Zelaya Sunwalker (+1 boots, +1 gauntlets) vs House Bauchard Champion
This is the top-scoring result with either the simplified model or the full model. It was found by a full search of every valid item and hero combination available against the house champions.
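A minimal sketch of what that search looks like (not the actual code): it reuses the base speed/power numbers listed earlier, assumes we have exactly one each of the +1/+2/+3 boots and gauntlets (with +0 meaning no item), and, to stay self-contained, scores each assignment by its worst-case effective power difference rather than by the trained models’ predicted win rates.

```julia
using Combinatorics  # for permutations

# Base (speed, power) pairs: our heroes without items, house champions with their items,
# taken from the lists in my earlier comment.
heroes = Dict("Uzben" => (15, 11), "Varina" => (7, 19), "Willow" => (12, 14),
              "Xerxes" => (14, 12), "Yalathinel" => (19, 7), "Zelaya" => (12, 16))
champions = Dict("Adelon" => (14, 18), "Bauchard" => (12, 21),
                 "Cadagal" => (25, 10), "Deepwrack" => (15, 18))

# Effective power difference from our gladiator's point of view.
epd(hspd, hpow, cspd, cpow) = (hpow - cpow) + 8 * sign(hspd - cspd)

function best_assignment(heroes, champions)
    houses = collect(keys(champions))
    best_score, best_plan = -Inf, nothing
    for squad in permutations(collect(keys(heroes)), 4)       # squad[i] fights houses[i]
        for boots in permutations(0:3), gaunts in permutations(0:3)
            score = minimum(epd(heroes[h][1] + boots[i], heroes[h][2] + gaunts[i],
                                champions[houses[i]]...) for (i, h) in enumerate(squad))
            if score > best_score
                best_score, best_plan = score, collect(zip(squad, houses, boots, gaunts))
            end
        end
    end
    return best_score, best_plan
end

best_assignment(heroes, champions)  # the real search used the XGBoost models as the scorer
```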
It is also my previously posted proposal for the solution, found without machine learning, which is reassuring. (Though, I suppose there is some chance that my feeding the models this predictor, if it’s good enough, might make them glom on to it while they don’t find some hard-to-learn additional pattern.)
My theory though is that giving the models the useful metric mostly just helps them—they don’t need to learn the metric from the data, and I mostly think that if there was a significant additional pattern the full model would do better.
(for Cadagal, I haven’t changed the champion’s boots to +4, though I don’t expect that to make a significant difference)
As far as I can tell the full model doesn’t do significantly better and does worse in some ways (though, I don’t know much about how to evaluate this, and Claude’s metrics, including a test set log loss of 0.2527 for the full model and 0.2511 for the simple model, are for a separately generated version which I am not all that confident are actually the same models, though they “should be” up to the restricted training set if Claude was doing it right). * see edit below
But the red/black variations seen below for the full model seem likely to me (given my prior that red and black are likely to be symmetrical) to be an indication that whatever the full model is finding that isn’t in the simple model is at least partially overfitting. Though actually, if it’s overfitting a lot, maybe it’s surprising that the test set log loss isn’t a lot worse than found (though it is at least worse than the simple model’s)? Hmm—what if there are actual red/black differences? (Something to look into perhaps, as well as trying to duplicate abstractapplic’s report regarding sign(speed difference) not exhausting the benefits of speed info … but for now I’m more likely to leave the machine learning aside and switch to looking at distributions of gladiator characteristics, I think.)
Predictions for individual matchups for my and abstractapplic’s solutions:
My matchups:
Varina Dourstone (+0 boots, +3 gauntlets) vs House Cadagal Champion (+2 boots, +3 gauntlets)
Full Model: Red: 91.1%, Black: 96.7%
Simple Model: Red: 94.3%, Black: 94.6%
Willow Brown (+3 boots, +0 gauntlets) vs House Adelon Champion (+3 boots, +1 gauntlets)
Full Model: Red: 94.3%, Black: 95.1%
Simple Model: Red: 94.3%, Black: 94.6%
Xerxes III of Calantha (+2 boots, +2 gauntlets) vs House Deepwrack Champion (+3 boots, +2 gauntlets)
Full Model: Red: 95.2%, Black: 93.7%
Simple Model: Red: 94.3%, Black: 94.6%
Zelaya Sunwalker (+1 boots, +1 gauntlets) vs House Bauchard Champion (+3 boots, +2 gauntlets)
Full Model: Red: 95.3%, Black: 93.9%
Simple Model: Red: 94.3%, Black: 94.6%
(all my matchups have 4 effective power difference in my favour as noted in an above comment)
abstractapplic’s matchups:
Matchup 1:
Uzben Grimblade (+3 boots, +0 gauntlets) vs House Adelon Champion (+3 boots, +1 gauntlets)
Win Probabilities:
Full Model: Red: 72.1%, Black: 62.8%
Simple Model: Red: 65.4%, Black: 65.7%
Stats:
Speed: 18 vs 14 (diff: 4)
Power: 11 vs 18 (diff: −7)
Effective Power Difference: 1
--------------------------------------------------------------------------------
Matchup 2:
Xerxes III of Calantha (+2 boots, +1 gauntlets) vs House Bauchard Champion (+3 boots, +2 gauntlets)
Win Probabilities:
Full Model: Red: 46.6%, Black: 43.9%
Simple Model: Red: 49.4%, Black: 50.6%
Stats:
Speed: 16 vs 12 (diff: 4)
Power: 13 vs 21 (diff: −8)
Effective Power Difference: 0
--------------------------------------------------------------------------------
Matchup 3:
Varina Dourstone (+0 boots, +3 gauntlets) vs House Cadagal Champion (+2 boots, +3 gauntlets)
Win Probabilities:
Full Model: Red: 91.1%, Black: 96.7%
Simple Model: Red: 94.3%, Black: 94.6%
Stats:
Speed: 7 vs 25 (diff: −18)
Power: 22 vs 10 (diff: 12)
Effective Power Difference: 4
--------------------------------------------------------------------------------
Matchup 4:
Yalathinel Leafstrider (+1 boots, +2 gauntlets) vs House Deepwrack Champion (+3 boots, +2 gauntlets)
Win Probabilities:
Full Model: Red: 35.7%, Black: 39.4%
Simple Model: Red: 34.3%, Black: 34.6%
Stats:
Speed: 20 vs 15 (diff: 5)
Power: 9 vs 18 (diff: −9)
Effective Power Difference: −1
--------------------------------------------------------------------------------
Overall Statistics:
Full Model Average: Red: 61.4%, Black: 60.7%
Simple Model Average: Red: 60.9%, Black: 61.4%
Edit: so I checked the actual code to see if Claude was using the same hyperparameters for both, and wtf wtf wtf wtf. The code has 6 functions that all train models (my fault for at one point renaming a function, since Claude gave me a new version that didn’t have all the previous functionality (it only trained the full model instead of both—this was when doing the great bughunt for the misshaped matrix and a problem was suspected in the full model); then Claude, I guess, picked up on this and started renaming updated versions spontaneously, and I was adding Claude’s new features in instead of replacing things and hadn’t cleaned up the code or asked Claude to do so). Each one has its own hardcoded hyperparameter set. Of these, there is one pair of functions that have matching hyperparameters. Everything else has a unique set. Of course, most of these weren’t being used anymore, but the functions for actually generating the models I used for my results, and the function for generating the models used for comparing results on a train/test split, weren’t among the matching pair. Plus there is another function that returns a (hardcoded, also unique) updated parameter set, but it wasn’t actually used. Oh, and all this is not counting the hyperparameter tuning function that I assumed was generating a set of tuned hyperparameters to be used by other functions, but in fact was just printing results for different tunings. I had been running this every time before training models! Obviously I need to be more vigilant (or maybe asking Claude to do so might help?).
edit:
Had Claude clean up the code and tune for more overfitting; still didn’t see anything not looking like overfitting for the full model. I could still be missing something, but not with high enough subjective probability to prioritize it currently, so I have now been looking at other aspects of the data.
further edit:
My (what I think is) highly overfitted version of my full model really likes Yonge’s proposed solution. In fact it predicts ~~a higher winrate than for~~ an equal winrate to the best possible configuration not using the +4 boots (I didn’t have Claude code the situation where +4 boots are a possibility). I still think that’s probably because they are picking up the same random fluctuations … but it will be amusing if Yonge’s “manual scan” solution turns out to be exactly right.
Now using Julia with Claude to look at further aspects of the data, particularly in view of other commenters’ observations:
First, thanks to SarahSrinivasan for the key observation that the data is organized into tournaments and non-tournament encounters. The tournaments skew the overall data to higher winrate gladiators, so restricting to the first round is essential for debiasing this (todo: check what is up with non-tournament fights).
Also, thanks to abstractapplic and Lorxus for pointing out that there are some persistent high-level gladiators. It seems to me all the level 7 gladiators are persistent (up to the two item changes remarked on by abstractapplic and Lorxus). I’m assuming for now that level 6 and below likely aren’t persistent (other than within the same tournament).
(btw there are a couple of fights where the +4 gauntlets holder is on both sides. I’m assuming this is likely a bug in the dataset generation rather than an indication that there are two of them (e.g. the generator didn’t check that both sides, drawn randomly from some pool, were not equal)).
For gladiators of levels 1 to 6, the boots and gauntlets in tournament first rounds seem to be independently and randomly assigned as follows:
+1 and +2 gauntlets are equally likely, at a 10/34 chance each;
+3 gauntlets have probability (4 + level)/34;
+0 (no) gauntlets have probability (10 - level)/34;
and the same, independently, for boots.
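In code form, the hypothesized first-round item distribution is just the following (a sketch; the /34 normalization is my empirical guess from the counts, not a confirmed rule):

```julia
# Hypothesized first-round item distribution for a level 1-6 gladiator:
# probabilities of +0, +1, +2, +3 boots (and, independently, gauntlets).
item_probs(level) = [10 - level, 10, 10, 4 + level] ./ 34

item_probs(5)       # [0.147, 0.294, 0.294, 0.265] (approximately)
sum(item_probs(5))  # 1.0 — the four numerators always total 34
```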
I didn’t notice obvious deviations for particular races and classes (only did a few checks).
I don’t have a simple formula for the level distribution yet. It clearly favours lower levels much more in tournament first rounds than in non-tournament fights, and level 1 gladiators don’t show up at all in non-tournament fights. Will edit to add more as I find more.
edit: the boots/gauntlets distribution seems to be about the same for each level in the non-tournament data as in the tournament first rounds. This suggests that the level distribution differences in non-tournament fights are not due to win/winrate selection (which the complete absence of level 1s outside of tournaments already suggested).
edit2: the race/class distribution for levels 1-6 seems equal in first round data (same probabilities of each, independent). Same in non-tournament data. I haven’t checked for particular levels within that range.
edit3: there seem to be more level 1 Fencers than other level 1 classes by an amount that is technically statistically significant if Claude’s test is correct, though I assume it is still probably random.
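For that edit3 check, here is a sketch of the kind of test involved (the counts below are made-up placeholders, not the real level 1 tallies, and it assumes HypothesisTests.jl’s ChisqTest):

```julia
using HypothesisTests

# Goodness-of-fit test: are level 1 gladiators spread evenly across the six classes?
# These counts are placeholders; substitute the real tallies from the first-round data.
level1_class_counts = [100, 103, 97, 101, 130, 96]  # Knight..Ninja order, Fencer inflated

test = ChisqTest(level1_class_counts)  # null hypothesis: all six classes equally likely
pvalue(test)                           # small p-value ⇒ "technically significant" deviation
```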