updated model for win chance:
I am currently modeling the win ratio as dependent on a single number, the effective power difference. The effective power difference is the power difference plus 8*sign(speed difference).
Power and speed are calculated as:
Power = level + gauntlet number + race power + class power
Speed = level + boots number + race speed + class speed
where race speed and power contributions are determined by each increment on the spectrum:
Dwarf—Human—Elf
increasing speed by 3 and lowering power by 3
and class speed and power contributions are determined by each increment on the spectrum:
Knight—Warrior—Ranger—Monk—Fencer—Ninja
increasing speed by 2 and lowering power by 2.
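To make the bookkeeping concrete, here’s a minimal sketch of these formulas in julia (the choice of Dwarf and Knight as the zero point is my own convention; since only differences between gladiators matter, the baseline doesn’t affect the model):

```julia
# Sketch of the stat model above. Only differences between gladiators matter,
# so taking Dwarf/Knight as the zero point is an arbitrary convention.
const RACES   = ["Dwarf", "Human", "Elf"]                                   # each step right: +3 speed, -3 power
const CLASSES = ["Knight", "Warrior", "Ranger", "Monk", "Fencer", "Ninja"]  # each step right: +2 speed, -2 power

race_step(race)   = findfirst(==(race), RACES) - 1
class_step(class) = findfirst(==(class), CLASSES) - 1

power(level, gauntlets, race, class) = level + gauntlets - 3 * race_step(race) - 2 * class_step(class)
speed(level, boots, race, class)     = level + boots     + 3 * race_step(race) + 2 * class_step(class)

# Effective power difference of gladiator A over gladiator B:
effective_power_diff(pow_a, spd_a, pow_b, spd_b) = (pow_a - pow_b) + 8 * sign(spd_a - spd_b)
```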
So, assuming this is correct, what function of the effective power difference determines the win rate? I don’t have a plausible exact formula yet, but:
If the effective power difference is 6 or greater, victory is guaranteed.
If the effective power difference is low, it seems a not-terrible fit that the odds of winning are roughly exponential in the effective power difference (each +1 of effective power just under doubles the odds of winning).
It looks like the trend is faster than exponential as the effective power difference increases: at an effective power difference of 4, the odds of the higher-effective-power character winning are around 17 to 1.
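For comparison, here is what a purely exponential odds model predicts (a rough sketch; the odds ratio r = 1.9 is just an assumed stand-in for “just under doubling”, not a fitted value):

```julia
# Purely exponential odds model: odds(d) = r^d, so p(d) = r^d / (1 + r^d).
win_prob(d; r = 1.9) = r^d / (1 + r^d)

win_prob(0)   # 0.5, as symmetry requires
win_prob(4)   # ≈ 0.93, i.e. about 13:1 odds; the observed ~17:1 at d = 4
              # is what suggests the curve bends upward faster than exponential
```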
edit: it looks like there is a level dependence when holding effective power difference constant at non-zero values (lower/higher level → winrate imbalance lower/higher than implied by the effective power difference). Since I don’t see this at 0 effective power difference, it is presumably not due to an error in the effective power calculation, but an interaction with the effective power difference in determining the final winrate. Our fights are likely “high level” for this purpose, implying better odds of winning than the 17 to 1 per fight mentioned above. Todo: find out more about this effect quantitatively.

edit2: whoops, that wasn’t a real effect, just me doing the wrong test to look for one.

Inspired by abstractapplic’s machine learning and wanting to get some experience in julia, I got Claude (3.5 sonnet) to write me an XGBoost implementation in julia. It took a long time, especially with some bugfixing (it took a while to find that a feature matrix was the wrong shape—a problem with insufficient type explicitness, I think). Still way, way faster than doing it myself! Not sure I’m learning all that much julia, but I am learning how to get Claude to write it for me, I hope.
Anyway, I used a simple model that
only takes into account 8 * sign(speed difference) + power difference, as in the comment this is a reply to
and a full model that
takes into account all the available features, including the base data, the number the simple model uses, and the intermediate steps in calculating that number (that would be, iirc: power (for each side), speed (for each side), speed difference, power difference, sign(speed difference))
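Roughly, the feature sets and training call look like this (a sketch, not my actual code: the column names are mine, the hyperparameters are placeholders, and I’m assuming the XGBoost.jl 2.x interface, so exact keyword names may differ):

```julia
using XGBoost

# Per-fight feature vectors; `f` is assumed to already hold the derived stats
# computed with the power/speed formulas above (field names are made up).
simple_features(f) = [f.power_diff + 8 * sign(f.speed_diff)]

full_features(f) = [
    f.red_power, f.black_power, f.red_speed, f.black_speed,
    f.power_diff, f.speed_diff, sign(f.speed_diff),
    f.power_diff + 8 * sign(f.speed_diff),
    # ...plus the raw base columns (level, race, class, items for each side)
]

# Training, roughly: X is an n_fights × n_features matrix, y the 0/1 outcomes.
model = xgboost((X, y); num_round = 100, max_depth = 4, eta = 0.1,
                objective = "binary:logistic")
probs = predict(model, X_test)
```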
Results:
Rank 1
Full model scores: Red: 94.0%, Black: 94.9%
Combined full model score: 94.4%
Simple model scores: Red: 94.3%, Black: 94.6%
Combined simple model score: 94.5%
Matchups:
Varina Dourstone (+0 boots, +3 gauntlets) vs House Cadagal Champion
Willow Brown (+3 boots, +0 gauntlets) vs House Adelon Champion
Xerxes III of Calantha (+2 boots, +2 gauntlets) vs House Deepwrack Champion
Zelaya Sunwalker (+1 boots, +1 gauntlets) vs House Bauchard Champion
This is the top-scoring result with either the simplified model or the full model. It was found by a full search of every valid item and hero combination available against the house champions.
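The search itself is just brute force, something like the following (a sketch, not the actual code; `predict_win` and `valid_item_splits` are stand-ins for the real model call and for whatever item splits the scenario actually allows):

```julia
using Combinatorics: permutations

# Try every ordering of the four heroes against the four champions and every valid
# way of splitting the available items, scoring each assignment by its summed
# (equivalently, mean) predicted winrate.
function best_assignment(predict_win, heroes, champions, valid_item_splits)
    best, best_total = nothing, -Inf
    for hero_order in permutations(heroes), items in valid_item_splits
        total = sum(predict_win(h, b, g, c)
                    for ((h, (b, g)), c) in zip(zip(hero_order, items), champions))
        if total > best_total
            best, best_total = (hero_order, items), total
        end
    end
    return best, best_total / length(champions)   # mean predicted winrate of the best assignment
end
```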
It is also my previously posted proposal for the solution, found without machine learning. Which is reassuring. (Though, I suppose there is some chance that my feeding the models this predictor, if it’s good enough, might make them glom on to it while they fail to find some hard-to-learn additional pattern.)
My theory, though, is that giving the models the useful metric mostly just helps them—they don’t need to learn the metric from the data. And I mostly think that if there were a significant additional pattern, the full model would do better.
(for Cadagal, I haven’t changed the champion’s boots to +4, though I don’t expect that to make a significant difference)
As far as I can tell the full model doesn’t do significantly better, and it does worse in some ways (though I don’t know much about how to evaluate this, and Claude’s metrics, including a test set log loss of 0.2527 for the full model and 0.2511 for the simple model, are for a separately generated version which I am not all that confident is actually the same models, though it “should be”, up to the restricted training set, if Claude was doing it right). * see edit below

But the red/black variations seen below for the full model seem likely to me (given my prior that red and black are likely to be symmetrical) to be an indication that whatever the full model is finding that isn’t in the simple model is at least partially overfitting. Though actually, if it’s overfitting a lot, maybe it’s surprising that the test set log loss isn’t a lot worse than found (though it is at least worse than the simple model’s)? Hmm—what if there are actual red/black differences? (Something to look into perhaps, as well as trying to duplicate abstractapplic’s report that sign(speed difference) doesn’t exhaust the benefits of speed info… but for now I’m more likely to leave the machine learning aside and switch to looking at distributions of gladiator characteristics, I think.)

Predictions for individual matchups for my and abstractapplic’s solutions:
My matchups:
Varina Dourstone (+0 boots, +3 gauntlets) vs House Cadagal Champion (+2 boots, +3 gauntlets)
Full Model: Red: 91.1%, Black: 96.7%
Simple Model: Red: 94.3%, Black: 94.6%
Willow Brown (+3 boots, +0 gauntlets) vs House Adelon Champion (+3 boots, +1 gauntlets)
Full Model: Red: 94.3%, Black: 95.1%
Simple Model: Red: 94.3%, Black: 94.6%
Xerxes III of Calantha (+2 boots, +2 gauntlets) vs House Deepwrack Champion (+3 boots, +2 gauntlets)
Full Model: Red: 95.2%, Black: 93.7%
Simple Model: Red: 94.3%, Black: 94.6%
Zelaya Sunwalker (+1 boots, +1 gauntlets) vs House Bauchard Champion (+3 boots, +2 gauntlets)
Full Model: Red: 95.3%, Black: 93.9%
Simple Model: Red: 94.3%, Black: 94.6%
(all my matchups have an effective power difference of 4 in my favour, as noted in an above comment)
abstractapplic’s matchups:
Matchup 1:
Uzben Grimblade (+3 boots, +0 gauntlets) vs House Adelon Champion (+3 boots, +1 gauntlets)
Win Probabilities:
Full Model: Red: 72.1%, Black: 62.8%
Simple Model: Red: 65.4%, Black: 65.7%
Stats:
Speed: 18 vs 14 (diff: 4)
Power: 11 vs 18 (diff: −7)
Effective Power Difference: 1
--------------------------------------------------------------------------------
Matchup 2:
Xerxes III of Calantha (+2 boots, +1 gauntlets) vs House Bauchard Champion (+3 boots, +2 gauntlets)
Win Probabilities:
Full Model: Red: 46.6%, Black: 43.9%
Simple Model: Red: 49.4%, Black: 50.6%
Stats:
Speed: 16 vs 12 (diff: 4)
Power: 13 vs 21 (diff: −8)
Effective Power Difference: 0
--------------------------------------------------------------------------------
Matchup 3:
Varina Dourstone (+0 boots, +3 gauntlets) vs House Cadagal Champion (+2 boots, +3 gauntlets)
Win Probabilities:
Full Model: Red: 91.1%, Black: 96.7%
Simple Model: Red: 94.3%, Black: 94.6%
Stats:
Speed: 7 vs 25 (diff: −18)
Power: 22 vs 10 (diff: 12)
Effective Power Difference: 4
--------------------------------------------------------------------------------
Matchup 4:
Yalathinel Leafstrider (+1 boots, +2 gauntlets) vs House Deepwrack Champion (+3 boots, +2 gauntlets)
Win Probabilities:
Full Model: Red: 35.7%, Black: 39.4%
Simple Model: Red: 34.3%, Black: 34.6%
Stats:
Speed: 20 vs 15 (diff: 5)
Power: 9 vs 18 (diff: −9)
Effective Power Difference: −1
--------------------------------------------------------------------------------
Overall Statistics:
Full Model Average: Red: 61.4%, Black: 60.7%
Simple Model Average: Red: 60.9%, Black: 61.4%
Edit: so I checked the actual code to see if Claude was using the same hyperparameters for both, and wtf wtf wtf wtf. The code has 6 functions that all train models (my fault for at one point renaming a function, since Claude gave me a new version that didn’t have all the previous functionality (it only trained the full model instead of both—this was during the great bughunt for the misshaped matrix, when a problem was suspected in the full model); then Claude I guess picked up on this and started renaming updated versions spontaneously, and I was adding Claude’s new features in instead of replacing things and hadn’t cleaned up the code or asked Claude to do so). Each one has its own hardcoded hyperparameter set. Of these, there is one pair of functions with matching hyperparameters; everything else has a unique set. Of course, most of these weren’t being used anymore, but the functions for actually generating the models I used for my results, and the function for generating the models used for comparing results on a train/test split, weren’t among the matching pair. Plus there is another function that returns a (hardcoded, also unique) updated parameter set, but it wasn’t actually used.

Oh, and all this is not counting the hyperparameter tuning function that I assumed was generating a set of tuned hyperparameters to be used by other functions, but was in fact just printing results for different tunings. I had been running this every time before training models! Obviously I need to be more vigilant (or maybe asking Claude to be vigilant might help?).
edit:
Had Claude clean up the code and tune for more overfitting; I still didn’t see anything for the full model that doesn’t look like overfitting. I could still be missing something, but the subjective probability isn’t high enough to prioritize it currently, so I’ve now been looking at other aspects of the data.
further edit:
My (what I think is) highly overfitted version of the full model really likes Yonge’s proposed solution. In fact it predicts a winrate equal to that of the best possible configuration not using the +4 boots (I didn’t have Claude code the situation where +4 boots are a possibility). I still think that’s probably because they are picking up the same random fluctuations… but it will be amusing if Yonge’s “manual scan” solution turns out to be exactly right.

Now using julia with Claude to look at further aspects of the data, particularly in view of other commenters’ observations:
First, thanks to SarahSrinivasan for the key observation that the data is organized into tournaments and non-tournament encounters. The tournaments skew the overall data to higher winrate gladiators, so restricting to the first round is essential for debiasing this (todo: check what is up with non-tournament fights).
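In code, this debiasing is just a filter on the fight table; a trivial sketch with made-up column names:

```julia
using DataFrames

# Keep only tournament first-round fights; `tournament_id` and `round` are
# made-up names standing in for however the data is actually labeled.
first_rounds = filter(row -> !ismissing(row.tournament_id) && row.round == 1, fights)
```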
Also, thanks to abstractapplic and Lorxus for pointing out that there are some persistent high-level gladiators. It seems to me all the level 7 gladiators are persistent (up to the two item changes remarked on by abstractapplic and Lorxus). I’m assuming for now that level 6 and below likely aren’t persistent (other than within the same tournament).
(btw there are a couple of fights where the +4 gauntlets holder is on both sides. I’m assuming this is likely a bug in the dataset generation rather than an indication that there are two of them (e.g. the generator didn’t check that the two sides, drawn randomly from some pool, were not equal).)
For gladiators of levels 1 to 6, the boots and gauntlets in tournament first rounds seem to be independently and randomly assigned as follows (see the sketch below):
+1 and +2 gauntlets are equally likely, at 10/34 chance each;
+3 gauntlets have probability (4 + level)/34;
+0 (no) gauntlets have probability (10 - level)/34;
and the same, independently, for boots.
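A quick sanity check that these probabilities sum to 1 for any level (10 + 10 + (4 + level) + (10 - level) = 34):

```julia
# Item-bonus distribution implied above (keys are the item bonus).
item_probs(level) = Dict(0 => (10 - level)//34,
                         1 => 10//34,
                         2 => 10//34,
                         3 => (4 + level)//34)

sum(values(item_probs(3)))   # 1//1
```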
I didn’t notice obvious deviations for particular races and classes (only did a few checks).
I don’t have a simple formula for the level distribution yet. It clearly favours lower levels much more in tournament first rounds than in non-tournament fights, and level 1 gladiators don’t show up at all in non-tournament fights. Will edit to add more as I find more.
edit: the boots/gauntlets distribution seems to be about the same for each level in the non-tournament data as in the tournament first rounds. This suggests that the level distribution differences in non-tournament fights are not due to win/winrate selection (which the complete absence of level 1s outside of tournaments already suggested).
edit2: the race/class distribution for levels 1-6 seems equal in the first-round data (same probabilities of each, independent). Same in the non-tournament data. I haven’t checked particular levels within that range. edit3: there seem to be more level 1 fencers than other level 1 classes, by an amount that is technically statistically significant if Claude’s test is correct, though I assume it’s still probably random.
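(For reference, the kind of check I mean is a plain Pearson goodness-of-fit test against equal class frequencies; a sketch with made-up counts, not the actual numbers:)

```julia
using Distributions

# Pearson goodness-of-fit test for level-1 class counts against equal frequencies.
# These counts are made up for illustration only.
counts   = [40, 38, 41, 39, 55, 42]                     # Knight ... Ninja
expected = fill(sum(counts) / length(counts), length(counts))
stat     = sum((counts .- expected) .^ 2 ./ expected)
p_value  = ccdf(Chisq(length(counts) - 1), stat)        # small p => "technically significant"
```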