Inspired by abstractapplic’s machine learning and wanting to get some experience in Julia, I got Claude (3.5 Sonnet) to write me an XGBoost implementation in Julia. It took a long time, especially the bugfixing (the worst was tracking down a feature matrix that was the wrong shape, which I think came from insufficient type explicitness). Still way, way faster than doing it myself! I’m not sure I’m learning all that much Julia, but I am, I hope, learning how to get Claude to write it for me.
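For what it’s worth, that shape bug is the kind of thing an explicit container type plus an assertion would have caught early. A minimal sketch of what I mean, assuming XGBoost.jl’s current interface (variable names and hyperparameters here are illustrative, not Claude’s actual code):

```julia
using XGBoost

# Sketch only; assumes a 0/1 outcome vector `y` and per-fight stat vectors exist.
X = Matrix{Float64}(hcat(power_red .- power_black,          # power difference
                         sign.(speed_red .- speed_black)))  # sign(speed difference)
@assert size(X) == (length(y), 2)   # rows = observations, as XGBoost.jl expects
booster = xgboost((X, y); num_round = 100, eta = 0.1, objective = "binary:logistic")
probs = predict(booster, X)         # predicted win probabilities
```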
Anyway, I used a simple model that
only takes into account 8 * sign(speed difference) + power difference, as in the comment this is a reply to
and a full model that
takes into account all the available features including the base data, the number the simple model uses, and intermediate steps in the calculation of that number (that would be, iirc: power (for each), speed (for each), speed difference, power difference, sign(speed difference))
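Roughly, the two feature sets as I understand them (a sketch under my assumptions about the data layout; the column order and names are mine, not the generated code’s):

```julia
# Assumed: per-fight vectors power_red, power_black, speed_red, speed_black exist.
power_diff = power_red .- power_black
speed_diff = speed_red .- speed_black
effective_power_diff = power_diff .+ 8 .* sign.(speed_diff)

# Simple model: the single engineered feature.
X_simple = reshape(effective_power_diff, :, 1)

# Full model: base stats plus every intermediate step of that calculation.
X_full = hcat(power_red, power_black, speed_red, speed_black,
              speed_diff, power_diff, sign.(speed_diff), effective_power_diff)
```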
Results:
Rank 1
Full model scores: Red: 94.0%, Black: 94.9%
Combined full model score: 94.4%
Simple model scores: Red: 94.3%, Black: 94.6%
Combined simple model score: 94.5%
Matchups:
Varina Dourstone (+0 boots, +3 gauntlets) vs House Cadagal Champion
Willow Brown (+3 boots, +0 gauntlets) vs House Adelon Champion
Xerxes III of Calantha (+2 boots, +2 gauntlets) vs House Deepwrack Champion
Zelaya Sunwalker (+1 boots, +1 gauntlets) vs House Bauchard Champion
This is the top-scoring result with either the simplified model or the full model. It was found by a full search of every valid item and hero combination available against the house champions.
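The search itself is nothing clever; schematically it is just the following (the helper names are hypothetical stand-ins for however the candidate lineups get enumerated and featurized, and `booster` is a model trained as sketched above):

```julia
using Statistics: mean

best_score, best_lineup = -Inf, nothing
for lineup in valid_lineups()        # every legal hero + boots/gauntlets assignment
    Xc = build_features(lineup)      # one row per matchup against the four champions
    p  = predict(booster, Xc)        # predicted win probability for each matchup
    s  = mean(p)                     # combined score, as reported above
    if s > best_score
        best_score, best_lineup = s, lineup
    end
end
```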
This matchup set is also the proposal for the solution I previously posted, found without machine learning, which is reassuring. (Though I suppose there is some chance that feeding the models this predictor, if it’s good enough, makes them glom onto it while missing some hard-to-learn additional pattern.)
My theory, though, is that giving the models the useful metric mostly just helps them: they don’t need to learn the metric from the data. And I mostly think that if there were a significant additional pattern, the full model would do better.
(for Cadagal, I haven’t changed the champion’s boots to +4, though I don’t expect that to make a significant difference)
As far as I can tell the full model doesn’t do significantly better, and it does worse in some ways (though I don’t know much about how to evaluate this, and Claude’s metrics, including a test-set log loss of 0.2527 for the full model and 0.2511 for the simple model, are for separately generated versions which I am not all that confident are actually the same models, though they “should be”, up to the restricted training set, if Claude was doing it right). *See edit below.
But the red/black variations seen below for the full model seem likely to me (given my prior that red and black should be symmetrical) to be an indication that whatever the full model is finding beyond the simple model is at least partially overfitting. Though actually, if it’s overfitting a lot, maybe it’s surprising that the test-set log loss isn’t a lot worse than found (though it is at least worse than the simple model’s)? Hmm, what if there are actual red/black differences? (Something to look into perhaps, as well as trying to duplicate abstractapplic’s report that sign(speed difference) doesn’t exhaust the benefits of the speed info… but for now I’m more likely to leave the machine learning aside and switch to looking at distributions of gladiator characteristics, I think.)
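For reference, the log loss Claude reports is, as far as I can tell, just the standard binary log loss, i.e. something like:

```julia
using Statistics: mean

# y: 0/1 test-set outcomes, p: predicted win probabilities.
logloss(y, p) = -mean(@. y * log(p) + (1 - y) * log(1 - p))
```

On that metric the gap (0.2511 vs 0.2527) is small, but it does favour the simple model.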
Predictions for individual matchups for my and abstractapplic’s solutions:
My matchups:
Varina Dourstone (+0 boots, +3 gauntlets) vs House Cadagal Champion (+2 boots, +3 gauntlets)
Full Model: Red: 91.1%, Black: 96.7%
Simple Model: Red: 94.3%, Black: 94.6%
Willow Brown (+3 boots, +0 gauntlets) vs House Adelon Champion (+3 boots, +1 gauntlets)
Full Model: Red: 94.3%, Black: 95.1%
Simple Model: Red: 94.3%, Black: 94.6%
Xerxes III of Calantha (+2 boots, +2 gauntlets) vs House Deepwrack Champion (+3 boots, +2 gauntlets)
Full Model: Red: 95.2%, Black: 93.7%
Simple Model: Red: 94.3%, Black: 94.6%
Zelaya Sunwalker (+1 boots, +1 gauntlets) vs House Bauchard Champion (+3 boots, +2 gauntlets)
Full Model: Red: 95.3%, Black: 93.9%
Simple Model: Red: 94.3%, Black: 94.6%
(All my matchups have an effective power difference of 4 in my favour, as noted in a comment above.)
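As a quick sanity check of that number, using the Varina vs Cadagal stats listed under Matchup 3 below:

```julia
# power diff 12, speed diff -18, so the effective power difference is:
(22 - 10) + 8 * sign(7 - 25)   # 12 + 8*(-1) = 4
```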
abstractapplic’s matchups:
Matchup 1:
Uzben Grimblade (+3 boots, +0 gauntlets) vs House Adelon Champion (+3 boots, +1 gauntlets)
Win Probabilities:
Full Model: Red: 72.1%, Black: 62.8%
Simple Model: Red: 65.4%, Black: 65.7%
Stats:
Speed: 18 vs 14 (diff: 4)
Power: 11 vs 18 (diff: −7)
Effective Power Difference: 1
--------------------------------------------------------------------------------
Matchup 2:
Xerxes III of Calantha (+2 boots, +1 gauntlets) vs House Bauchard Champion (+3 boots, +2 gauntlets)
Win Probabilities:
Full Model: Red: 46.6%, Black: 43.9%
Simple Model: Red: 49.4%, Black: 50.6%
Stats:
Speed: 16 vs 12 (diff: 4)
Power: 13 vs 21 (diff: −8)
Effective Power Difference: 0
--------------------------------------------------------------------------------
Matchup 3:
Varina Dourstone (+0 boots, +3 gauntlets) vs House Cadagal Champion (+2 boots, +3 gauntlets)
Win Probabilities:
Full Model: Red: 91.1%, Black: 96.7%
Simple Model: Red: 94.3%, Black: 94.6%
Stats:
Speed: 7 vs 25 (diff: −18)
Power: 22 vs 10 (diff: 12)
Effective Power Difference: 4
--------------------------------------------------------------------------------
Matchup 4:
Yalathinel Leafstrider (+1 boots, +2 gauntlets) vs House Deepwrack Champion (+3 boots, +2 gauntlets)
Win Probabilities:
Full Model: Red: 35.7%, Black: 39.4%
Simple Model: Red: 34.3%, Black: 34.6%
Stats:
Speed: 20 vs 15 (diff: 5)
Power: 9 vs 18 (diff: −9)
Effective Power Difference: −1
--------------------------------------------------------------------------------
Overall Statistics:
Full Model Average: Red: 61.4%, Black: 60.7%
Simple Model Average: Red: 60.9%, Black: 61.4%
Edit: so I checked the actual code to see if Claude was using the same hyperparameters for both, and wtf wtf wtf wtf. The code has 6 functions that all train models (my fault for at one point renaming a function after Claude gave me a new version that didn’t have all the previous functionality (it only trained the full model instead of both; this was during the great bughunt for the misshaped matrix, when a problem was suspected in the full model); Claude then, I guess, picked up on this and started renaming updated versions spontaneously, and I hadn’t cleaned up the code or asked Claude to do so). Each one has its own hardcoded hyperparameter set. Of these, there is exactly one pair of functions with matching hyperparameters; everything else has a unique set. Most of these functions weren’t being used anymore, of course, but the two that actually mattered (the one generating the models I used for my results and the one generating the models used for the comparison) weren’t the matching pair. There is also another function that returns a (hardcoded, also unique) updated parameter set, but it wasn’t actually used. Oh, and all this is not counting the hyperparameter tuning function that I assumed was generating a set of tuned hyperparameters to be used by the other functions, but in fact was just printing results for different tunings. I had been running it every time before training the models! Obviously I need to be more vigilant (or maybe asking Claude to check for this sort of thing might help?).
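What the code should have had (and what I’ll probably refactor it to) is a single shared hyperparameter set that every training function splats into its xgboost call, so the full and simple models can’t silently diverge. A rough sketch, with illustrative values rather than whatever Claude hardcoded:

```julia
using XGBoost

# One shared hyperparameter set for every training function (values illustrative).
const XGB_PARAMS = (num_round = 100, eta = 0.1, max_depth = 4,
                    subsample = 0.8, objective = "binary:logistic")

train_model(X, y) = xgboost((X, y); XGB_PARAMS...)

full_model   = train_model(X_full, y)    # feature matrices as in the sketch above
simple_model = train_model(X_simple, y)
```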