Update:
I tried fitting my ML model without access to any speed variables other than sign(speed diff) and got slightly but non-negligibly worse metrics out-of-sample. This suggests that sign(speed diff) carries most of the information you need about speed, but if you rely on it alone you’re still missing useful, relevant information.
(. . . either that or my code has another error, I guess. Looking forward to finding out in seven days.)
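For concreteness, here’s a minimal sketch of the kind of ablation described above: fit the same XGBoost model once with all speed features and once with only sign(speed diff), then compare an out-of-sample metric. The file name, column names (strength_a, speed_a, a_won, etc.), and hyperparameters are all hypothetical placeholders, not the actual setup.

```python
# Minimal sketch of the ablation: full feature set vs. withholding raw speeds
# so the model only sees sign(speed diff). All names below are hypothetical.
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

df = pd.read_csv("matches.csv")  # hypothetical dataset
df["speed_diff_sign"] = np.sign(df["speed_a"] - df["speed_b"])

full_features = ["strength_a", "strength_b", "speed_a", "speed_b", "speed_diff_sign"]
ablated_features = ["strength_a", "strength_b", "speed_diff_sign"]  # raw speeds withheld

train, test = train_test_split(df, test_size=0.2, random_state=0)

for name, cols in [("full", full_features), ("sign-only", ablated_features)]:
    model = xgb.XGBClassifier(n_estimators=500, learning_rate=0.05, max_depth=4)
    model.fit(train[cols], train["a_won"])
    preds = model.predict_proba(test[cols])[:, 1]
    print(name, "out-of-sample log loss:", log_loss(test["a_won"], preds))
```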
Very interesting, this would certainly cast doubt on my simplified model. But so far I haven’t been noticing any effects not accounted for by it.
After reading your comments I’ve been getting Claude to write up an XGBoost implementation for me. I should have made this reply comment when I started, but I’ll post my results under my own comment chain.
I have not tried (but should try) to duplicate your findings, or fail to duplicate them; I haven’t been testing quite the same thing.
I tried fitting a model with only “Strength diff plus 8 times sign(speed diff)” as an explanatory variable and got (impressively, only moderately!) worse results. My best guess is that your model is underfitting, over-attaching to the (good!) approximation you fed it because it doesn’t have enough Total Learning to do anything better . . . in which case you might see different outcomes if you increased your number of trees and/or your learning rate.
Alternatively, I might just have screwed up my code somehow. Still . . . I’m sticking with my choices for now.
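A rough sketch of the single-composite-feature experiment and the underfitting check suggested above, reusing the hypothetical columns and train/test split from the earlier sketch; the specific hyperparameter pairs are illustrative guesses, not the actual settings either of us used.

```python
# Sketch: fit on only "strength diff + 8 * sign(speed diff)", then rerun with
# more trees / a larger learning rate to see whether extra capacity closes
# the gap. Assumes the train/test frames and columns from the sketch above.
import numpy as np
import xgboost as xgb
from sklearn.metrics import log_loss

def add_composite(frame):
    # Composite explanatory variable: strength diff plus 8 * sign(speed diff).
    frame = frame.copy()
    frame["composite"] = (frame["strength_a"] - frame["strength_b"]) \
        + 8 * np.sign(frame["speed_a"] - frame["speed_b"])
    return frame

train_c, test_c = add_composite(train), add_composite(test)

# If the model is underfitting the composite feature, giving it more total
# learning (more trees and/or a larger learning rate) should improve things.
for n_estimators, learning_rate in [(100, 0.1), (1000, 0.3)]:
    model = xgb.XGBClassifier(n_estimators=n_estimators,
                              learning_rate=learning_rate, max_depth=2)
    model.fit(train_c[["composite"]], train_c["a_won"])
    preds = model.predict_proba(test_c[["composite"]])[:, 1]
    print(f"trees={n_estimators} lr={learning_rate} "
          f"log loss={log_loss(test_c['a_won'], preds):.4f}")
```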
You may well be right; I’ll look into my hyperparameters. I looked at the code Claude had generated with my interference, and that greatly lowered my confidence in them, lol (see edit to this comment).
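One possible way to sanity-check generated hyperparameters is just to print what the model is actually configured with and cross-validate a couple of alternative learning rates; a small sketch, again assuming the train frame and feature list from the first sketch above:

```python
# Sanity-check sketch: inspect the configured hyperparameters, then compare a
# few learning rates by cross-validation. Reuses the hypothetical train frame
# and full_features list from the earlier sketch.
import xgboost as xgb
from sklearn.model_selection import cross_val_score

model = xgb.XGBClassifier(n_estimators=500, learning_rate=0.05, max_depth=4)
print(model.get_params())  # confirm what the generated code actually set vs. defaults

for lr in (0.01, 0.05, 0.3):
    candidate = xgb.XGBClassifier(n_estimators=500, learning_rate=lr, max_depth=4)
    scores = cross_val_score(candidate, train[full_features], train["a_won"],
                             scoring="neg_log_loss", cv=5)
    print(f"lr={lr}: mean CV log loss={-scores.mean():.4f}")
```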