I think this is because
LightGBM and its kin are tools for creating decision forests, not decision trees. If you use the default hyperparameters while creating a single-tree model, it will under-train, resulting in the “predict in a way that’s correlated with reality but ridiculously conservative in its deviations from the average” behavior you see here. Setting num_boost_round (or whatever parameter decides the number of trees) to 200 or so should go some way toward fixing that problem (while giving you the new problem of having produced an incomprehensible-to-humans black-box model which can only be evaluated by its output).
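A rough sketch of what I mean, using LightGBM's native API on synthetic data (the features, target, and numbers here are placeholders, not the challenge's actual dataset):

```python
import numpy as np
import lightgbm as lgb

# Synthetic regression data standing in for whatever the real task is.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.5, size=1000)

train_set = lgb.Dataset(X, label=y)
params = {"objective": "regression", "verbosity": -1}

# One boosting round = one tree: predictions hug the mean of the target.
single_tree = lgb.train(params, train_set, num_boost_round=1)

# ~200 boosting rounds: predictions actually spread out to track the target,
# at the cost of the model becoming a human-unreadable ensemble.
forest = lgb.train(params, train_set, num_boost_round=200)

print("target spread:          ", y.std())
print("single-tree pred spread:", single_tree.predict(X).std())
print("forest pred spread:     ", forest.predict(X).std())
```

The single-tree model's predictions will have a much smaller standard deviation than the target, which is exactly the over-conservative behavior described above.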
(I would have said this sooner, but helping a player while the challenge was still running seemed like a bad look.)