I took a fairly black-box approach to this problem. Basically, we want a function f(str, dex, con, int, wis, cha) that outputs a chance of success, and then we want to optimize our selection so that we have the highest chance. The optimization part is easy because the input space is discrete: once we have a function, we can simply evaluate it at all of the possible inputs and select the best one.
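Concretely, the search looks something like the sketch below. The constraints here are illustrative placeholders (each stat between 6 and 20, a fixed point budget of 72), not necessarily the exact allocation rules in play, and `f` is assumed to map an (n, 6) array of builds to n success probabilities.

```python
from itertools import product

import numpy as np

def best_build(f, low=6, high=20, budget=72):
    """Brute-force the discrete optimization: enumerate every legal
    (str, dex, con, int, wis, cha) allocation and return the one the
    model scores as most likely to succeed."""
    # Enumerate all 6-tuples in [low, high] and keep the ones that
    # spend exactly the point budget.
    candidates = np.array([c for c in product(range(low, high + 1), repeat=6)
                           if sum(c) == budget])
    probs = f(candidates)  # one predicted success probability per build
    return candidates[np.argmax(probs)]
```

With a scikit-learn model, `f` is just something like `lambda X: clf.predict_proba(X)[:, 1]`.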
I used a number of different ML models to estimate f, and I got pretty consistent Brier scores on held-out test data of ~0.2, which isn't great, but isn't awful. I used scikit-learn's MLPClassifier, LogisticRegression, GaussianNB, and RandomForestClassifier, each wrapped in CalibratedClassifierCV so that they produced calibrated probability scores. Most of them I left on their defaults, but I played around with the hidden layers in the MLPClassifier until it had a pretty good Brier score.
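The fitting loop was roughly the following sketch. Assume X is an (n, 6) array of the six ability scores and y is a 0/1 success indicator; the MLP hidden layer sizes shown are placeholders, not the exact configuration I settled on.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

models = {
    "Neural Net": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000),
    "Logistic Regression": LogisticRegression(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(),
}

for name, base in models.items():
    # Wrap each base model so predict_proba returns calibrated probabilities.
    clf = CalibratedClassifierCV(base, cv=5)
    clf.fit(X_train, y_train)
    probs = clf.predict_proba(X_test)[:, 1]
    print(f"{name}: Brier score = {brier_score_loss(y_test, probs):.3f}")
```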
Although these models all had similar Brier scores, they had surprisingly different recommendations. The Neural Net wanted to give small bumps to strength, wisdom, and charisma. Logistic Regression wanted to go all-in on wisdom and put any remaining points into charisma. Gaussian Naive Bayes wanted to put most of the points into charisma, but oddly, not all; it also wanted to sprinkle a few points into wisdom. The Random Forest Classifier wanted to bring strength and charisma up a little, but mostly sink points into wisdom, and occasionally scatter points into constitution or intelligence.
The top recommendation from each method, listed as (Str, Dex, Con, Int, Wis, Cha), is as follows:
Neural Net: 8, 14, 13, 13, 15, 9
Logistic Regression: 6, 14, 13, 13, 20, 6
Naive Bayes: 6, 14, 13, 13, 14, 12
Random Forest: 8, 14, 13, 13, 15, 9