At least for me, studying some machine learning has broadened my perspective on rationality in general. Even if we humans don’t ourselves apply the algorithms found in machine learning textbooks, I still find it illuminating to study how we try to make machines perform rational inference. The field also concerns itself with more general, philosophical questions: how to properly evaluate the performance of predictive agents, the trade-off between model complexity and generality, and the problem of overfitting. These kinds of questions should be of interest to students of any kind of learning agent, be it human or machine.
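To make the complexity-generality trade-off concrete, here is a minimal sketch (not from the original post, just a standard textbook-style illustration) of overfitting: fitting polynomials of increasing degree to a handful of noisy samples. The flexible high-degree model achieves lower training error but generalizes worse to held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

# A few noisy samples from a simple underlying function
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=x_train.size)

# Held-out points from the true (noise-free) function
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9):
    # Fit a polynomial of the given degree to the training data
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

With ten training points, the degree-9 polynomial nearly interpolates them (training error close to zero) while its held-out error blows up: it has memorized the noise rather than learned the underlying pattern. That is essentially the same failure mode as a reasoner who builds an elaborate theory around every quirk of their limited evidence.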