They would study artificial intelligence to learn the algorithms, the math, the laws of how an ideal agent would acquire true beliefs.
Really? The others make sense, but it’s not clear this will be useful to a human trying to learn things themselves. If I want to notice patterns, “plug all of your information into a matrix and perform eigenvector decompositions” is probably not going to get me very far.
The mathematical techniques like eigenstuff and particle methods and so on can’t be directly applied by humans, but the field is still useful.
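To make the “eigenstuff” concrete, here is a minimal sketch of what “plug all of your information into a matrix and perform eigenvector decompositions” looks like in practice: principal component analysis recovering the dominant pattern in noisy 2-D data. The NumPy code and the toy dataset are purely illustrative assumptions, not anything from this thread.

```python
import numpy as np

# Toy data: 200 points that mostly vary along one hidden direction (illustrative).
rng = np.random.default_rng(0)
hidden_direction = np.array([3.0, 1.0])
data = rng.normal(size=(200, 1)) * hidden_direction + rng.normal(scale=0.3, size=(200, 2))

# "Plug everything into a matrix and take an eigendecomposition":
# the top eigenvector of the covariance matrix is the dominant pattern (PCA).
cov = np.cov(data, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
top_component = eigenvectors[:, np.argmax(eigenvalues)]

print("dominant direction found by PCA (up to sign):", top_component)
print("true hidden direction (normalized):", hidden_direction / np.linalg.norm(hidden_direction))
```

No human notices patterns this way, which is the point of the comment above; but seeing the machine version spelled out does make it clearer what “finding a pattern” even means.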
I think the big gain from AI is that you get practice in understanding and debugging mental processes, which can be applied to your own reasoning. AI theory is philosophy that’s at least true if not optimally relevant.
At least for me, studying some machine learning has broadened my perspective on rationality in general. Even if we humans don’t apply the algorithms found in machine learning textbooks ourselves, I still find it illuminating to study how we try to make machines perform rational inference. The field also concerns itself with more general, if you will philosophical, questions: e.g. how to properly evaluate the performance of predictive agents, the trade-off between model complexity and generality, and the issue of overfitting. These kinds of questions are very general in nature and should be of some interest to students of any kind of learning agent, be they human or machine.
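As one concrete illustration of the complexity/generality trade-off and overfitting mentioned above, here is a small sketch (Python/NumPy with a made-up toy dataset, so treat the details as assumptions rather than anything canonical): polynomials of increasing degree are fit to noisy samples of a sine curve, and training error is compared with error on held-out points.

```python
import numpy as np

rng = np.random.default_rng(1)
true_fn = np.sin

# Small noisy training set and a larger held-out test set from the same source.
x_train = rng.uniform(0, 3, size=15)
y_train = true_fn(x_train) + rng.normal(scale=0.2, size=15)
x_test = rng.uniform(0, 3, size=200)
y_test = true_fn(x_test) + rng.normal(scale=0.2, size=200)

for degree in (1, 3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# Typically the degree-12 fit has the lowest training error but the worst
# test error: it has memorized the noise rather than the pattern (overfitting).
```

The lesson transfers directly to human learners: a story that explains every detail of your past observations is not automatically the one that will predict new ones.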
True in a way: for example, emulating a planning algorithm in your mind is a terribly inefficient way of making decisions. However, in order to understand the concept of “how an algorithm feels from inside”, you need to think of yourself as an algorithm too, which is (I guess) very hard if you have no idea how agents like you might work at all.
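For instance, here is roughly what “a planning algorithm” means here, as a minimal sketch (the grid world and the Python code are my illustrative assumptions): breadth-first search over states. Even on a tiny 10x10 grid it ends up examining essentially every cell before returning the shortest path, which is exactly why emulating it step by step in your head is hopeless, while understanding it as an algorithm is easy.

```python
from collections import deque

def plan(start, goal, neighbors):
    """Return (shortest path from start to goal, number of states examined)."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            # Reconstruct the path by walking the parent links backwards.
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return list(reversed(path)), len(came_from)
        for nxt in neighbors(state):
            if nxt not in came_from:
                came_from[nxt] = state
                frontier.append(nxt)
    return None, len(came_from)

# Toy example: navigate a 10x10 grid from one corner to the other.
def grid_neighbors(pos, size=10):
    x, y = pos
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a < size and 0 <= b < size]

path, examined = plan((0, 0), (9, 9), grid_neighbors)
print(f"path of {len(path)} states found after examining {examined} states")
```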
So, as I see it, AI gives you a better grasp of “map vs. territory”. Compared to “the map is the equations, the territory is what I see” you get “my mind is also a map, so where I see a pattern, maybe there is none”. (See confirmation bias.)
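To tie that back to something checkable, here is a tiny sketch (again just illustrative Python with simulated Gaussian noise, an assumption of mine rather than anything from the comment) of how easily a “pattern” shows up in data that contains none, if you look at enough candidate variables.

```python
import numpy as np

# 1000 variables of pure noise, each checked against an unrelated target.
rng = np.random.default_rng(2)
target = rng.normal(size=50)
noise_vars = rng.normal(size=(1000, 50))

correlations = [abs(np.corrcoef(target, v)[0, 1]) for v in noise_vars]
print(f"strongest 'pattern' found in pure noise: r = {max(correlations):.2f}")
# With 1000 tries the best correlation is typically around 0.4-0.5,
# even though every variable is independent of the target.
```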