True in a way: for example, emulating a planning algorithm in your mind is a terribly inefficient way of making decisions (the sketch below makes this concrete). However, to understand the concept of “how an algorithm feels from inside”, you need to think of yourself, too, as an algorithm, which is (I guess) very hard if you have no idea how agents like you might work at all.
So, as I see it, AI gives you a better grasp of “map vs. territory”. Instead of just “the map is the equations, the territory is what I see”, you get “my mind is also a map, so where I see a pattern, maybe there is none”. (See confirmation bias.)
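
To make the inefficiency point concrete, here is a minimal sketch of a brute-force breadth-first planner (my own illustration; the toy +1/*2 state space is made up for the example, not something from the discussion above). Even in this tiny problem the frontier roughly doubles with each step of lookahead, which is exactly why simulating such a search consciously, state by state, is hopeless:

```python
from collections import deque

def plan(start, goal, successors):
    """Brute-force breadth-first planner: returns a shortest action
    sequence from start to goal. With branching factor b, the frontier
    can grow like b**depth, so 'running this in your head' blows up fast."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

# Toy example: reach 10 from 0 using the moves +1 and *2.
moves = lambda n: [("+1", n + 1), ("*2", n * 2)]
print(plan(0, 10, moves))  # ['+1', '+1', '*2', '+1', '*2']
```

Note how much of the work here (the `seen` set, the queue discipline) is pure bookkeeping that a brain does not natively expose, which is part of what makes the emulation so costly.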