This is a great intuition pump, thanks! It makes me appreciate just how weird it is, in a sense, that abstractions work at all. It seems like the universe could simply not have been constructed this way (though one could then argue that intelligence probably couldn’t exist in such chaotic universes, which is in itself interesting). This makes me wonder if there is a set of “natural abstractions” that are a property of the universe itself, not of whatever learning algorithm is used to pick up on them. Seems highly relevant to value learning and the like.
I wrote this post mainly as background for the sort of questions my research is focused on, in hopes that it would make it more obvious why the relevant hypotheses seem plausible at all. And this:
> This makes me wonder if there is a set of “natural abstractions” that are a property of the universe itself, not of whatever learning algorithm is used to pick up on them. Seems highly relevant to value learning and the like.
… is possibly the best two-sentence summary I have seen of exactly those hypotheses. You’ve perfectly hit the nail on the head.