I am aware that my definition of Occam’s razor is not the “official” definition. However, it is the definition which I see used most often in discussions and arguments, which is why I chose it. The fact that this definition of Occam’s razor is common supports my claim that humans consider it a good heuristic.
Forgive me for my ignorance, as I have not studied Kolmogorov complexity in detail. As you suggest, it seems that human understanding of a “simple description” is not in line with Kolmogorov complexity.
I think the intention of my post may have been unclear. I am not trying to argue that natural language is a good way of measuring the complexity of statements. (I’m also not trying to argue that it’s bad.) My intention was merely to explore how humans understand the complexity of ideas, and to investigate how such judgements of complexity influence the way typical humans build models of the world.
The fact that human understanding of complexity is so far from Kolmogorov complexity indicates to me that if an AI were to model its environment using Kolmogorov complexity as its criterion for selecting models, the model it developed would differ from the models built by typical humans. My concern is that this disparity in understanding of the world would make it difficult for most humans to communicate with the AI.
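To make the concern concrete, here is a toy sketch of what selecting models by description length might look like. Kolmogorov complexity is uncomputable, so the sketch substitutes zlib-compressed size as a crude stand-in; the data, the candidate “models,” and the helper names (`desc_len`, `mdl_score`) are all invented for illustration.

```python
import random
import zlib

def desc_len(s: str) -> int:
    # Compressed size in bytes: a crude, computable stand-in for
    # Kolmogorov complexity (the true quantity is uncomputable).
    return len(zlib.compress(s.encode(), 9))

def mdl_score(model: str, residual: str) -> int:
    # Two-part cost: describe the model, then describe whatever the
    # model leaves unexplained. A selector keeps the lower total.
    return desc_len(model) + desc_len(residual)

random.seed(0)

# Toy observations: a simple repeating rule with 20 corrupted entries.
digits = [str(n % 10) for n in range(1000)]
flips = random.sample(range(1000), 20)
for i in flips:
    digits[i] = str((int(digits[i]) + 1) % 10)
observations = "".join(digits)

# Candidate A: the compact rule plus an explicit list of its exceptions.
rule = "d[n] = n mod 10"
exceptions = ",".join(f"{i}:{observations[i]}" for i in sorted(flips))

# Candidate B: no rule at all; the "model" is the raw observation log.
print(mdl_score(rule, exceptions))   # cost of rule + exceptions
print(mdl_score(observations, ""))   # cost of memorizing everything
```

Whichever candidate yields the smaller total is the one such a selector keeps, and my worry is precisely that the winner under this criterion need not be the model a typical human would find natural.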
> As you suggest, it seems that human understanding of a “simple description” is not in line with Kolmogorov complexity.
Rather than this, I’m suggesting that natural language is not in line with complexity of the “minimum description length” sort. Human understanding in general is actually pretty good at that kind of complexity: it’s good enough to intuit, with a little work, that gravity really is a simpler explanation than “intelligent falling,” and that the real world is simpler than a solipsism that just happens to replicate it. So although humans may treat verbal brevity as a good heuristic, they can still reason well about complexity even when the heuristic doesn’t apply.
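To illustrate the gap, consider that the two strings below have English descriptions of roughly equal length (“the digits 0–9, repeated” versus “ten thousand random digits”), yet very different description lengths in the MDL sense. This is only a sketch, with zlib-compressed size standing in for true minimum description length:

```python
import random
import zlib

def code_length(s: str) -> int:
    # Compressed size as a rough, computable proxy for description length.
    return len(zlib.compress(s.encode(), 9))

patterned = "0123456789" * 1000  # English: "the digits 0-9, repeated"
random.seed(0)
noisy = "".join(random.choice("0123456789") for _ in range(10000))
                                 # English: "ten thousand random digits"

print(code_length(patterned))    # small: the regularity compresses away
print(code_length(noisy))        # far larger: little structure to exploit
```

Both strings have equally short names in natural language, which is exactly the sense in which verbal brevity comes apart from description-length complexity.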