Generally, in such disagreements between the articulate and inarticulate part of yourself, either part could be right.
Your verbal-articulation part could be right and the gut wrong; the gut could be right and the verbal-articulation part wrong. And even if one is more right than the other, the more-wrong one might have seen something true that the other did not.
LLMs sometimes do better when they think through things in chain-of-thought; sometimes they do worse. Humans are the same.
Don’t try to crush one side down with the other. Try to see what’s going on.
(Not going to comment too much on the object-level issue about AI, but, uh, try to be aware of the very, very strong filtering and selection effects on what arguments you encounter about this. See this for instance.)