When you’re communicating with people who know more than you, you have two options. You can accept their greater knowledge, which leads you to speak more honestly about the pertinent topics. Or you can reject their credibility, claiming that they don’t really know more than you; many people who know less than both of you may then believe you over them.
A third option is to claim epistemic learned helplessness: you can believe that someone knows more than you, yet reject their claims because they have incentives to deceive you. It’s even possible to openly coordinate around this. It seems like something I’ve seen people do, maybe even frequently. I can’t think of a specific example, but one method would be to portray the more knowledgeable person as “using their power [in the form of knowledge] for evil”.
Tentatively:
Getting stuck while solving a problem should ideally trigger open curiosity. I was thinking about this in the context of solving a Project Euler problem (math problems that usually require some programming). Solving one often alternates between phases: you pick off some low-hanging fruit, then get stuck. The stuckness can be conceptual (you need to speed up your algorithm; you haven’t found any algorithm that works at all; you don’t understand the problem) or code-related (you have a natural-language sketch of the solution but no code; the only code you can think to write is really ugly; there is a bug).
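To make the “you need to speed up your algorithm” kind of stuckness concrete, here is a minimal sketch using the first Project Euler problem (sum the multiples of 3 or 5 below N) as a stand-in; the problem choice and code are illustrative, not the specific problem I had in mind. The brute-force loop is the low-hanging fruit, and the closed-form version is the conceptual jump you end up hunting for once N gets large enough that the loop is too slow.

```python
def sum_multiples_brute(n: int) -> int:
    """Low-hanging fruit: check every number below n. Fine for small n, too slow for huge n."""
    return sum(k for k in range(n) if k % 3 == 0 or k % 5 == 0)


def sum_multiples_fast(n: int) -> int:
    """The conceptual fix once the loop is too slow: sum each arithmetic series in O(1),
    using inclusion-exclusion to avoid double-counting multiples of 15."""
    def series_sum(d: int) -> int:
        m = (n - 1) // d          # how many multiples of d lie below n
        return d * m * (m + 1) // 2
    return series_sum(3) + series_sum(5) - series_sum(15)


# Both agree on the classic case (answer to Project Euler Problem 1).
assert sum_multiples_brute(1000) == sum_multiples_fast(1000) == 233168
```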
What I call “stuckness” perhaps often indicates that there is no clear path forward; if there were one, I would already be following it. Sometimes that should trigger taking a break to rest; other times it should trigger open curiosity about the problem. Even when I do stay openly curious, the trigger point is when I’m most likely to do something like get up from where I’m sitting.
A common failure mode is to stay actively (rather than openly) curious when stuck; that tends to go along with treating the situation as something it isn’t.