Endorsed. I think what you should do about deferral depends on what role you wish to play in the research community. Knowledge workers intending to make frontier progress should be especially skeptical of deferring to others on the topics they intend to specialise in. That may mean holding off on deferring across a wide range of topics, because curious scientists should keep a broad horizon early on. Deferring early could entrench habits of thought that are hard to override later (sorta like the curse of knowledge), and you might miss out on opportunities to productively diverge or even discover a flaw in the paradigm.
Explorers should mostly defer on value of information (VoI), not object-level beliefs. When someone I trust says they’re confident in some view that surprises me, I’m very reluctant to tweak my models to output what I believe they believe; instead I make a note to investigate what they’ve investigated, using my own judgment all the way through.
Yeah, VoI seems like a better place to defer. Another sort of general solution, which I find difficult but others might find workable, is to construct theories of other perspectives. That opens up almost unlimited space to defer: you can do something that looks like deferring, but is more precisely described as maintaining a bunch of mutually inconsistent theories in your head and deferring to people about what their theory is, rather than about what’s true. (I run into trouble here because I’m not so willing to adopt others’ languages if I don’t see how they’re using words consistently.)