Oh, I actually 70% agree with this. I think there’s an important distinction between legibility to laypeople vs legibility to other domain experts. Let me lay out my beliefs:
In the modern history of the fields you mentioned, more than 70% of discoveries are made by people trying to discover the thing, rather than serendipitously.
Other experts in the field, if truth-seeking, are able to understand the theory of change behind the research direction without investing huge amounts of time.
In most fields, experts and superforecasters informed by expert commentary will have fairly strong beliefs about which approaches to a problem will succeed. The person working on something will usually have less than 1 bit of advantage over the experts about whether their framework will succeed, unless they have private information (e.g. they already did the crucial experiment). This is the weakest belief and I could probably be convinced otherwise just by anecdotes.
The successful researchers might be confident they will succeed, but unsuccessful ones could be almost as confident on average. So it’s not that the research is illegible, it’s just genuinely hard to predict who will succeed.
People often work on different approaches to the problem even if they can predict which ones will work. This could be due to irrationality, other incentives, diminishing returns to each approach, comparative advantage, etc.
If research were illegible to other domain experts, I think you would not really get Kuhnian paradigms, which I am pretty confident exist. That said, paradigm shifts mostly come from the track record of an approach, so maybe this doesn't count as researchers having an inside view of others' work.
Thank you, Thomas. I believe we find ourselves in broad agreement.
The distinction you make between lay-legibility and expert-legibility is especially well-drawn.
One point: the confidence of researchers in their own approach may not be the right thing to look at. Perhaps a better measure is who can not only predict that their own approach will succeed but also explain in detail why other approaches won't work. Anecdotally, very successful researchers have a keen sense of what will work out and what won't; in private conversation many are willing to share detailed models of why other approaches will not work or are not as promising. I'd have to think about this more carefully, but anecdotally the most successful researchers have many bits of information over their competitors, not just one or two.
(Note that one bit of information means their entire advantage could be wiped out by the answer to a single Y/N question. Not impossible, but not typical.)
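For concreteness, one way to cash out the "bits of advantage" framing is as the log-odds gap between an insider's credence and the expert consensus. A minimal sketch under that assumption (the function name and the example numbers are my own illustration, not anything from the thread):

```python
from math import log2

def bits_of_advantage(p_insider: float, p_consensus: float) -> float:
    """Log-odds gap, in bits, between an insider's credence and the consensus.

    One simple way to operationalize 'bits of advantage': how many doublings
    of the odds the insider's private information is worth.
    """
    odds_insider = p_insider / (1 - p_insider)
    odds_consensus = p_consensus / (1 - p_consensus)
    return abs(log2(odds_insider / odds_consensus))

# Experts give an approach 50% odds; a researcher whose private evidence is
# worth a single 2:1 likelihood ratio lands at 2/3, i.e. exactly 1 bit.
print(bits_of_advantage(2 / 3, 0.5))    # 1.0
# A researcher at 95% against a 20% consensus has a much larger edge.
print(bits_of_advantage(0.95, 0.20))    # ~6.25
```

On this reading, "many bits" means the researcher's view could not be reproduced from the consensus plus the answers to one or two well-chosen Y/N questions.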