Randomness in Science
I.
Humans have a randomness problem. We are bad at generating randomness as individuals (try to out-random a computer in Man vs. Machine) and in the aggregate—ask a group of people to choose a “random” number between 1 and 20 and the most common answer will be 17. We are also bad at detecting randomness; that is, we find patterns where there are none, a tendency known as patternicity—see also apophenia, pareidolia (no, the burn pattern in your toast that looks like Jesus is not a sign from the Holy Spirit), the clustering illusion, and the hot hand fallacy.
In addition to being “bad” at randomness, we also have an aversion to it. It’s not hard to see why from an evolutionary perspective – randomness is the antithesis of life’s imperative to minimize risk by controlling and predicting the environment. However, there are situations in which the best strategy is to utilize an element of randomness. In the modern world, we can precisely calculate when a particular decision might benefit from randomness and use computers to generate it (or at least pseudo-generate it), but of course our ancestors did not have this luxury. Nevertheless, cultures around the world have evolved certain practices that enabled them to harness the power of randomness. From The Secret of Our Success (2015):
“When hunting caribou, Naskapi foragers in Labrador, Canada, had to decide where to go. Common sense might lead one to go where one had success before or to where friends or neighbors recently spotted caribou.
However, this situation is like the Matching Pennies game. The caribou are mismatchers and the hunters are matchers. That is, hunters want to match the locations of caribou while caribou want to mismatch the hunters, to avoid being shot and eaten. If a hunter shows any bias to return to previous spots, where he or others have seen caribou, then the caribou can benefit (survive better) by avoiding those locations (where they have previously seen humans). Thus, the best hunting strategy requires randomizing.
Can cultural evolution compensate for our cognitive inadequacies? Traditionally, Naskapi hunters decided where to go to hunt using divination and believed that the shoulder bones of caribou could point the way to success. To start the ritual, the shoulder blade was heated over hot coals in a way that caused patterns of cracks and burnt spots to form. This patterning was then read as a kind of map, which was held in a pre-specified orientation. The cracking patterns were (probably) essentially random from the point of view of hunting locations, since the outcomes depended on myriad details about the bone, fire, ambient temperature, and heating process. Thus, these divination rituals may have provided a crude randomizing device that helped hunters avoid their own decision-making biases.”
As this passage demonstrates, there are some situations in which randomness can serve to compensate for biases in our decision making. Divination practices can provide a kind of metaphysical cover story for injecting chance into a strategic decision (e.g. the gods speak to us through the shoulder bones), yet it would seem that we have no such disguise in the modern secular world, and that any suggestion to use a randomness-based strategy is at an inherent disadvantage (i.e. randomness represents a “blind spot” in contemporary culture). Given that science consists of various activities in which the aim is to minimize randomness and unpredictability, we wonder if it is especially difficult to overcome randomness aversion when it comes to the organization and practice of science. This raises the question—could we improve science by exploring new ways to inject randomness into the research process?
II.
One application of randomness is the use of lotteries for grant funding. Numerous agencies are already experimenting with random allocation of funds (see “Science funders gamble on lotteries”). We won’t recapitulate the arguments for funding lotteries here (in brief: peer review of grants is biased, unreliable, and costly in time and effort for both reviewers and applicants), as these have already been discussed extensively in the literature (see “Mavericks and Lotteries” (Avin, 2019) for a recent comprehensive review). Suffice it to say that there is good reason to think the scientific community would benefit from further experimentation with models of random funding allocation.
Scientific publishing may also benefit from journals that use randomness as part of their peer review procedures, as current standard practices suffer from many of the same issues described above for grant applications. A 2009 study that modeled scientific practice and different publication selection strategies for journals found that, “Surprisingly, it appears that the best selection method for journals is to publish relatively few papers and to select those papers it publishes at random from the available ‘above threshold’ papers it receives” (Zollman, 2009). Such a threshold-plus-random-selection model could help overcome many of the bad incentives and norms inherent in academic “publish or perish” culture (see “The Natural Selection of Bad Science” (2016) for discussion of how the current publishing landscape harms the quality of research). Randomness can also be used by authors as a tool for protesting unjust practices and conventions in science, particularly those surrounding the assignment of credit for research activities. Penders and Shaw (2020) discuss the nature of civil disobedience in science and highlight various examples of deviant author-assignment strategies that involve randomness (e.g. flipping a coin, a brownie bake-off, a free-throw shooting contest, authorial order by height, utilizing random fluctuations in the euro/dollar exchange rate).
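To make the threshold-plus-random model concrete, here is a minimal sketch in Python (the paper names, referee scores, scoring scale, threshold, and slot count are all invented for illustration):

```python
import random

def select_for_publication(submissions, threshold, num_slots, rng=None):
    """Threshold-plus-lottery selection: referee scores gate entry to the
    pool, then winners are drawn uniformly at random from the qualified pool."""
    rng = rng or random.Random()
    qualified = [name for name, score in submissions if score >= threshold]
    if len(qualified) <= num_slots:
        return qualified
    return rng.sample(qualified, num_slots)

# Hypothetical referee scores on a 0-10 scale.
submissions = [("paper A", 8.1), ("paper B", 4.0), ("paper C", 7.5),
               ("paper D", 9.2), ("paper E", 6.9)]
accepted = select_for_publication(submissions, threshold=7.0,
                                  num_slots=2, rng=random.Random(0))
```

The key property is that every submission judged “good enough” has an equal chance, so reviewer effort goes into the coarse threshold judgment rather than into fine-grained (and, as the studies above suggest, unreliable) rankings.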
The scientific job market may suffer from many of the same issues that we see in funding and publishing—excessive competition that incentivizes individual scientists towards activities and practices that may ultimately be harmful for science as a whole (see “Competitive Science: Is Competition Ruining Science?”). Again, one solution may be to incorporate randomness into hiring decisions for scientific positions in academia and industry (e.g. a lottery amongst candidates that meet a certain threshold). To our knowledge, there are no examples of universities or private industry using such a method for hiring or promotion; however, there are modeling studies suggesting that random promotion can outperform merit-based promotion in some situations (Phelan & Lin, 2001; Pluchino et al., 2011a, 2011b; Pluchino et al., 2018).
The history of science is filled with serendipitous events that led to significant breakthroughs (penicillin, radioactivity, and the cosmic microwave background being some of the more famous examples). This suggests serendipity itself as a topic for further research; perhaps we can move beyond the anecdotal level and gain a more systematic understanding of how chance events drive discovery, thereby enabling a form of “serendipity engineering” in the sciences. Social scientist Ohid Yaqub has launched such a research project (see “Serendipity: Towards a taxonomy and a theory”, 2018).
“Starting in the archive of US sociologist Robert K. Merton, Yaqub gathered hundreds of historical examples. After studying these, he says, he has pinned down some of the mechanisms by which serendipity comes about. These include astute observation, errors and “controlled sloppiness” (which lets unexpected events occur while still allowing their source to be traced). He also identifies how the collaborative action of networks of people can generate serendipitous findings.”
(“The Serendipity Test”)
III.
Why does chance seem to play such a role in many significant scientific discoveries? In the previous sections, we suggested that randomness can be used to overcome biases and flaws in judgment; however, the exact nature of these biases was not fully spelled out. While randomness can compensate for the personal biases (e.g. a bias towards one’s own topic of study, or a racial bias) of any individual decision maker (a grant reviewer, journal editor, head of department), the reason that “happy accidents” are involved in so many scientific discoveries is that randomness serves as a corrective for a more global conservative bias inherent in the structures of organized science. Numerous empirical and simulation studies document this conservative bias; below we quote from two such studies in order to better elucidate its nature.
“By analyzing millions of biomedical articles published over 30 years, we find that biomedical scientists pursue conservative research strategies exploring the local neighborhood of central, important molecules. Although such strategies probably serve scientific careers, we show that they slow scientific advance, especially in mature fields, where more risk and less redundant experimentation would accelerate discovery of the network.”
— Rzhetsky et al. (2015)

“Those who comment on modern scientific institutions are often quick to praise institutional structures that leave scientists to their own devices. These comments reveal an underlying presumption that scientists do best when left alone—when they operate in what we call the ‘scientific state of nature’. Through computer simulation, we challenge this presumption by illustrating an inefficiency that arises in the scientific state of nature. This inefficiency suggests that one cannot simply presume that science is most efficient when institutional control is absent. In some situations, actively encouraging unpopular, risky science would improve scientific outcomes.”
— Kummerfeld and Zollman (2016)
In addition, metascientific research demonstrates that the most novel and impactful research often results from “unusual individual scientist backgrounds, atypical collaborations, or unexpected expeditions where scientists and inventors reach across disciplines and address problems framed by a distant audience” (Shi and Evans, 2020; see also Uzzi et al., 2013, and Lin et al., 2021). Overall, this paints the picture of a scientific community in need of more novelty and greater risk-taking. One way to achieve this goal is to modify the incentives and norms of modern science; however, systemic change of this kind is often difficult and only attainable in the long term. Alternatively, we may seek to increase randomness in all its forms, as this will increase the generation of the unusual and atypical, thereby reducing redundancy and shifting the scientific community towards riskier research strategies.
At a collective level, enhancing randomness means (amongst other things) a greater number of chance encounters between scientists. In a post-COVID-19 world where remote work and virtual conferences become more common, we should be concerned that serendipitous meetings between researchers will become fewer and farther between. To some degree, we may be able to compensate for this reduction in collective randomness by increasing the role of chance in the lives of individual scientists. One method for doing so would be to take a greater interest in dreams. A recent hypothesis (the overfitted brain hypothesis) suggests that dreams are essentially random combinations of our daily experience:
“Research on artificial neural networks has shown that during learning, such networks face a ubiquitous problem: that of overfitting to a particular dataset, which leads to failures in generalization and therefore performance on novel datasets. Notably, the techniques that researchers employ to rescue overfitted artificial neural networks generally involve sampling from an out-of-distribution or randomized dataset. The overfitted brain hypothesis is that the brains of organisms similarly face the challenge of fitting too well to their daily distribution of stimuli, causing overfitting and poor generalization. By hallucinating out-of-distribution sensory stimulation every night, the brain is able to rescue the generalizability of its perceptual and cognitive abilities and increase task performance.” (Hoel, 2021)
Anecdotal evidence for the effectiveness of dreams as a tool for scientific creativity comes from the well-known examples of discoveries made in dreams, such as the benzene ring (Kekulé), the structure of the atom (Bohr), and the periodic table of elements (Mendeleev). To improve recall of their dreams and thereby reap the benefits of increased randomness in their lives, we recommend that scientists begin a dream journaling practice. Lastly, we recommend (in all sincerity) that scientists adopt divination practices such as the burning of caribou shoulder blades, the reading of entrails, bird augury, or the I Ching. For a full list of possibilities, see the “Methods of Divination” Wikipedia page.
Postscript:
Between 1997 and 2012, Jussieu’s campus in Paris’s Left Bank (Paris Jussieu — the largest medical research complex in France) reshuffled its labs’ locations five times due to ongoing asbestos removal, giving the faculty no control and little warning of where they would end up. An MIT professor named Christian Catalini later catalogued the 55,000 scientific papers they published during this time and mapped the authors’ locations across more than a hundred labs. Instead of having their life’s work disrupted, Jussieu’s researchers were three to five times more likely to collaborate with their new odd-couple neighbors than their old colleagues, did so nearly four to six times more often, and produced better work because of it (as measured by citations)…Even an institution like Paris Jussieu, which presumably places a premium on collaboration across disciplines, couldn’t do better than scattering its labs at random. (“Engineering Serendipity”)
This essay was originally posted at Secretum Secretorum and is also one of the example articles for Seeds of Science (PDF found here). Seeds of Science is a new OA and fee-free scientific journal that publishes speculative or non-traditional articles with peer review by our community of “gardeners”. If you like weird science writing like this then please consider joining us as a gardener or becoming an author (See “How to Publish”). As a gardener, we will email you submitted manuscripts and you can vote/comment at your leisure. It is free to join and participation is entirely at will — for more information and sign up instructions visit the gardeners page on our website.
Works Cited
Avin, S. (2019). Mavericks and lotteries. Studies in History and Philosophy of Science Part A, 76, 13-23.
Fang, F. C., & Casadevall, A. (2015). Competitive science: Is competition ruining science? Infection and Immunity, 83(4), 1229-1233.
Henrich, J. (2015). The secret of our success. Princeton University Press.
Hoel, E. (2021). The overfitted brain: Dreams evolved to assist generalization. Patterns, 2(5), 100244.
Kummerfeld, E., & Zollman, K. J. (2016). Conservatism and the scientific state of nature. The British Journal for the Philosophy of Science, 67(4), 1057-1076.
Lin, Y., Evans, J. A., & Wu, L. (2021). Novelty, Disruption, and the Evolution of Scientific Impact. arXiv preprint arXiv:2103.03398.
Penders, B., & Shaw, D. M. (2020). Civil disobedience in scientific authorship: Resistance and insubordination in science. Accountability in research, 27(6), 347-371.
Phelan, S. E., & Lin, Z. (2001). Promotion systems and organizational performance: A contingency model. Computational & Mathematical Organization Theory, 7(3), 207-232.
Pluchino, A., Biondo, A. E., & Rapisarda, A. (2018). Talent versus luck: The role of randomness in success and failure. Advances in Complex systems, 21(03n04), 1850014.
Pluchino, A., Garofalo, C., Rapisarda, A., Spagano, S., & Caserta, M. (2011a). Accidental politicians: How randomly selected legislators can improve parliament efficiency. Physica A: Statistical Mechanics and Its Applications, 390(21-22), 3944-3954.
Pluchino, A., Rapisarda, A., & Garofalo, C. (2011b). Efficient promotion strategies in hierarchical organizations. Physica A: Statistical Mechanics and its Applications, 390(20), 3496-3511.
Rzhetsky, A., Foster, J. G., Foster, I. T., & Evans, J. A. (2015). Choosing experiments to accelerate collective discovery. Proceedings of the National Academy of Sciences, 112(47), 14569-14574.
Shi, F., & Evans, J. (2020). Science and technology advance through surprise. arXiv preprint arXiv:1910.09370.
Uzzi, B., Mukherjee, S., Stringer, M., & Jones, B. (2013). Atypical combinations and scientific impact. Science, 342(6157), 468-472.
Yaqub, O. (2018). Serendipity: Towards a taxonomy and a theory. Research Policy, 47(1), 169-179.
Zollman, K. J. (2009). Optimal publishing strategies. Episteme, 6(2), 185-199.
It should be noted that randomness can very easily be overdone. Randomness is good for checking your work (finding out what biases you really do have), and sometimes for tricking your foes (such as the animals trying to avoid hunting grounds), but it is easy to lean on it at the expense of the usually far superior algorithm you naturally formed just by looking at the data in the first place.
I’m not in favor of randomness in most contexts, and I suspect that your suggested use cases for randomness in science would themselves be highly easy to game. If it is a lottery, just put together something slapdash and get in as many entries as possible, in as many lotteries as possible. The cursory review you would have to have for the benefit you spoke of (reducing the amount of review work) would likely be very easy to game. It seems likely that everyone would have a perverse incentive to not do well if this was a significant fraction of the money available or of the slots in prestigious journals.
Rather than that, you could take random subsets of the submitted papers you might publish or grant applications you might pay, and extremely thoroughly vet them, still picking the best from the subset. That would be much harder to game. Ideally, you would do this as a second chance from amongst those that didn’t pass the first time (and directly score the results [again?], both for the evaluation, and to check how you did the first time). (Did you miss a gem? Almost certainly.)
What you have here is a search problem and an evaluation problem. Reducing either of them to the other is not appropriate, or likely to have the desired results. I’m effectively suggesting supplementing a normal search and evaluation with guess-and-check, while your suggestion appears to be just guess. (Checking is usually much easier than finding, and skimping on it is unwise.)
Your point is well taken, and we should definitely keep in mind that randomness can also create perverse incentives and can easily be overdone. However, I would argue that there is virtually no randomness in science now, and there is ample evidence that we are bad at evaluating grants, papers, and applicants, and are generally overly conservative when we do evaluate (see Conservatism in Science for a review). In rare cases, I might advocate for pure randomness but, like you suggest, I think some kind of mixed strategy is probably the way to go in most cases. For example, with grants we can imagine a strategy where there is a quick review to rule out obvious nonsense, and grants are then placed into high-quality and low-quality tiers, with lottery slots allocated to those categories accordingly (you could also just limit people to one submission to get rid of the spamming problem).
A few examples of us being bad at evaluating things:
“I just did a retrospective analysis of 2014 NeurIPS … There was no correlation between reviewer quality scores and paper’s eventual impact.”
https://twitter.com/lawrennd/status/1406380063596089346
“Analysing data from 4,000 social science grant proposals and 15,000 reviews, this paper illustrates how the peer-review scores assigned by different reviewers have only low levels of consistency (a correlation between reviewer scores of only 0.2).” From: “Are peer-reviews of grant proposals reliable? An analysis of Economic and Social Research Council (ESRC) funding applications”
For hiring decisions, it might be even worse—is this person truly a better scientist or did they just happen to land in a more productive research lab for their PhD? Will this person make a better graduate student or did they just go to a better undergraduate college? I would advocate for a threshold (we are fine with hiring any of these people) and then randomness in some hiring situations.
One good question would be what kinds of randomness are useful. “Greatness cannot be planned”, but there’s still a lot of different plans going on. Obviously, there are countless ways to ‘add randomness to science’, differing in how much randomness (both in distribution and size of said distribution—do we want ‘randomness’ which looks more like normal noise or is heavy tails key?), what level the randomness is applied at (inside an experiment, the experiment, the scientist, theories of subject, the subject, individual labs or colleges, community, country...?), how many times it’s applied and so on. In evolutionary computation, for example, how and how much randomness you use is practically the entire area of research: how much do you mutate individuals, how many populations, how do you intermix the populations, how do you reintroduce old mutants, how hard do you select post-mutation, and if you don’t tune this right, it may not work at all, while a well-tuned solution will rapidly home in on a diversity of excellent results. We often observe that the solutions found by genetic algorithms, or NNs, or cats, are strange, perverse, unexpected, and trigger a reaction of ‘how did it come up with that?‘; one reason is just that they are very thorough about exploring the possibility space, where a human would have long since gotten bored, said “this is stupid”, and moved on—it was stupid, but it wasn’t stupid enough, and if they had persisted long enough, it would’ve wrapped around from idiocy to genius. Our satisficing nature undermines our search for truly novel solutions; we aren’t inhumanly patient enough to find them. 
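For readers unfamiliar with evolutionary computation, the knobs mentioned above can be seen even in a minimal genetic algorithm. This sketch (all parameters chosen arbitrarily for illustration) exposes the mutation rate, population size, and selection strength as the tunable sources of randomness:

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, mutation_rate=0.05,
           generations=200, rng=None):
    """Minimal genetic algorithm on bit-strings. The mutation_rate knob is
    the 'how much randomness' dial: too low and the search stalls on local
    optima, too high and it never converges."""
    rng = rng or random.Random()
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]       # truncation selection
        children = [[1 - g if rng.random() < mutation_rate else g
                     for g in p]            # per-bit mutation
                    for p in parents]
        pop = parents + children            # parents survive (elitism)
    return max(pop, key=fitness)

# One-max toy problem: fitness is just the number of 1-bits.
best = evolve(sum, rng=random.Random(42))
```

Each design choice here (how hard to select, whether parents survive, how mutation is distributed) is one of the randomness-tuning questions raised above, scaled down to a toy.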
There are also many examples of people solving problems they didn’t know were supposed to be hard, like the famous Dantzig one, but it’s been noted that just knowing that a problem has been solved is sometimes enough to trigger a new solution (eg the critical mass of an atomic bomb—the Nazi scientists ‘knew’ it was big, but once they heard about Hiroshima, they were immediately able to fix their mistake; Chollet claims that in Kaggle competitions, merely seeing a competitor jump is enough to trigger a wave of improvements, even without knowing anything else). The weird part about this trick is, as Manuel Blum notes, “you can always give it to yourself”, as a cheap motivational hack well worth one’s while… so why don’t we?
This all sounds like classic explore vs exploit territory: most scientists are doing mostly just epsilon-greedy-style exploration where one knob is, fairly arbitrarily, tweaked at random, whereas a lot of progress comes from bold giant leaps into the unknown by a marginal thinker or theory. ‘Deep exploration’ to borrow a DRL term: not jittering one action at a time inside episodes, but constructing an agent with a ‘hypothesis’ about the environment, and letting it explore deeply to the end of the game, possibly discovering something totally new. Tweaking a good strategy usually produces a worse strategy; and averaging two good strategies, a horrible strategy—like tossing a hot grilled steak and a scoop of ice cream into a blender, two delicious flavors that decidedly do not go great together. (We probably don’t want to randomize scientists’ brains so that some are convinced that the earth is flat: that’s too random. It has to be more targeted. Imagine if you could copy Einstein and brainwash each copy: one copy is utterly irrationally convinced that the ether exists, and the other is equally fanatically convinced that it doesn’t exist; send them off for a decade, then force them into an adversarial collaboration where they generate their best predictions and a decisive experiment, and the physics community evaluates the results. And you could do this for every research topic. Things might go a lot faster!)
https://www.gwern.net/docs/reinforcement-learning/exploration/index
https://www.gwern.net/notes/Small-groups
https://www.gwern.net/Timing
https://www.gwern.net/Backstop#internet-community-design
https://www.gwern.net/reviews/Bakewell#social-contagion
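The contrast between shallow and deep exploration can be illustrated with the standard multi-armed bandit setting. This sketch (arm payoffs and parameters are invented for illustration) implements the epsilon-greedy rule described above, which jitters one action at a time rather than committing to a long-horizon hypothesis:

```python
import random

def epsilon_greedy(arm_means, epsilon=0.1, steps=5000, rng=None):
    """Epsilon-greedy bandit: exploit the best-known arm, but with
    probability epsilon take a single random 'jittered' action, the
    shallow, one-knob-at-a-time exploration described above."""
    rng = rng or random.Random()
    n = len(arm_means)
    counts = [0] * n
    values = [0.0] * n  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                        # explore
        else:
            arm = max(range(n), key=lambda a: values[a])  # exploit
        reward = rng.gauss(arm_means[arm], 1.0)           # noisy payoff
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = epsilon_greedy([0.2, 0.5, 0.9], rng=random.Random(1))
```

Note that this strategy never constructs or tests a model of the environment; it only perturbs its current behavior, which is precisely the limitation the “deep exploration” framing points at.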
Do you have any specific examples in mind here that you are willing to share? None are coming to mind off the top of my head and I’d love to have some examples for future reference.
https://www.gwern.net/Tanks#alternative-examples wasn’t really intended to compile funny cat stories, but should help you out in terms of perverse creativity like the famous radio circuits.
Thanks
Ha, I like the Einstein example! I think about the “bold leaps” thing a lot—we may be in a kind of “epistemic hell” with respect to certain ideas/theories, i.e. all small steps in that direction will seem completely false/irrational (the valley between us and the next peak is deep and wide). It may not be a perfect fit, but I think the problem of inheritance as you describe it in the Bakewell article works as an example here. Heredity was much more complex than we thought, and the problem was complicated by the fact that we had lots of wrong but vaguely reasonable ideas that came from essentially mythical figures like Aristotle. The idea that we should study a very simple system and collect huge amounts of data until a pattern emerges, and then go from there instead of armchair theorizing, was kind of a crazy idea, which is why a monk was the one to do it and no one realized how important it was until 40 years later.
The question is how do we create individuals that are capable of making huge jumps in knowledge space and environments that encourage them to do so. Anything that sounds super reasonable is probably not radical enough (which is why this is so difficult). Like you say, it can’t be too crazy, but we need people who will go incredibly far in one direction while starting with a premise that is highly speculative but not outright wrong. One example might be panpsychism—we need an Einstein who takes panpsychism as brute fact and then attempts to reconstruct physics from there. My own wild offering is that ideas are alive, not in the trivial sense of a meme, but as complex spatiotemporal organisms, or maybe they are endosymbionts that are made of consciousness in the same way we are made of matter (see Ideas are Alive and You are Dead). Before the microscope we couldn’t really conceive how a life form could be that small, maybe there is something like that going on here as well and new tools/theories will lead to the discovery of an entirely new domain of life. Obviously this is crazy but maybe this is an example of the general flavor of crazy we need to explore.
One reason that people might persist in something way past boredom or reasonable justification is religious faith, or some kind of irrational conviction arising from a spiritual experience. From a different angle, Tyler Cowen also offers some thoughts on why the important thinkers of the future will be religious.
I don’t think Mendel was particularly inspired by his religious faith to study heredity (I might be wrong), but it certainly didn’t stop him, and in the broad sense it enabled him to be an outsider who could dedicate extended study to something seemingly trivial. As you pointed out, being an outsider is crucial if someone is to take these kinds of bold leaps. Among other things, being an insider makes it harder to get past what you described at the end of the Origins of Innovation article.
This is the fundamental reasoning behind an article I wrote that was recently published in New Ideas in Psychology – “Amateur hour: Improving knowledge diversity in psychological and behavioral science by harnessing contributions from amateurs” (author access link). Amateurs can think and do research in ways that professionals can’t by virtue of not facing the incentives and constraints that come with having a career in academia. We identify six “blind spots” in academia that amateurs might focus on: long-term research, interdisciplinary research, speculative ideas, uncommon or taboo topics, basic observational research, and aimless projects.
I actually just posted about the article here because we mention LessWrong as an example of a community where amateurs make novel research contributions in psychology – “LessWrong discussed in New Ideas in Psychology article”.
So if I had to guess – the next Darwin/Einstein/Newton will be an amateur/outsider, will be religious or for some other reason have some weird idea that they pursue to the extreme, and will have some kind of life circumstance that allows them to do this (maybe, like Darwin, they come from money).
I also touch on this theme in my article “The Myth of the Myth of the Lone Genius”. Briefly, we have put too much cultural emphasis in science on incrementalism, on standing on the shoulders of giants. Sure, most discoveries come from armies of scientists making small contributions, but we need to also cultivate the belief that you can make a radical discovery by yourself if you try really, really hard. I also quote you at the beginning of the article.
I believe you do make one substantial error in this post. It isn’t that academics can’t do it; it’s that they won’t. If you say can’t, you are inherently supposing the incentives can’t be changed, but the structure of these incentives is not fixed as it is now. They can change, and they will change, though likely not in a useful way anytime soon.
I’m a little confused by what you are referring to here so if you are willing to spell it out I would appreciate it but no worries either way. Many very fascinating ideas in your other comment, I’ll try to respond in a day or two.
I admit that the details of how science works these days is far from my area of expertise. I am neither in science, nor a dedicated layman. I informally experiment with things all the time (as do most intellectual types, I imagine), but not in a rigorous way.
I agree that people tend to be bad at evaluating things, but it isn’t just biased thinking; there is true randomness in a number of the decisions that go into an evaluation. Both bias and randomness are noise in the signal. I don’t believe impact is a good metric for actual quality (widely cited does not mean that each of those references was actually valuable), though I don’t have something better to replace it with. As far as inter-rater reliability goes, 0.2 does seem quite low, but I’m sure it could be substantially improved (perhaps with teams of professional reviewers instead of simply other scientists in the field? That does, of course, have its own sources of bias, but you can use both).
I don’t think you can eliminate the spamming problem by only allowing a single entry. People can enter as many different lotteries as they like if the lotteries become popular, with minimal effort, and you wouldn’t want to rule out a person participating in multiple sequential lotteries of yours (unless you knew they were making garbage proposals, which the limited review would be much less likely to catch).
Sometimes random noise is good, such as in simulated annealing (is simulated annealing widely known?), but they make sure to tamp down the noise quite a bit before doing most of the search for solutions. This is precisely what the solution I suggested would do if you still want randomness. Additionally, this search could be run through a number of times by separate processes, to have greater chance of finding the signal while still using the noise.
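For readers who haven’t encountered it, simulated annealing is exactly the “noisy search that tamps down the noise” described above. A minimal sketch (the objective function, step size, and cooling schedule are invented for illustration):

```python
import math
import random

def simulated_annealing(f, x0, t0=2.0, cooling=0.995, steps=3000, rng=None):
    """Minimize f by a random walk that accepts uphill moves with
    probability exp(-delta/T); the temperature T shrinks over time,
    so the noise is tamped down before the fine-grained search."""
    rng = rng or random.Random()
    x, t = x0, t0
    best = x
    for _ in range(steps):
        candidate = x + rng.uniform(-1.0, 1.0)   # random local proposal
        delta = f(candidate) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate                        # accept (maybe uphill)
        if f(x) < f(best):
            best = x                             # track best-so-far
        t *= cooling                             # cool: less randomness
    return best

# A bumpy 1-D objective whose global minimum sits near x = -0.3.
bumpy = lambda x: x * x + 3 * math.sin(5 * x)
best = simulated_annealing(bumpy, x0=8.0, rng=random.Random(7))
```

The early high-temperature phase hops freely between local wells; as the temperature drops, the walk settles into the best basin it has found, which is the “use the noise, then tamp it down” pattern the comment describes.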
Another thing that could be borrowed from computer science for science in general is the idea of a depth-first search. This is what I think you are really looking for here. In a depth-first search adapted to science, a scientist would follow their ideas to their conclusions, again and again, before looking around. This would likely ensure that they were well off the beaten path, but could be done rigorously, doing good science along the way. It would likely result in fewer papers, but more impactful ones. This is what many scientists did in the old days, back when scientific progress was much more individual and progress was faster, but things like ‘publish or perish’ are not very compatible with it.
The strengths of going depth-first are twofold. One, original work would happen by default. Two, there would be much less that each individual scientist would have to keep in mind to advance the state of the art, much like the advantage of ridiculously narrow sub-specialties, but without the narrow focus being so insular. There is simply too much to process to advance the state of the art, and that, I suspect, is largely why people don’t.
The disadvantages are simple as well. First, a large number of scientists would end up duplicating each other’s work unintentionally . . . but that isn’t a big problem (built-in replication, though very inefficient). Second, and much more importantly, a very large fraction of scientists might never actually contribute something of even the slightest value. Some would say that is true now, though I think their contributions just tend to be small. (Depth-first search will miss a great many things that are just slightly off the path taken. A slight variation, depth-limited search, would do better at that: at a preset depth, you switch to checking out the other possibilities, starting as far down along the path as is new. This is usually implemented recursively. It can be improved further with iterative deepening search, which raises the depth limit whenever the broader search fails.) Third, it would require a change in culture (back to valorizing the lone genius, or the small team).
For hiring, I don’t think that the prestige of the school/lab is a good signal; the quality of their theories and experimental designs is a better one. I have heard the idea that the best way to determine whether someone can do a job is to have them do it. You could have them explain one of their theories, and explain how they would test one of yours. If it sounds good, let them try (perhaps with a shorter-term contract that can become a long-term one). I’ve never hired someone, so I don’t have too much to say about whether my ideas there would be useful.
You are right about the use of impact as a metric being far from perfect, and I think both of those sources probably oversell how poor scientific evaluation is in general. Part of the problem is that people are not incentivized to really care that much, and they don’t specialize in grant/paper evaluation. The idea of having “professional reviewers” is interesting, but I’m not sure how practically achievable it is.
I hadn’t heard of the idea of depth-first search, but it is exactly what I am talking about and you explained it very well. Thank you for sharing.
Loved this post! I’ve been thinking about some tangential ideas lately but probably won’t end up writing something myself. Here are a few other ways I’ve found that randomness can benefit science:
Randomness may result in faster algorithms for some computations
I don’t (yet!) have the background to understand the mathematics behind this Quanta article, but apparently randomness can provide algorithmic speedups for computations as orderly/un-random-seeming as linear systems. I don’t think it has been proven that the current fastest algorithm for this computation (which uses randomness in its solution) is the best possible, but this result seems to point in that direction.
Also consider Shor’s algorithm’s speedup on factoring integers into primes, which relies on inherently random quantum measurement.
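A small, self-contained classic in this vein (my example, not from the article) is Freivalds’ algorithm: it checks a claimed matrix product A·B = C using random vectors at O(n²) cost per trial, versus the roughly O(n³) cost of recomputing the product outright:

```python
import random

def freivalds(A, B, C, trials=10):
    """Probabilistically verify that A times B equals C.
    Each trial costs O(n^2); an incorrect C survives any single
    trial with probability at most 1/2."""
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # If A(Br) != Cr for some r, then AB != C with certainty.
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False
    return True  # probably correct

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]      # the true product
C_bad = [[19, 22], [43, 51]]  # wrong in one entry
```

A “False” answer is always trustworthy; a “True” answer is only probably right, with the error probability halving for every extra trial.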
Acting out of sync with respect to outside patterns is usually best (especially when you want accurate measurements of averages).
A medieval lord wants to know roughly how much food each of his serfs has throughout the year, to ensure they are neither overstuffed nor starving. However, he has so many serfs that he can only visit each one once a year. If he checked each serf on the same day each year, the springtime-checked serf would always have a lot less food than the fall-checked serf due to the time of harvest, even if there were no real difference between them. A much better approach would be to randomize which day he visits each serf every year.
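Here is a quick simulation of the lord’s problem (the seasonal curve and all the numbers are invented for illustration): every serf has the identical food supply, yet fixed-day visits report a badly biased average while random-day visits recover the truth.

```python
import math
import random

random.seed(0)

def food_stock(day):
    """Seasonal food supply: peaks after the fall harvest (~day 280),
    bottoms out in spring. Every serf is identical by construction."""
    return 100 + 50 * math.cos(2 * math.pi * (day - 280) / 365)

true_average = sum(food_stock(d) for d in range(365)) / 365

# Visiting 1000 serfs on the same spring day vs. on random days.
fixed_day_estimate = sum(food_stock(120) for _ in range(1000)) / 1000
random_day_estimate = sum(food_stock(random.randrange(365))
                          for _ in range(1000)) / 1000
# The fixed-day estimate sits far below the truth;
# the random-day estimate lands close to it.
```

The fixed schedule aliases the seasonal pattern into the measurement; randomizing the visit day decorrelates the sample from the cycle, so the seasonal swings average out.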
Also see periodical cicadas, whose nymphs spend 17 years underground before emerging, a prime-numbered cycle that keeps them out of sync with the life cycles of predators/competitors.
Averages of a large group’s guesses are much better than most individuals’ guesses
See this article about how accurately group averages guess the number of jellybeans in a jar and, more applicably, how accurately market predictions guess the real value of a company. Importantly, when the independence of individuals’ guesses was broken (i.e., the students had time to talk to each other about the number of jellybeans in the jar), the average guess became much worse.
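A toy simulation of the independent case (the jellybean count and noise level are invented): each guesser is individually far off, but their errors largely cancel in the average.

```python
import random

random.seed(0)
TRUE_COUNT = 850  # jellybeans in the jar (invented)

# 500 independent, individually noisy guesses.
guesses = [random.gauss(TRUE_COUNT, 300) for _ in range(500)]

crowd_error = abs(sum(guesses) / len(guesses) - TRUE_COUNT)
typical_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)
# Independent errors cancel: the crowd average beats the typical
# individual by roughly a factor of sqrt(n).
```

This is also why the talking students did worse: once everyone anchors on the same loud guess, the errors are correlated and no longer cancel, no matter how many guessers you average.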
See also the Group Rationality and Efficient Market Hypothesis tags on Less Wrong.
On a grammar note, this sentence isn’t finished in the article: “(e.g. the gods speak to us through the , yet...”
Thanks for catching the grammar mistake—fixed! These are interesting extensions of the basic idea of using more randomness in science, thanks for sharing. Your last point makes me think about the use of prediction markets to guess which studies will replicate, something that people have successfully done.
https://www.pnas.org/content/112/50/15343