One good question would be what kinds of randomness are useful. “Greatness cannot be planned”, but there are still a lot of different plans going on. Obviously, there are countless ways to ‘add randomness to science’, differing in how much randomness (both in distribution and size of said distribution—do we want ‘randomness’ which looks more like normal noise, or are heavy tails key?), what level the randomness is applied at (inside an experiment, the experiment, the scientist, theories of a subject, the subject, individual labs or colleges, community, country...?), how many times it’s applied, and so on. In evolutionary computation, for example, how and how much randomness you use is practically the entire area of research: how much do you mutate individuals, how many populations do you run, how do you intermix the populations, how do you reintroduce old mutants, how hard do you select post-mutation? If you don’t tune this right, it may not work at all, while a well-tuned search will rapidly home in on a diversity of excellent results. We often observe that the solutions found by genetic algorithms, or NNs, or cats, are strange, perverse, unexpected, and trigger a reaction of ‘how did it come up with that?’; one reason is just that they are very thorough about exploring the possibility space, where a human would have long since gotten bored, said “this is stupid”, and moved on—it was stupid, but it wasn’t stupid enough, and if they had persisted long enough, it would’ve wrapped around from idiocy to genius. Our satisficing nature undermines our search for truly novel solutions; we aren’t inhumanly patient enough to find them. There are also many examples of people solving problems they didn’t know were supposed to be hard, like the famous Dantzig one, but it’s been noted that just knowing that a problem has been solved is sometimes enough to trigger a new solution (eg the critical mass of an atomic bomb—the Nazi scientists ‘knew’ it was big, but once they heard about Hiroshima, they were immediately able to fix their mistake; Chollet claims that in Kaggle competitions, merely seeing a competitor’s score jump is enough to trigger a wave of improvements, even without knowing anything else). The weird part about this trick is, as Manuel Blum notes, “you can always give it to yourself”, as a cheap motivational hack well worth one’s while… so why don’t we?
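To make those tuning knobs concrete, here is a minimal genetic-algorithm sketch in Python. Every constant and the toy OneMax objective below are made-up illustrative choices, not anything from the comment:

```python
import random

# Hypothetical knobs: exactly the kind of randomness parameters described
# above. Mis-tune them and the search stalls or degenerates into noise.
POP_SIZE = 100        # how many individuals per generation
MUTATION_RATE = 0.02  # per-gene chance of flipping
TOURNAMENT_K = 3      # selection pressure: larger K selects harder
GENOME_LEN = 50
GENERATIONS = 200

def fitness(genome):
    # Toy objective ("OneMax"): count of 1-bits; stands in for any score.
    return sum(genome)

def mutate(genome):
    # Per-gene bit-flip mutation.
    return [g ^ 1 if random.random() < MUTATION_RATE else g for g in genome]

def tournament(pop):
    # Pick the fittest of K random individuals; K controls how hard we select.
    return max(random.sample(pop, TOURNAMENT_K), key=fitness)

def crossover(a, b):
    # Single-point crossover: intermixing within the population.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]
print(max(map(fitness, pop)), "out of", GENOME_LEN)
```

Push MUTATION_RATE too high and every generation is scrambled; push TOURNAMENT_K too high and the population converges prematurely on one mediocre lineage. That is the tuning problem in miniature.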
This all sounds like classic explore vs exploit territory: most scientists are mostly doing epsilon-greedy-style exploration, where one knob is, fairly arbitrarily, tweaked at random, whereas a lot of progress comes from bold giant leaps into the unknown by a marginal thinker or theory. ‘Deep exploration’, to borrow a DRL term: not jittering one action at a time inside episodes, but constructing an agent with a ‘hypothesis’ about the environment, and letting it explore deeply to the end of the game, possibly discovering something totally new. Tweaking a good strategy usually produces a worse strategy; and averaging two good strategies, a horrible strategy—like tossing a hot grilled steak and a scoop of ice cream into a blender, two delicious flavors that decidedly do not go great together. (We probably don’t want to randomize scientists’ brains so that some are convinced that the earth is flat: that’s too random. It has to be more targeted. Imagine if you could copy Einstein and brainwash each copy: one copy is utterly irrationally convinced that the ether exists, and the other is equally fanatically convinced that it doesn’t exist; send them off for a decade, then force them into an adversarial collaboration where they generate their best predictions and a decisive experiment, and the physics community evaluates the results. And you could do this for every research topic. Things might go a lot faster!)
https://www.gwern.net/docs/reinforcement-learning/exploration/index https://www.gwern.net/notes/Small-groups https://www.gwern.net/Timing https://www.gwern.net/Backstop#internet-community-design https://www.gwern.net/reviews/Bakewell#social-contagion
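The contrast can be sketched in a few lines of Python. Everything here (the arm count, the posterior_samples list, the horizon) is an invented illustration; posterior-sampling RL is one standard formalization of ‘deep exploration’, not necessarily the one meant above:

```python
import random

N_ARMS = 10

def epsilon_greedy_step(q_values, epsilon=0.1):
    # Shallow exploration: with probability epsilon, jitter a single
    # action at random; otherwise exploit the current value estimates.
    if random.random() < epsilon:
        return random.randrange(N_ARMS)
    return max(range(N_ARMS), key=lambda a: q_values[a])

def deep_exploration_episode(posterior_samples, horizon):
    # 'Deep exploration' in the posterior-sampling (Thompson) spirit:
    # sample ONE hypothesis about the world and act consistently with it
    # for the whole episode, reaching states per-step jitter never would.
    hypothesis = random.choice(posterior_samples)  # one sampled value model
    best_arm = max(range(N_ARMS), key=lambda a: hypothesis[a])
    return [best_arm] * horizon

q = [random.gauss(0, 1) for _ in range(N_ARMS)]
print(epsilon_greedy_step(q))                         # one jittered action
samples = [[random.gauss(0, 1) for _ in range(N_ARMS)] for _ in range(5)]
print(deep_exploration_episode(samples, horizon=20))  # one committed 'leap'
```

The epsilon-greedy agent is the knob-tweaking scientist; the posterior-sampling agent is the Einstein copy sent off for a decade with one fixed conviction.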
“We often observe that the solutions found by genetic algorithms, or NNs, or cats, are strange, perverse, unexpected, and trigger a reaction of ‘how did it come up with that?’; one reason is just that they are very thorough about exploring the possibility space”
Do you have any specific examples in mind here that you are willing to share? None are coming to mind off the top of my head and I’d love to have some examples for future reference.
https://www.gwern.net/Tanks#alternative-examples wasn’t really intended to compile funny cat stories, but should help you out in terms of perverse creativity like the famous radio circuits.
Thanks
Ha, I like the Einstein example! I think about the “bold leaps” thing a lot—we may be in a kind of “epistemic hell” with respect to certain ideas/theories, i.e. all small steps in that direction will seem completely false/irrational (the valley between us and the next peak is deep and wide). Maybe not a perfect example, but I think the problem of inheritance, as you describe in the Bakewell article, fits here. Heredity was much more complex than we thought, and the problem was complicated by the fact that we had lots of wrong but vaguely reasonable ideas that came from essentially mythical figures like Aristotle. The idea that we should study a very simple system, collect huge amounts of data until a pattern emerges, and only then go from there instead of armchair theorizing was kind of a crazy idea, which is why a monk was the one to do it and why no one realized how important it was until 40 years later.
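The “deep and wide valley” picture can be made literal with a toy search landscape. The code below is my own made-up illustration, not anything from the discussion: a greedy hill-climber taking small steps converges to the nearby low peak and never crosses the valley, while one allowed occasional huge jumps can land in the basin of the far, higher peak:

```python
import random

def fitness(x):
    # Toy landscape: a modest local peak at x=0 and a much higher peak
    # at x=10, separated by a deep, wide valley ("epistemic hell").
    return max(5 - x**2, 20 - (x - 10)**2)

def hill_climb(x, step, iters=10_000):
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) > fitness(x):  # greedy: accept only improvements
            x = candidate
    return x

random.seed(0)
print(hill_climb(0.0, step=0.5))  # small steps: stuck near the local peak at 0
print(hill_climb(0.0, step=8.0))  # bold leaps: crosses into the far basin at 10
```

Every small step off the local peak looks strictly worse, so the greedy climber rejects it; only a jump big enough to clear the valley in one go ever gets accepted.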
The question is how we create individuals that are capable of making huge jumps in knowledge space, and environments that encourage them to do so. Anything that sounds super reasonable is probably not radical enough (which is why this is so difficult). Like you say, it can’t be too crazy, but we need people who will go incredibly far in one direction while starting from a premise that is highly speculative but not outright wrong. One example might be panpsychism—we need an Einstein who takes panpsychism as brute fact and then attempts to reconstruct physics from there. My own wild offering is that ideas are alive, not in the trivial sense of a meme, but as complex spatiotemporal organisms, or maybe as endosymbionts that are made of consciousness in the same way we are made of matter (see “Ideas are Alive and You are Dead”). Before the microscope we couldn’t really conceive how a life form could be that small; maybe there is something like that going on here as well, and new tools/theories will lead to the discovery of an entirely new domain of life. Obviously this is crazy, but maybe it is an example of the general flavor of crazy we need to explore.
…one reason is just that they are very thorough about exploring the possibility space, where a human would have long since gotten bored, said “this is stupid”, and moved on—it was stupid, but it wasn’t stupid enough, and if they had persisted long enough, it would’ve wrapped around from idiocy to genius. Our satisficing nature undermines our search for truly novel solutions; we aren’t inhumanly patient enough to find them.
One reason that people might persist in something way past boredom or reasonable justification is religious faith or some kind of irrational conviction arising from a spiritual experience. From a different angle, Tyler Cowen also offers some thoughts on why the important thinkers of the future will be religious:
Third, religious thinkers arguably have more degrees of freedom. I don’t mean to hurt anybody’s feelings here, but…how shall I put it? The claims of the religions are not so closely tied to the experimental method and the randomized control trial. (Narrator: “Neither are the secular claims!”) It would be too harsh to say “they can just make stuff up,” but…arguably there are fewer constraints. That might lead to more gross errors and fabrications in the distribution as a whole, but also more creativity in the positive direction. And right now we seem pretty hungry for some breaks in the previous debates, even if not all of those breaks will be for the better.
I don’t think Mendel was particularly inspired by his religious faith to study heredity (I might be wrong), but it certainly didn’t stop him, and in a broad sense it enabled him to be an outsider who could dedicate extended study to something seemingly trivial. As you pointed towards, being an outsider is crucial if someone is to take these kinds of bold leaps. Among other things, being an insider makes it harder to get past what you described at the end of the Origins of Innovation article:
Perhaps there is some sort of psychological barrier, where the mind flinches at any suggestion bubbling up from the subconscious that conflicts with age-old tradition or with higher-status figures. Should any new ideas still manage to come up, they are suppressed; “don’t rock the boat”, don’t stand out (“the innovator has for enemies all those who have done well under the old conditions”)
This is the fundamental reasoning behind an article I wrote that was recently published in New Ideas in Psychology – “Amateur hour: Improving knowledge diversity in psychological and behavioral science by harnessing contributions from amateurs” (author access link). Amateurs can think and do research in ways that professionals can’t, by virtue of not facing the incentives and constraints that come with having a career in academia. We identify six “blind spots” in academia that amateurs might focus on: long-term research, interdisciplinary research, speculative research, uncommon or taboo topics, basic observational research, and aimless projects. This led us to write:
Taken together, our discussion of blind spots highlights one overarching direction in “research-space” that may be especially promising: long, aimless, speculative, and interdisciplinary research on uncommon or taboo subjects. Out of all amateur contributions to sciences so far, Darwin’s achievements may be the primary exemplar of this type of endeavor. As aforementioned, at the time of his departure on the HMS Beagle in 1831 he was an independent scientist—a 22-year-old Cambridge graduate with no advanced publications who had to pay his own way on the voyage (Bowlby, 1990; Keynes & Darwin, 2001). Darwin’s work on evolution certainly took a long time to develop (the Beagle’s voyage took 5 years and he did not publish On the Origin of Species until 23 years after he returned). It was aimless in the sense that he did not set out from the beginning to develop a theory of evolution. His work was highly interdisciplinary (Darwin drew on numerous fields within the biological sciences in addition to geology and economics), was the culmination of a huge amount of basic observational work, and was not necessarily an experimental contribution (though he did make those as well), but primarily theoretical (and sometimes more speculative) in nature. Darwin’s theories were taboo in the sense that they went against the prevailing theological ideas of the time and caused significant controversy (and still do). We speculate that there may one day be a “Charles Darwin of the Mind” who follows a similar path. Indeed, it seems that the state of theorizing in psychology today is at an early stage comparable to evolutionary theorizing at the time of Darwin (Muthukrishna & Henrich, 2019), and the time may be ripe for an equally transformative amateur contribution in psychology. We hope that this paper provides the smallest nudge in this direction.
I actually just posted about the article here because we mention LessWrong as an example of a community where amateurs make novel research contributions in psychology – “LessWrong discussed in New Ideas in Psychology article”.
So if I had to guess – the next Darwin/Einstein/Newton will be an amateur/outsider; will, whether from religion or some other source, have some weird idea that they pursue to the extreme; and will have some kind of life circumstance that allows them to do this (maybe, like Darwin, they come from money).
I also touch on this theme in my article “The Myth of the Myth of the Lone Genius”. Briefly, we have put too much cultural emphasis in science on incrementalism, on standing on the shoulders of giants. Sure, most discoveries come from armies of scientists making small contributions, but we also need to cultivate the belief that you can make a radical discovery by yourself if you try really, really hard. I also quote you at the beginning of the article:
“The Great Man theory of history may not be truly believable and Great Men not real but invented, but it may be true we need to believe the Great Man theory of history and would have to invent them if they were not real.”
I believe you do make one substantial error in this post. It isn’t that academics can’t do it; it’s that they won’t. You see, if you say “can’t”, you are implicitly supposing the incentives cannot be changed, but the structure of these incentives is not fixed. They can change, and they will change, though likely not in a useful way anytime soon.
I’m a little confused by what you are referring to here, so if you are willing to spell it out I would appreciate it, but no worries either way. Many very fascinating ideas in your other comment; I’ll try to respond in a day or two.