Research is polygamous! The importance of what you do needn’t be proportional to your awesomeness
In a recent discussion a friend was telling me how he felt he was not as smart as the people he thinks are doing the best research on the most important topics. He named a few jaw-dropping people, who indeed are smarter than he is, and mentioned their research agendas, say, A, B, and C.
From that, a remarkable implication followed, in his cognitive algorithm:
Therefore I should research thing D or thing E.
Which made me pause for a moment. Here is a hypothetical schematic of this conception of the world. Arrows stand for “Ought to research”
Humans by Level of Awesome (HLA) → Research Agenda by Level of Importance (RALI)

 HLA             RALI
   1  -------->  X-risk #1
   2  -------->  X-risk #2
   3  -------->  Longevity
   4  -------->  Malaria Reduction
   5  -------->  Enhancement
 ...
1344  -------->  Increasing Puppy Cuteness
It made me think of the problem of creating matchmaking algorithms for websites where people want to pair up to do things, such as playing tennis, playing chess, or having a romantic relationship.
This reasoning is profoundly mistaken, and I can look back into my mind and remember dozens of times I have made the exact same mistake. So I thought it would be good to spell it out ten times, in different ways, for the unconscious bots in my mind that didn’t get it yet:
1) Research agenda topics are polygamous: they do not mind if someone else researches them besides the very best people who could be doing such research.
2) The function above should not be one-to-one (biunivocal), but many-to-one.
3) Someone’s awesomeness does not overshadow everyone else who researches the same topic, unless they are researching the same narrow sub-type of the same question from the same background.
4) Overdetermination doesn’t happen at the “general topic level”.
5) Awesome people do not eclipse what less awesome people do in their area; they catapult it, by creating resources.
6) Being in an area where the most awesome people are is not asking to “lose the game”; it is being in an environment that cultivates greatness.
7) The amount of awesomeness in a field does not supervene on the awesomeness of its best explorer.
8) The best person in each area would never be able to cause progress alone.
9) To want to be the best in something has absolutely no precedence over doing something that matters.
10) If you believed in monogamous research, you’d be in the awkward situation where finding out that no one gives a flying fuck about X-risk should make you ecstatic, and that can’t be right. That there are people doing something that matters so well that you currently estimate you can’t beat them should be fantastic news!
Well, I hope every last cortical column I have has got it now, and the overall surrounding being may be a little less wrong.
Also, this text by Michael Vassar is magnificent, and makes a related set of points.
There’s someone who once said that the most popular/prestigious research topics, such as theoretical physics, are oversaturated with really smart people; if one person didn’t make a certain discovery, someone else would have done it soon afterward, so smart people should try to go into fields that are useful but don’t normally attract geniuses. I don’t remember where I read it, though.
You’re thinking of Aubrey de Grey.
Thanks for tracking down the quote; I had trouble locating it.
Well, that’s what spaced repetition is for.
Normally Google works for finding quotes but my Google-fu failed me this time.
Which fields are not that competitive yet would yield useful results? What are optimal fields for bright people to enter?
I suggest Singularity Strategies or Meta-Philosophy.
A good example would be the kind of work CFAR does to learn to teach people to be more rational. It’s an important topic but there isn’t that much competition for practical rationality training.
GiveWell would be another example from this community. It’s an important project for which there weren’t real competitors when it got started.
If you want to pick a topic for yourself, I find that spaced repetition learning is an area with a lot of potential where more work is needed. I don’t think Anki’s algorithm is optimal. If you are strong at math you could take the Mnemosyne data set and develop a better algorithm for estimating the timing of cards.
When it comes to teaching people how to create good spaced repetition cards the resources that are out there are pretty limited.
It’s an area where you can do genuine work that helps the world without having to be a genius.
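To make “estimating the timing of cards” concrete, here is a minimal sketch of an SM-2-style scheduler, the family of algorithms that Anki’s descends from. The function name and parameters are illustrative, not Anki’s actual implementation; improving on something like this is exactly the kind of tractable, non-genius work being described:

```python
# Minimal SM-2-style spaced repetition scheduler (illustrative sketch,
# not Anki's actual code). Each card carries an interval (days), an
# easiness factor, and a count of successful repetitions in a row.

def review(interval, easiness, repetitions, quality):
    """Return (next_interval_days, easiness, repetitions) after a review.

    quality: self-graded recall, 0 (total blackout) to 5 (perfect).
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence.
        return 1, easiness, 0
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * easiness)
    # Adjust easiness based on how hard the recall felt; floor at 1.3
    # so intervals never stop growing entirely.
    easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    easiness = max(1.3, easiness)
    return interval, easiness, repetitions + 1

# A new card starts at easiness 2.5 with no repetitions.
interval, ef, reps = 0, 2.5, 0
for q in (5, 5, 4):  # three successful reviews
    interval, ef, reps = review(interval, ef, reps, q)
# intervals went 1 -> 6 -> 16 days
```

A “better algorithm” in the comment’s sense would replace these hand-tuned constants with parameters fitted to real review logs.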
From my perspective there are tons of interesting projects that could use more work but are outside of prestigious academic interest.
I don’t know.
I think this is more of a selfish thing: I imagine that this person probably did not think it would be better for the world that he research D or E instead of A, B, and C, merely that it would be better for himself if he did so. Better to be a big fish in a small pond than a small fish in a big pond. See also: http://www.gwern.net/The%20Melancholy%20of%20Subculture%20Society
There is such a thing as diminishing marginal returns. If there were 5 Eliezer Yudkowskys all working on FAI, the sixth Eliezer Yudkowsky ought to go off and direct their efforts to CFAR instead (this is a calculation specifically for copies of Eliezer Yudkowsky, and not precise).
I would say that having one smart non-Eliezer person working on FAI together with you would be preferable to having two or maybe even three copies of EY. The copies are likely to have similar thought patterns, have similar biases and make similar mistakes.
It depends on the topic. If the problem can be solved by lots of hard work and this hard work can be done in parallel, then copies are a good idea. If the problem requires inspiration, perhaps in many incremental steps, then more diverse minds would be preferable.
I think what I am trying to say is that marginal returns diminish faster for copies compared to other people.
Yes, obviously.
On the whole I agree.
Nitpick: I think this post would have been better if you’d used “most capable researcher” rather than “most awesome person”. One can be awesome without being a skilled researcher, or a skilled researcher without being in general awesome. I think the term is too much of a mushy applause-light to do much serious analytical work in any case.
I agree, but it seems to me that the concept with which many minds are working is exactly the mushy applausy one.
I entirely agree with this point, but suspect that actually following this advice would make people uncomfortable.
Since different occupations/goals have some amount of status associated with them (nonprofits, skilled trades, professions) many people seem to take statements about what you’re working on to be status claims in addition to their denotational content.
As a result, working on something “outside of your league” will often sound to a person like you’re claiming more status than they would necessarily give you.
Beware using status as a universal explanation.
How could you demonstrate that status is adequate or inadequate as an explanation, in this case? Or any case?
Beware of universal explanations everywhere!
Fair, but at least some component of this working in practice seems to be a status issue. Once we’re talking about awesomeness and importance, and the representativeness of a person’s awesomeness and the importance of what they’re working on, and how different people evaluate importance and awesomeness, it seems decently likely that status will come into play.
That is Terrible!!!
I never thought of that. But yes, it does sound very much like that is the case. What can we do? I mean, sincerely, how can we get people to work on what matters for them even if they don’t think they have the status for it?
Are you sure? How can you easily tell that something is out of someone’s league? I can imagine that if you talk to someone at a party it is more impressive to say that you work in rocket surgery than to say that you work as a carpenter, even though you might be lousy at the first and great at the second.
Good point, I did summarize a bit fast.
There are two issues at hand: asserting that you’re doing something that’s high status within your community, and asserting that your community’s goals are more important (and higher status) than those of the listener’s community.
If there’s a large inferential distance in justifying your claims of importance, but the importance is clear, then it’s difficult to distinguish you from say, cranks and conspiracy theorists.
(The dialogues are fairly unrealistic, but trying to gesture at the pattern)
A within culture issue:
Between cultures:
It depends on which (sub)culture the people at the party are in.
Please don’t, but if you need to read a shorter summary of Vassar’s text: http://lesswrong.com/lw/gdz/michael_vassars_edge_contribution_summary/
Also there is a text by Paul Graham which gives very interesting advice in the same direction:
When I read this:
I immediately thought of this.
On a more serious note, I have the impression that while some people (with conservative values?) do agree that doing something that matters is more important than anything else (although “something that matters” is usually something not very interesting), most creatively intelligent people go through their lives trying to optimize fun. And while it’s certainly fun to hang out with people smarter than you and learn from them, it’s much less fun to work with them.
I’d like to know why you think this is the case.
If you mean the less-fun-to-work-with part, it’s fairly obvious. You have a good idea, but the smarter person A has already thought about it (and rejected it after having a better idea). You manage to make a useful contribution, and it is immediately generalized and improved upon by the smarter persons B and C. It’s like playing a game where you have almost no control over the outcome. This problem seems related to competence and autonomy, which are two of the three basic needs involved in intrinsic motivation.
If you mean the issue of why fun is valued more than doing something that matters, it is less clear. My guess is that’s because boredom is a more immediate and pressing concern than meaningless existence (where “something that matters” is a cure for meaningless existence, and “fun” is a cure for boredom). Smart people also seem to get bored more easily, so the need to get away from boredom is probably more important for them.
How on earth do I actually download that file without selling my soul via my cell phone bill?
I went through The Pirate Bay, BitTorrent, or some other p2p channel. It is hard but worth it, since it is a whole course. The Teaching Company, or The Great Courses, is the name of the series. As most here have seen, I almost never link to anything but a free downloadable PDF when linking to books on LessWrong; I made an exception this time because I don’t remember how I found it.
“Monogamous research” makes a lot of sense for narrowly defined and relatively brute-force research, but much less for vague fields. For example, within X-risk, we may need someone to write a popular book summary. For this it makes a lot of sense for a few experts to do it while the less knowledgeable work on other things for the time being.
The fields you mention are of course much more vague though. My guess is that X-risk can be “polygamous-optimal”, but perhaps only so far that it can be broken up into “monogamous” pieces.
All of this is true in a world where people do research because it’s fun and they want to do so. Which is a good assumption if you’re sitting on a few million dollars. Unfortunately, that’s not the case for most of the people doing research.
That is, research topics are polygamous, but research funds aren’t.
Also, for the same reason, choosing to write mediocre papers is a better way to look productive than attempting to have really cool insights and results and failing: it’s hard to tell people who failed at something apart from people who did nothing if you are the one paying them.
You left off the most important point. If you think a topic is important and that someone smarter than you is already working on it, it would seem like your best move is to try and help.
One argument in favor of leaving RALI A to HLA 1 and going off to fund your own puppy cuteness augmentation program is that you might not want to make a redundant effort. But I think the risk depends on how much progress has been made on the topic in question. If the big fish already have a pretty solid direction, ask someone who knows more about it than you do where the gaps are and go work on that. If you think you might get a response, go ahead and ask the big fish what needs working on. Fortune favors the bold.
Maybe they already have good lab assistants, and the best way for you to help is to work at the coffee shop that gives them their afternoon caffeine jolt, or the nuclear plant that powers their lab, or the daycare where their kids go—in other words, have a normal job in the non-research economy. Those kinds of jobs are absolutely necessary to support more blue-sky stuff, so many people will have to do them. Why assume you are so much smarter than that entire group?
Well, you might think that you don’t have anything meaningful to contribute on top of their abilities.
You might be pretty smart, but believe they’re so much smarter that your mental faculties won’t be a significant asset to them, in which case you wouldn’t be any more help than a non-smart assistant performing grunt labor for them, whereas your intelligence would be a greater asset in a field not already dominated by such a great intellect.
As someone with limited mental faculties myself, I can see where that would present a problem. My usual approach is to ask for lots of feedback so I can get a sense of a) whether the ROI is worth it for my efforts and b) whether I’m just getting in the way. Feedback can come from a variety of sources, including independent observers.
This is the same mistaken pattern of thinking that leads people not to give to charitable causes on the grounds that poverty, or malaria, or whatever, is such a huge problem that anything they could do would be just a drop in the bucket. Of course what matters is the actual amount of good done, not what fraction it is of all the good there is to do or of the good others are doing.
I wouldn’t call it the same pattern at all. There’s no difference in comparative advantage between one monetary donation and another, and charities targeting causes such as malaria and poverty don’t suffer diminishing returns on donations within the range they’re likely to receive. On the other hand the differences in comparative advantage between one researcher and another within a particular field can be quite large, and a research subject can quite plausibly suffer diminishing returns on new researchers of similar abilities (see this quote already linked to in this topic.)
Think about X-risk in particular, just as a case in point for your idea. Do you really think that in the entire, broad range of “all that relates to the important X-risks”, there is nothing you can do?
Here is one thing someone with a lot of balls but not much brainpower could do: deliver the “Existential Risk as the Most Important Problem” paper to everyone they think suitable on a celebrity-locator website of their choice. It’s not like the best brains will do it, so you might as well.
How about something different? Spend three days learning about the most effective legal cognitive enhancements, and send over a detailed email to researchers in that area saying you admire their work and think you can contribute by telling them about mind-sharpening pharmaceuticals.
I could go on, but I feel like this could be a new topic, something very similar to the munchkin idea topic, or my “pandemize vegetarianism” one…
For it not to be a worthwhile avenue for you to pursue, there need not be nothing you can do. If you have no comparative advantage over other willing candidates in the activities which you can do, you might as well leave it to them, and go do something where you have a comparative advantage.
Having been in the position of the smart person dealing with someone less smart trying to “help” him, I can tell you this is not necessarily the case.
There’s lots of ways to help. For example, if I wanted to help you work on an important problem and you were so much smarter (or otherwise more competent) than I that I could never contribute any useful assistance to that project, I could instead offer to pay your rent or do your laundry or cook your meals or otherwise perform tasks better-suited to my comparatively limited skills, thereby freeing up your intellect to work full-time on the problem.
Or, in other words, comparative advantage for the win!
This reminds me of a work of fiction I’m (intermittently) working on, which features a number of colonial overseers in a vast galactic empire, who’re so overwhelmingly dimwitted that the main character speculates that it’s as if the colonial power is assigning people with abilities in accordance with the importance of their posts, so the really unimportant colonies receive overseers who’re commensurately incompetent.