LessWrong has a relatively strong anti-academic bias, and I’m worried that this is reflected in the comments.
I work as a PhD student in machine learning, and yes, there is a minimum bar of intelligence, perseverance, etc. below which doing high-quality research is unlikely. However, in my experience many people who are clearly above that bar nevertheless go into industry. This is not to say that their choice is incorrect, but on balance I think the argument “don’t go into academia unless you’ll be one of the smartest people in your field” does more harm than good. It also seems to me that the effective altruist movement, in particular, mostly overlooks academia as an altruistic career option, even though I personally think that for many intelligent people (including myself), working on the right research problems is the most valuable contribution they can make to society.
If you go into a field like mathematics or theoretical physics, yes, you’re unlikely to make a meaningful contribution unless you’re one of the best people in the field. This is because these fields have basically become an attractor for bright undergrads looking to “prove themselves” intellectually. I’m not trying to argue that these fields are not useful; I am trying to argue that the marginal usefulness of an additional researcher is low barring extraordinary circumstances.
In other fields, especially newer fields, this is far less true. Machine learning has plenty of low-hanging fruit. My impression is that bioinstrumentation and computational neuroscience do as well (not to mention many other fields that I just don’t happen to be as familiar with). This is not to say that working in these fields will be a cakewalk, or that there isn’t lots of competition for faculty jobs. It is to say that there is a great deal of value to be created by working in these fields. Even if you don’t like pure research as a career option, you can create substantial value by attaching yourself to a good lab as a software engineer.
It’s also worth noting that “doing research” isn’t some sort of magic skill that you do or don’t have. It’s something you acquire over time, and the meta-skills learned seem fairly valuable to me.
How do you know this? Have there been a lot of findings made by a lot of people, without any indication that this stream of discoveries is slowing down? When I looked up e.g. deep learning, it seemed to be a relatively old technique (1980s and early ’90s). What are some examples of recent discoveries you would describe as low-hanging fruit?
It’s worth noting that deep learning has made a huge resurgence lately, and is seeing applications all over the place.
There’s tons of active work in online learning, especially under resource constraints.
Structured prediction is older but still an active and important area of research.
Spectral learning / method of moments is a relatively new technique that seems very promising.
Conditional gradient techniques for optimization have had a lot of interest recently, although that may slow down in the next couple years. Similarly for submodular optimization.
There are many other topics that I think are important but haven’t been quite as stylish lately; e.g. improved MCMC algorithms, coarse-to-fine inference / cascades, dual decomposition techniques for inference.
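Of the areas above, online learning is perhaps the easiest to illustrate concretely. The basic setting: examples arrive one at a time, the learner predicts, sees the true label, and updates. Below is a minimal sketch of online gradient descent on a least-squares objective; the dimensions, step size, and round count are all illustrative choices, not taken from any particular paper.

```python
import random

random.seed(0)
d = 5
w_true = [random.gauss(0, 1) for _ in range(d)]  # hidden target weights
w = [0.0] * d                                     # learner's running estimate

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

for t in range(2000):
    x = [random.gauss(0, 1) for _ in range(d)]    # one example per round
    y = dot(w_true, x)                            # noiseless label, for simplicity
    err = dot(w, x) - y                           # prediction error this round
    for i in range(d):
        w[i] -= 0.05 * err * x[i]                 # gradient step on 0.5 * err**2

# After many rounds the estimate should be close to the target.
print(max(abs(wi - ti) for wi, ti in zip(w, w_true)))
```

The interesting research questions start where this sketch stops: adversarial rather than i.i.d. examples, regret bounds, and the resource-constrained settings (limited memory, limited label feedback) mentioned above.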