I am in very nearly the same position as you (applied math grad student with almost the same interests), and I am quite sanguine about the future, barring worries about my own risk of failure.
Mainly because I may be less far along in my research career and I don’t yet feel precommitted to any research methods that look like they’re not working. Also because I have no real aversion to crass commercialism.
Thought 1: as far as I know, they still use a lot of PDEs in computer graphics. Nobody’s going to write an SVM that can replace Pixar.
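To give a toy example of the flavor I mean (a minimal sketch of my own, nothing like production graphics code): the diffusion/heat equation is the kind of PDE that shows up in Stable-Fluids-style solvers and in image smoothing, and one explicit finite-difference step of it fits in a few lines of NumPy. The grid size, time step, and diffusivity below are purely illustrative.

```python
import numpy as np

def heat_step(u, alpha=0.1, dt=0.1, dx=1.0):
    """One explicit finite-difference step of the 2D heat equation
    u_t = alpha * (u_xx + u_yy). Edge padding approximates zero-flux
    boundaries; the scheme is stable when alpha*dt/dx**2 <= 0.25."""
    p = np.pad(u, 1, mode="edge")
    laplacian = (p[:-2, 1:-1] + p[2:, 1:-1] +
                 p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u) / dx**2
    return u + alpha * dt * laplacian

# Diffuse a noisy field for a few steps -- the same operator that
# smooths densities and velocities in simple fluid solvers.
rng = np.random.default_rng(0)
field = rng.random((64, 64))
for _ in range(50):
    field = heat_step(field)
```

The point isn’t that this particular snippet is what Pixar runs; it’s that the machinery is differential equations, not classifiers.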
Thought 2: I don’t really believe pure dumb ML can solve the serious vision problems in the long run. It just looks like it works for now because you can throw a lot of processing power at a question. But this is not how your brain does it; there’s built-in structure and actual geometric information based on the assumption that we live in a physical world where images come from light illuminating objects. I have heard a few professors lament the shortsightedness of so-called machine vision researchers. If you want to do the deep stuff, maybe the best thing to do is work with one of the contrarian professors. That’s (approximately) what I’m doing, though I’m not working on vision at the moment. Or, more speculatively—there is a trend for some Silicon Valley types to invest in long-term basic research that universities don’t support. Maybe you could see if something like that could work for you.
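To make the “images come from light illuminating objects” point concrete, here is a minimal toy sketch (again my own illustration, not anyone’s research code) of Lambertian image formation, intensity = max(0, n·l): the picture is determined by surface geometry plus a light direction, which is exactly the structure that shape-from-shading tries to invert and that a pure pixel-statistics learner never sees. The sphere, light direction, and resolution are made-up parameters.

```python
import numpy as np

def render_lambertian_sphere(size=128, light=(0.4, 0.4, 0.8)):
    """Render a unit sphere under a distant light with Lambertian
    reflectance: intensity = max(0, n . l). Purely illustrative."""
    light = np.asarray(light, dtype=float)
    light /= np.linalg.norm(light)
    xs = np.linspace(-1.0, 1.0, size)
    x, y = np.meshgrid(xs, xs)
    r2 = x**2 + y**2
    inside = r2 <= 1.0
    z = np.sqrt(np.clip(1.0 - r2, 0.0, None))  # sphere height field
    normals = np.stack([x, y, z], axis=-1)      # unit surface normals
    shading = np.clip(normals @ light, 0.0, None)
    return np.where(inside, shading, 0.0)

image = render_lambertian_sphere()
# Shape-from-shading runs this map in reverse: given the image and the
# light, recover the normals/geometry -- structure an SVM over raw
# pixels has no way to exploit.
```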
Thought 3: if you’re interested in hacking yourself to be okay with not working in academia, consider that it’s more altruistic. A professor benefits from taxpayer dollars and the security of tenure (which protects him from competition by newcomers). A developer in the private sector produces value for the rest of society without accepting any non-free-market perks.
there is a trend for some Silicon Valley types to invest in long-term basic research that universities don’t support.
Can you point me to any specific examples of this? I have a grad student colleague here who is very involved with face detection and tracking, and his work has essentially blown state-of-the-art performance out of the water. Because of this, he’s heavily involved with various startups and web businesses looking to use his better face detection methods. When I queried him for advice, he basically said that not only is long-term basic research very risky (especially if the researcher has a tendency to look for elegant mathematical solutions), but literally no one will pay you for it. He insisted that you won’t find any companies doing long-term basic research, because in the long run it won’t benefit them any more than it benefits their competitors.
One counterexample to this might be Willow Garage. However, I think even they are not doing the very basic theoretical math research they will wish they had once a personal robotics industry does start booming. I’ve really racked my brain trying to come up with places that actually pursue the theory for its long-term practical benefits.
Moreover, I am very discouraged about the state of academic publishing right now. The main reason I want to hack myself is to change my preferences about being a university researcher. Currently, I see only two alternatives: university researcher or corporate/industrial/government researcher. I had always thought that in the former, people paid you grant money because of your ingenuity and foresight, and that the whole point was to use tax money to allow researchers to conduct high-risk research that commercial entities could not afford to risk their money on. As it turns out, though, both of these options require you to pander to whatever the popular commercial interests of your day happen to be, even if you think those interests have got it all wrong and are going down the wrong track.
It makes me feel that I need to hack myself to want to want to just be a programmer for some company somewhere: make enough money to do cryonics and have an internet connection and just float along. I feel too discouraged to really try anything else. And since my current preferences hold that computer programming for the sake of widget building is soul-crushingly terrible, I feel like it’s a big Catch-22 that leaves me in an ambivalent stalemate.
Lastly, the altruism argument you mention doesn’t appeal to me. I think society should have a tenured class of professors able (and required) to do riskier, more theoretical research. The amount of work it takes to get to that position in life ought to outweigh whatever solely-free-market altruism a corporate scientist might take pride in. But the reality is that by the time I am in a position to seriously apply for a tenure-track job (2 more years of school, 2 years of post-doc, 5 years as an assistant professor, so roughly 9 years from now), tenured positions simply will not exist. It’s already a dying business model, which makes me feel like all of the time I’ve already spent training non-practical skills into myself was a massive, unforeseen waste.
It’s not that I know of an existing organization that does what you want. It’s more that there are things out there (like the SENS Foundation, or Halcyon Labs, or SIAI itself) designed to do that kind of science in other fields. I agree that it would be hard to move into the “computer vision start-up” space with a more long-term focus, at least these days.