Go work for Google (Research, if possible) for two years; at the end of it you will be in a strictly better position to do 2A, and, if you're in the right group, 1B. I have no idea whether they do much of 1A, but they're basically the biggest AI company there is. I'm not confident in my naive interpretation of what collaborative learning is, but recommendation and reputation systems are also part of Google's bag, as well as Amazon's.
As far as 2B (online education) goes, I suspect you'd be more effective combining it with 3A: go work for Khan Academy, or build a clone of it, make money, then spend it paying poor grad students to record videos for the content they're teaching anyway. OCW is great and all, but imagine someone taking the core MIT/CMU/Berkeley CS curricula and putting an integrated course covering their intersection online.
As far as 3C goes, this community is pretty new, but people have been doing data-driven thinking for a long time, and some of them have gotten very good at it. Seriously, Google analyses everything from the traits of successful managers to which employees are likely to leave.
For some perspective on Google you could PM cousin_it, or check out these comments. NLP, information retrieval, and entity extraction often live in Search Quality, not Research. Machine learning is everywhere in the company, even in intranet tools.
in many subfields of AI, the stuff that’s locked up in Google proprietary information is light years beyond what’s available in academia
Most companies are nothing like Google.
What is your evidence for this? (Sorry if it's somewhere in the reddit thread; I didn't read too far down.)
I have heard this claimed by multiple sources, but the webpages of most Google research scientists indicate that they aren't so much working on new theory as applying what's already out there, so I'm curious what's causing our beliefs to diverge so much.
I don't have any evidence beyond Jonathan Tang's say-so, plus the fact that Peter Norvig works at Google. There may be something useful in Reddit's video interview with him, but it's been ages since I watched it, so I don't know.