So, obviously my estimate of the chance of success will depend on the person; for all I know, an hour-long talk with you would completely convince me that this is the right decision for you. But for the median Ivy League CS student, I am really not convinced, and for the median CS student outside the good schools, it's definitely a bad idea; most CS students really are much dumber than you'd think. If you have a track record of completing long personal projects with no outside motivation or enforcement mechanism, under the stress of your livelihood depending on the project's success (and the stress of your parents and friends not understanding what the hell you're doing and thinking you're ruining your life), then that is evidence that going off-track wouldn't be disastrous. I have tried it, found it more difficult than I expected, regretted it, and went back to the normal track (though still with plenty of weird experiments within the track).
> It is very likely that becoming highly skilled at AI outside of college will make you both useful (to saving the world) and non-homeless.
In the minds of hiring managers at normal companies, "AI experts" are a dime a dozen, because those words have been devalued to mean anyone who took Andrew Ng's ML course. You can't get a data scientist job without a degree (which would presumably be the non-homeless fallback position), and you certainly can't get a research position at any of the good labs without a PhD. You can try publishing on your own, but that basically never works out. I suppose you could try winning Kaggle competitions, but those have almost no relevance to AI safety. You could try making money with stock prediction through the numer.ai project (which is what I did), and that would buy some freedom to study what you want, but that again is really hard. Getting grants from Open Phil to do AI safety might be something, but the skills you learn from getting good at AI safety have almost no marketable value. There is a very narrow road you can walk in this direction, and if anything goes wrong, there isn't much of a fallback position.
People can certainly handle more risk and more weirdness than they think, but there are many intermediate levels of risk between what the average student takes on and dropping out of school to study AI on your own.