As a secondary opinion (from someone with a formal CS education), John is correct: self-education is easier in CS than in math, as is signalling that you’re good at CS. However, some random suggestions, assuming you still want to be good at CS:
Self-educating in CS is likely to leave a few holes. To the degree you can do so cheaply, consider going for a CS degree while learning math. It’s entirely possible that you’d be able to complete the CS degree with only a few weeks of work per semester. This depends on local conditions and the cost of formal education, though.
Alternatively (or additionally), an internship at a good technology company is a nice way of rounding yourself out, while also paying off economically. Avoid the ones that would want you to work for free or very cheaply; avoid non-tech companies that happen to have a large tech division. Google is a good option, but there are quite a few others. This has large potential downsides if you pick the wrong company, though, so be careful; it’s also a large time investment.
Practically every coder/computer person believes their skills are better than they actually are until they’ve worked a few years in the field. A lot still believe it afterwards. Take this into account; you wouldn’t believe the number of really terrible interview candidates we get. On the flip side, if you’re genuinely good then you don’t need to worry about getting a job; the high rejection rates in job interviews at e.g. Google are mostly because the candidates are, as previously mentioned, really terrible.
Don’t underestimate the value of a holistic understanding of computers. A lot of the people who get rejected (at Google) are rejected because their understanding is too narrow. However… that may not be such a problem if your plan is to work for MIRI. You’ll want a plan B, though.
Can you give some examples of responses at the threshold of terrible?

The more I think about it, the more I think I have to say “no comment”. Sorry, I’d have liked to answer.
Instead, have some thoughts:
Curiosity is good. Once you get above the dregs, the #1 problem with programmers is feeling comfortable not learning anything that isn’t directly relevant to their job.
There’s a difference between “knowing how to do something” and “understanding how it works, therefore knowing how to do something”. Everyone wants the latter (some don’t realise they want it), but a lot of people settle for the former.
With that said, I imagine we’d hire anyone with either outstanding depth or outstanding breadth of knowledge, but don’t quote me on that. It’s general advice, definitely not a comment on hiring practices, which I’m not that familiar with.
I didn’t mean to ask for actual responses. Fictitious works fine too. I was just trying to identify the boundary.
There’s a difference between “knowing how to do something” and “understanding how it works, therefore knowing how to do something”.
The difference between having a lookup table of solutions, and having an accurate model of the system upon which solutions can be simulated and verified.
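To make that distinction concrete, here’s a minimal sketch (my own illustration, not anything from an interview): the lookup-table programmer has memorised the rule “never compare floats with ==”, while the programmer with a model of IEEE 754 floats can predict when the rule matters and when it doesn’t.

```python
# The rule of thumb says "never compare floats with ==".
# The model (IEEE 754 binary fractions) says why, and predicts the exceptions.

total = sum(0.1 for _ in range(10))
print(total)                    # 0.9999999999999999: 0.1 has no exact binary form
print(total == 1.0)             # False, exactly as the model predicts
print(abs(total - 1.0) < 1e-9)  # True: compare within a tolerance instead

# The model also predicts when == is perfectly safe:
print(0.5 + 0.25 == 0.75)       # True: both sides are exact binary fractions
```

The specific trick isn’t the point; the point is that the model lets you simulate and verify a solution instead of looking one up.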
I didn’t mean to ask for actual responses. Fictitious works fine too. I was just trying to identify the boundary.
I’d never have provided actual responses. However, even vague descriptions of what we’re looking for risk attracting more candidates who have optimised for passing the interview rather than for doing well in the job. Proxies for ability are necessary, but proxies only work so long as they aren’t common knowledge.
The difference between having a lookup table of solutions, and having an accurate model of the system upon which solutions can be simulated and verified.
That’s part of it, but especially with computers, which are built on abstractions, you can have an accurate model of a single layer without really understanding how the layers beneath (or above) work. Ideally, we want people who have reasonably accurate models of all layers.
A lot of programmers, possibly most, don’t have that, and therefore fall prey to leaky abstractions on occasion. We can’t afford that, not when potential losses are measured in thousands of dollars per second.
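As a minimal sketch of such a leak (my own illustration, assuming CPython; the names and sizes are made up): a 2-D grid looks like a flat abstraction, but the memory hierarchy underneath shows through as soon as you change the traversal order.

```python
import time

# A large grid stored as a list of rows (row-major layout).
n = 3000
grid = [[1] * n for _ in range(n)]

def sum_row_major():
    # Walks memory roughly in layout order: cache-friendly.
    return sum(grid[i][j] for i in range(n) for j in range(n))

def sum_col_major():
    # Same answer, but hops between rows on every step: cache-hostile.
    return sum(grid[j][i] for i in range(n) for j in range(n))

for f in (sum_row_major, sum_col_major):
    start = time.perf_counter()
    f()
    print(f.__name__, round(time.perf_counter() - start, 2), "s")

# Nothing in the list-of-lists abstraction distinguishes the two loops;
# the cache beneath it does. The gap is modest in CPython (interpreter
# overhead dominates) and dramatic in C or numpy, but it's the same leak.
```

Someone with a model of every layer sees this coming; someone whose model stops at “it’s a 2-D array” doesn’t.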