These questions are way too ‘Eureka!’/trivia for my taste. The first question relies on language specifics and then is really much more of a ‘do you know the moderately weird sorting algorithms’ question than an actual algorithms question. The second involves an odd dilution-resistance test. The third, again, seems to test moderately well-known crypto trivia.
I’ve conducted ~60 interviews for Google and Waymo. To be clear, when I or most other interviewers I’ve seen say that a question covers ‘algorithms’, we mean that it covers algorithm design. For example: how do you find all pairs in an array whose sum is a given value? Such a question can be answered in several different ways, and the ‘best’ way involves using some moderately interesting data structures.
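For concreteness, here's a minimal sketch of one common answer to that pair-sum question, using a hash set for a single O(n) pass. It's only an illustration of the kind of data-structure-driven solution meant here, not necessarily the ‘best’ answer an interviewer would be looking for.

```python
def find_pairs_with_sum(nums, target):
    """Return all unordered pairs (a, b) from nums with a + b == target.

    One-pass hash-set approach: O(n) time, O(n) extra space.
    """
    seen = set()
    pairs = []
    for x in nums:
        complement = target - x
        if complement in seen:
            pairs.append((complement, x))
        seen.add(x)
    return pairs

# Example: pairs summing to 10 in [2, 8, 3, 7, 5, 5]
print(find_pairs_with_sum([2, 8, 3, 7, 5, 5], 10))  # [(2, 8), (3, 7), (5, 5)]
```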
I’m super curious what kind of rubric you use in grading these questions.
If you happen to know the answer already then the question is ruined. In this way, every algorithm puzzle in the world can be ruined by trivia. For an algorithm question to be interesting, I hope the reader doesn’t already know the answer and has to figure it out her/himself. So in questions #1 and #3, I’m hoping the reader doesn’t know the relevant trivia and will instead independently derive the answers without looking them up.
I’m not trying to ask “Do you know how Poland cracked the Enigma?” I’m trying to ask “Can you figure out how Poland cracked the Enigma?”
I don’t grade these questions. These questions are for fun and self-improvement. Though I could imagine a timed written test with dozens of questions like this where the testee gets one point for each correct answer and loses one point for each incorrect answer. A sufficiently large number of questions might help counteract the individual variance.