I’d adjust the “breadth over depth” maxim in one particular way: pick a few small-ish sub-fields or topics (one, maybe two or three, but not many) to go through in depth, taking them to an extreme. Past a certain point, something funny tends to happen: what’s normally perceived as boundaries starts to warp, and the whole space suddenly looks completely different.
When doing this, the goal is to observe that “funny shift” and the “shape” of the change as well as you can, to identify its signs and get as good a feeling for it as possible. Being able to (at least sometimes) notice when that’s about to happen has been quite valuable for me, and I suspect it would be useful for AI and general rationality topics too.
As a relatively detailed example: grammars, languages, and complexity classes are normally a topic of theoretical computer science. But if you actually look at all inputs through that lens, it gives you a good guesstimate of how exploitable parsers for certain file formats will be. If something is context-free but not regular, you know that you’ll have indirect access to some kind of stack. If it’s context-sensitive, it’s basically freely programmable. For every file format (/ protocol / …), there’s a latent abstract machine that’s going to run your input, so your input will essentially be a program and—within the boundaries set by the creator of that machine—you decide what it’s going to do. (Turns out those boundaries are often uncomfortably loose...)
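To make the “latent abstract machine” view a little more concrete, here’s a minimal sketch in Python. It uses a made-up nested-bracket format (not any real file format): the language is context-free but not regular, and whoever writes the input indirectly drives a stack inside the parser.

```python
# Purely illustrative sketch (hypothetical bracket format, not a real one):
# a parser for nested containers like "a(b(c)(d))". The format is
# context-free but not regular, and the input author indirectly programs
# the parser's stack: every '(' is a push, every ')' is a pop.

def parse(data: str):
    stack = []        # hidden machine state that the input author gets to drive
    root = []
    current = root
    for ch in data:
        if ch == "(":
            child = []
            current.append(child)
            stack.append(current)   # push, chosen by the input
            current = child
        elif ch == ")":
            if not stack:
                raise ValueError("unbalanced input")
            current = stack.pop()   # pop, chosen by the input
        else:
            current.append(ch)
    if stack:
        raise ValueError("unbalanced input")
    return root

print(parse("a(b(c)(d))"))   # ['a', ['b', ['c'], ['d']]]
```

The nesting depth of the input is exactly the depth of that stack; a real format’s latent machine is just a much bigger, messier version of this.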
Some other, less detailed examples: Working extensively with Coq / dependently typed programming languages shifted my view of axioms from something vaguely mystical/dangerous/special to a much more mundane “eh, if it’s inconsistent it’ll crash”; I’m now much happier to just experiment with stuff and see what happens. Lambda calculus made me realize how data can be seen as “suspended computations”, and how different data types have different “computational potential”. (Lisp teaches “code is data”; this is sorta-kinda the opposite.) More generally, “going off to infinity” in “theory land” often leads, for me, to “overflows” that wrap around into arcane but deeply practical stuff. (E.g. using the algebra of algebraic data types to manually compress an ASM function by splitting it into a lookup/jump table of small functions indexed by another simple outer function, thereby reducing the total byte count and barely squeezing it into the available space.)
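For the “data as suspended computations” point, here’s a small illustrative sketch, again in Python rather than actual lambda calculus or Coq, using Church encodings (all names below are just my own choices): numbers and booleans are represented as nothing but functions waiting to be applied, and different types carry different “computational potential” (a numeral is a bounded loop in waiting, a boolean is a deferred branch).

```python
# Purely illustrative sketch of the lambda-calculus idea:
# Church encodings represent data as plain functions, i.e. as
# computations that stay suspended until you apply them.

# A Church numeral n means "apply f to x, n times": a bounded loop in waiting.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# A Church boolean is a deferred choice between two branches.
true  = lambda a: lambda b: a
false = lambda a: lambda b: b

# "Running" the data means finally supplying the computation it was waiting for.
to_int = lambda n: n(lambda k: k + 1)(0)

two, three = succ(succ(zero)), succ(succ(succ(zero)))
print(to_int(add(two)(three)))             # 5
print(true("then-branch")("else-branch"))  # then-branch
```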
You’re unlikely to get these kinds of perspective shifts if you look only at the basics. So every once in a while, dare to just run with it, and see what happens.
(Another aspect of this, which I noticed only after posting: if you only ever look at the basics / broad strokes, to some degree you learn/reinforce not looking at the details. That may not be something you want to learn.)