Yeah, what I meant is that the slides from the Full Stack Deep Learning course materials provide a decent outline of all the significant architectures worth learning.
I personally wouldn't go to that low a level of abstraction (e.g. implementing NNs in a new language) unless you really feel your understanding is shaky. Try building an actual side project, e.g. an object classifier for cars, and problems will arise naturally.
Wonderful – I’ll keep that in mind when I get around to reviewing/skimming that outline. Thanks for sharing it.
I have a rather idiosyncratic set of reasons for the particular kind of ‘yak shaving’ I’m thinking of, but your advice, i.e. to NOT do any yak shaving, is noted and appreciated.