I still don’t know much about GANs, Gaussian processes, Bayesian neural nets, and convex optimization.
FWIW, I’d recommend investing the time to learn about convex optimization even if you don’t think you need it yet. Unlike the other topics on this list, the ideas from convex optimization are relevant to a wide variety of things the name does not suggest. Some examples:
- how to think about optimization in high dimensions (even for non-convex functions)
- using constraints as a more-gearsy way of representing relevant information (as opposed to e.g. just wrapping that information into an objective function directly)
- quantifying slackness/tautness (which is one of the main benefits of representing information as constraints; see the sketch after this list)
- useful intuitions for making numerical algorithms efficient, especially leveraging sparsity and recognizing the failure modes of poor conditioning
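To make the slackness/tautness point concrete, here is a minimal sketch using cvxpy (the Python modeling library from Boyd's group); the toy problem and numbers are invented purely for illustration:

```python
# Minimal sketch: dual variables quantify how taut or slack each constraint is.
# The problem below (projecting a point onto a feasible region) is made up
# purely for illustration.
import numpy as np
import cvxpy as cp

x = cp.Variable(2)
target = np.array([3.0, 1.0])

constraints = [
    x[0] + x[1] <= 2,    # binds at the optimum (taut)
    x[0] - x[1] <= 10,   # satisfied with room to spare (slack)
]
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - target)), constraints)
prob.solve()

for c in constraints:
    # A (near-)zero dual value marks a slack constraint; a positive dual
    # value marks a taut constraint that is actively shaping the optimum.
    print(c, "-> dual value:", c.dual_value)
```

The dual value also has a direct sensitivity reading: it approximates how much the optimal objective would improve per unit of extra slack in that constraint, which is exactly the sense in which information represented as a constraint gets quantified.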
Personally, I consider the ideas of convex optimization fundamentally useful tools for thinking, in much the same way as linear algebra or multivariate calculus.
(Note: I first learned the topic mainly from Boyd’s lectures; he’s unusually good at conveying useful intuitions in those talks, so you might get less value from other lecturers or from the canonical convex optimization textbook, Boyd & Vandenberghe.)
Yeah, that’s definitely the one on the list that I think would be most useful.
I may also be understating how much I know about it; I’ve picked up some of it over time, e.g. linear programming, minimax, some kinds of duality, mirror descent, Newton’s method.