[The Deep Learning Book] takes a purely frequentist perspective
Are you sure? I haven’t read too much of it (though I read some from time to time), but it seems solidly agnostic about the debate. What do you think the book lacks that would be found in an equivalent Bayesian textbook?
Hmm, I interpret standard neural networks (which are the ones it focuses on) as frequentist, since you’re essentially maximising a likelihood without any priors and without any built-in uncertainty.
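To make that reading concrete, here’s a minimal sketch of the standard setup: a single point estimate of the weights, trained by minimising the negative log-likelihood (cross-entropy), with no prior term and no posterior over the weights. The model shape and the random data are placeholders, not anything from the book.

```python
import torch
import torch.nn as nn

# A single set of weights: a point estimate, not a distribution.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()  # negative log-likelihood for classification

def train_step(x, y):
    opt.zero_grad()
    loss = loss_fn(model(x), y)  # -log p(y | x, w); no prior over w anywhere
    loss.backward()
    opt.step()                   # w stays a single point estimate
    return loss.item()

# Placeholder data just to show the loop runs.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
print(train_step(x, y))
```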
There’s the whole Bayesian NN world, where the focus is on being able to easily embed priors, treating every weight as a probability distribution, and obtaining a probability distribution for every output (which is the important part).
In practice this doesn’t differ much, since you’re essentially just adding a couple of extra parameters to every weight and bias, but it seems to be a field that’s picking up speed… then again, I might just be stuck in my own reading bubble.
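For contrast, here’s a rough sketch of what those “couple of extra parameters per weight and bias” look like in a mean-field variational (Bayes-by-Backprop-style) linear layer: each weight and bias gets a mean and a scale parameter, every forward pass samples weights via the reparameterisation trick, and repeated passes give a distribution over the outputs. This is only an illustration of the sampling side (a full treatment would also add a KL/prior term to the loss), not a reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # Two parameters per weight/bias: a mean and a (pre-softplus) scale.
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -5.0))

    def forward(self, x):
        # Reparameterisation trick: sample weights, then do an ordinary linear map.
        w_std = F.softplus(self.w_rho)
        b_std = F.softplus(self.b_rho)
        w = self.w_mu + w_std * torch.randn_like(w_std)
        b = self.b_mu + b_std * torch.randn_like(b_std)
        return F.linear(x, w, b)

# Repeated forward passes through the same layer give different outputs,
# which is where the predictive distribution over outputs comes from.
layer = BayesianLinear(4, 2)
x = torch.randn(1, 4)
samples = torch.stack([layer(x) for _ in range(100)])
print(samples.mean(0), samples.std(0))  # crude predictive mean and spread
```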
I guess upon further consideration I could scratch that whole thing; I’m honestly unsure whether the Bayesian/frequentist distinction is even still relevant to modern ML/statistics.