My read was that it’s less an argument for the end-to-end principle and more an argument for modular, composable building blocks whose internals you don’t need to understand (I’m not the author, though).
(Note that my experience of trying new combinations of deep learning components hasn’t really matched this. E.g., I’ve spent a lot of time and effort trying to get new loss functions to work with various deep learning architectures, often with very limited success, and I often couldn’t get away with not understanding what was going on “under the hood”.)
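To make the “composable building block” framing concrete: in most frameworks a loss function really is a drop-in component of the training step, which is why swapping one in looks trivial on paper. The sketch below is a minimal, hypothetical PyTorch example (the loss name and formulation are made up, not from any source); the plumbing composes cleanly, but whether the new loss actually trains well with a given architecture is exactly the part that tends to require looking under the hood.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmoothedHingeLoss(nn.Module):
        """Illustrative custom loss; the name and formulation are hypothetical."""
        def __init__(self, margin: float = 1.0):
            super().__init__()
            self.margin = margin

        def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
            # targets are expected in {-1, +1}; softplus gives a smooth hinge
            return F.softplus(self.margin - targets * logits).mean()

    def train_step(model, loss_fn, optimizer, x, y):
        # generic training step: the loss is just another pluggable module
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    model = nn.Linear(16, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.randn(32, 16)
    y = torch.randint(0, 2, (32, 1)).float() * 2 - 1  # labels in {-1, +1}
    print(train_step(model, SmoothedHingeLoss(), optimizer, x, y))

Mechanically this runs with any architecture exposing the same input/output shapes; the hard part (gradient scale, output activations, numerical stability) is not visible at this interface level.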
My read was that it’s less an argument for the end-to-end principle and more an argument for modular, composable building blocks whose internals you don’t need to understand (I’m not the author, though).
If it could be construed as me arguing ‘for’ something, then yes, this is what I was arguing for. I’m not seeing how the end-to-end principle applies here (as in, the one used in networking), but maybe it’s a different usage of the term that I’m unfamiliar with.