It’s interesting to me that the proper linear model example is essentially a stripped down version of a very simple neural network with a linear activation function.
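To make the resemblance concrete, here is a minimal sketch (the feature values and weights are invented purely for illustration): a proper linear model's prediction is a weighted sum of inputs, which is exactly what a single neuron with an identity activation computes.

```python
import numpy as np

# Hypothetical inputs and weights, purely for illustration.
x = np.array([0.5, 1.2, -0.3])   # feature values
w = np.array([0.4, 0.4, 0.2])    # model weights
b = 0.0                          # bias / intercept

# A proper linear model's prediction is just a weighted sum:
linear_prediction = w @ x + b

# A one-neuron "network" with a linear (identity) activation
# computes exactly the same quantity:
def neuron(x, w, b, activation=lambda z: z):
    return activation(w @ x + b)

assert np.isclose(linear_prediction, neuron(x, w, b))
```

With any non-identity activation the two diverge, which is the sense in which the linear model is the "stripped down" case.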
Is that really true? Couldn’t one say that of just about any Turing-complete (or less) model of computation?
‘Oh, it’s interesting that they are really just a simple unary fixed-length lambda-calculus function with constant-value parameters.’
‘Oh, it’s interesting that they are really just restricted petri-nets with bounded branching factors.’
‘Oh, it’s interesting that these are modelable by finite automata.’
etc. (Plausible-sounding gobbledygook included to make the point.)
Yes, sort of, but (a) a linear classifier is not a Turing-complete model of computation, so the reductio doesn’t apply, and (b) unlike those contrived examples, the resemblance here is immediate — you can see it by merely glancing at the equations.
I would argue that neurons, neural nets, SPRs, and everyone else doing linear regression converge on the same technique because a weighted sum is the simplest way to aggregate data.