I didn’t say Perceptrons (the book) was in any way invalidated by backprop. Perceptrons cannot, in fact, learn to recognize XOR. The proof of this is both correct and obvious; and moreover, does not need to be extended to multilayer Perceptrons because multilayer linear = linear.
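Both points are easy to check numerically. The sketch below (a minimal numpy illustration, not the algebraic proof itself) shows that composing two linear layers is just one linear map, and that a coarse grid search over linear threshold units finds none that labels all four XOR points correctly:

```python
import numpy as np

# XOR truth table: inputs and target labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Stacked *linear* layers collapse into one linear map:
# (X @ W1) @ W2 == X @ (W1 @ W2), so depth without a
# nonlinearity buys nothing.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(3, 1))
assert np.allclose((X @ W1) @ W2, X @ (W1 @ W2))

# A single linear threshold unit: predict 1 iff w1*x1 + w2*x2 + b > 0.
def solves_xor(w1, w2, b):
    preds = (X @ np.array([w1, w2]) + b) > 0
    return np.array_equal(preds.astype(int), y)

# Coarse grid search — illustrative only; the real impossibility
# argument is geometric (XOR is not linearly separable).
found = any(
    solves_xor(w1, w2, b)
    for w1 in np.linspace(-2, 2, 9)
    for w2 in np.linspace(-2, 2, 9)
    for b in np.linspace(-2, 2, 9)
)
print(found)  # False: no separating hyperplane on this grid
```

The grid search is just a sanity check, of course — the proof in Minsky and Papert's book rules out *every* choice of weights, not merely these.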
That no one’s trained a backprop-type system to distinguish connected from unconnected surfaces (in general) is, if true, not too surprising; the space of “connected” versus “unconnected” would cover an incredible number and variety of possible figures, and offhand there doesn’t seem to be a very good match between that global property and the kind of local features detected by most neural nets.
I’m no fan of neurons; this may be clearer from other posts.