Thanks for sharing the essay.
I like your framing that thoughts represent attempts to “fit” the nonlinear dynamics of reality. This might actually be a more clarifying phrasing than the more general term “mapping” that I commonly see used. It makes the failure modes more obvious to imagine the brain as a highly intertwined group of neural networks attempting to find some highly compressive, very high-R² “fit” to the data of the world.
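As a toy version of that picture (just sklearn and numpy, nothing about actual brains; the function, architecture, and noise level are arbitrary choices for illustration): fit a small network to a noisy nonlinear signal and score the “fit” with R². The point is only that a high R² means the compression worked on the data the model saw, and says nothing about behavior outside that range.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for "reality": a nonlinear signal, observed with noise.
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(2 * X).ravel() + 0.1 * rng.standard_normal(500)

# A small, highly compressive model: a few dozen parameters
# standing in for the whole signal.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
model.fit(X, y)

# High R² on the data it has seen; the "fit" is only guaranteed
# to be good where the data actually was.
print(r2_score(y, model.predict(X)))
```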
“Classification” is a task we canonically use neural networks for, and it’s not surprising that classification is both fundamental to human thought and potentially highly pathological. Perusing Stove’s list of 40 wrong statements through the lens of “if this statement were the output of an artificial neural network, what would the network be doing wrong?”, I feel like a lot of them are indeed classification errors.
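One concrete shape of that error, sketched with a deliberately trivial classifier (the clusters and the outlier point are made up for illustration, not a model of anything on Stove’s list): a classifier trained on a narrow world will still emit a confident label for inputs far outside it, rather than reporting that the input is not the kind of thing it classifies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two tidy clusters: the only "world" this classifier has ever seen.
X = np.vstack([rng.normal(-2, 0.5, size=(100, 2)),
               rng.normal(+2, 0.5, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)

# A point wildly outside the training distribution still gets a
# near-certain label; the classifier extends its scheme instead of
# noticing that the question doesn't apply.
print(clf.predict_proba([[40.0, 40.0]]))
```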
“Three” is a label that is activated by classification circuitry. The neural classification circuitry abstracts “three-ness” from the datastream as a useful compression. I myself have trained a neural network to accurately count the number of balls in a video stream; that neural network has a concept of three-ness. Unlike that particular network, humans then introspect on three-ness and get confused about what it is. We get further confused because “three-ness”, unlike, say, “duck-ness”, has additional intrinsic properties in the context of mathematics, so we feel it must be explained as something more than a useful compression filter. “Three is a real object.” “There is no real number three.” “Three is an essence.” “There is an ideal three which transcends actual triples of objects.” Almost any statement of the form “Three is …” falls into this trap of overinterpreting a classification scheme.
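Here is a much-reduced sketch of that kind of counter (synthetic 8×8 “frames” with bright pixels standing in for balls, and sklearn instead of the setup I actually used; every detail is purely illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_image(n_balls, size=8):
    """A crude stand-in for a video frame: a grid with n_balls bright pixels."""
    img = np.zeros(size * size)
    img[rng.choice(size * size, n_balls, replace=False)] = 1.0
    return img

counts = rng.integers(0, 6, size=2000)
X = np.array([make_image(n) for n in counts])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, counts)

# "Three-ness" in this network is exactly the activation of output
# unit 3: a label learned because it compresses the data well.
# There is nothing left over to get confused about.
print(clf.predict_proba([make_image(3)]))
```

The network’s “three” is a pattern of weights that fires on triples; asking what three *is* beyond that, for this system, is plainly a category error, and I suspect the human case differs mainly in our ability to introspect on the label.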
All of the above can probably be corrected by consistently replacing the symbol with the substance and tabooing words, rather than playing syntactic games with symbols.