Regarding the first part, here’s what comes to mind: Long before brains evolved any higher capacities (for “conscious”, “self-reflective”, etc. thought), they evolved to make their hosts respond to situations in “evolutionarily useful” ways. If you see food, one set of neurons fires and triggers one group of responses; if you see a predator, a different set fires and triggers a different group of responses.
Then you might define “food (as perceived by this organism)” as “whatever tends to make this set of neurons fire when light in the right range reflects off it and reaches the organism’s eyes”. Boundary cases (like something whose color is on the edge of what’s recognized as food) are probably resolved “stochastically”: whether a near-the-border object actually fires the “food” neurons probably depends significantly on silly little environmental factors that normally don’t make a difference; we tend to call this “random” and say that this almost-food thing has a 30% chance of making the “food” neurons fire.
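To make the “stochastic boundary” idea concrete, here’s a minimal sketch (my own toy model, not biology; the threshold, noise scale, and all numbers are invented for illustration): treat the “food” group as a threshold detector whose input gets jittered by environmental noise.

```python
import random

FIRING_THRESHOLD = 0.5  # invented stand-in for the group's activation threshold

def food_neurons_fire(food_likeness: float, noise_scale: float = 0.1) -> bool:
    """Noisy threshold detector: the "food" group fires if the perceived
    food-likeness, jittered by environmental noise, clears the threshold."""
    perceived = food_likeness + random.gauss(0.0, noise_scale)
    return perceived > FIRING_THRESHOLD

# Far from the boundary the outcome is effectively deterministic; near it,
# the noise decides, which from outside looks like "a 30% chance of firing".
trials = 10_000
print(sum(food_neurons_fire(0.90) for _ in range(trials)) / trials)  # ~1.0
print(sum(food_neurons_fire(0.45) for _ in range(trials)) / trials)  # ~0.3
```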
There are probably some self-reinforcing dynamics that try[1] to make the neurons resolve one way or the other quickly, and to some extent quick resolution is more important than accuracy. (See Buridan’s principle: “A discrete decision based upon an input having a continuous range of values cannot [always] be made within a bounded length of time.”) Also, extremely rare situations are unimportant, evolutionarily speaking, so “the API does not specify the consequences” of feeding the brain strange and contrived inputs.
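Both points can be illustrated with a toy winner-take-all model (again my own sketch, with invented dynamics): two accumulators each excite themselves and inhibit the other, so any initial difference gets amplified. Clear cases resolve in a few steps, near-ties take longer, and an exact tie never resolves within any fixed step budget, which is Buridan’s principle in miniature.

```python
def resolve(food_evidence: float, predator_evidence: float,
            gain: float = 1.2, max_steps: int = 100) -> str:
    """Toy winner-take-all: each accumulator excites itself and inhibits
    the other, so any initial difference grows until one side wins."""
    a, b = food_evidence, predator_evidence
    for step in range(1, max_steps + 1):
        a, b = a + gain * (a - b), b + gain * (b - a)
        if abs(a - b) > 1.0:
            return f"{'food' if a > b else 'predator'} after {step} steps"
    return "unresolved within the step budget"  # Buridan's principle

print(resolve(0.52, 0.48))      # food after 3 steps
print(resolve(0.5000001, 0.5))  # resolves, but only after ~14 steps
print(resolve(0.5, 0.5))        # exact tie: never resolves
```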
(“This set of neurons fires” is not a perfectly well-defined and uniform phenomenon either. But that doesn’t prevent evolution from successfully making organisms that make it happen.)
Before brains (and alongside brains), organisms could adapt in other ways. I think the advantage of brains is that they increase your options, specifically by letting you choose and execute complex sequences of muscular responses to situations in a relatively cheap and sensitive way, compared to rigging up Rube Goldberg macroscopic-physical-event machines that could execute the same responses.
Having a brain with different groups of neurons that execute different responses, and having certain groups fire in response to certain kinds of situations, seems like a plausibly useful way to organize the brain. It would mean that, when fine-tuning how group X of neurons responds to situation Y, you don’t have to worry about what impacts your changes might have in completely different situations ABC that don’t cause group X to fire.
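As a rough sketch of that isolation property (a dispatch-table caricature with made-up detectors and responses): each response is gated by its own detector, so editing one response can’t change behavior in situations whose detector doesn’t fire.

```python
def looks_like_food(stimulus: dict) -> bool:
    return stimulus.get("smell") == "sweet"

def looks_like_predator(stimulus: dict) -> bool:
    return stimulus.get("size", 0) > 10

# Each (detector, response) pair is an isolated module: fine-tuning the
# response on one row can't affect situations that only fire another row.
RESPONSES = [
    (looks_like_food, "approach and eat"),
    (looks_like_predator, "flee"),
]

def respond(stimulus: dict) -> str:
    for detector, response in RESPONSES:
        if detector(stimulus):
            return response
    return "ignore"

print(respond({"smell": "sweet"}))   # approach and eat
print(respond({"size": 12}))         # flee
print(respond({"smell": "bitter"}))  # ignore
```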
I suspect language was ultimately built on top of the above. First you have groups of organisms that recognize certain things (i.e. they have certain groups of neurons that fire in response to perceiving something in the range of that thing) and respond in predictable ways; then you have organisms that notice the predictable behavior of other organisms, and develop responses to that; then you have organisms noticing that others are responding to their behavior, and doing certain things for the sole purpose[1] of signaling others to respond.
Learning plus parent-child stuff might be important here. If your helpless baby responds (by crying) in different ways to different problems, and you notice this and learn the association, then you can do better at helping your baby.
Anyway, I think that at least the original notion of “a thing that I recognize to be an X” is ultimately derived from “a group of neurons that fires (reasonably reliably) when sensory input from something sufficiently like an X enters the brain”. Originally, the neuronal connections (and the concepts we might say they represented) were probably mostly hardcoded by DNA; later they probably developed a lot of “run-time configuration” (i.e. the DNA lays out processes for having the organism learn things, ranging from “what food looks like” [and having those neurons link into the hardcoded food circuit], through learning to associate mostly-arbitrary “language” tokens to concepts that existing neuron-groups recognize, to having general-purpose hardware for describing and pondering arbitrary new concepts). But I suspect that the underlying “concept X <--> a group of neurons that fires in response to perceiving something like X, which gates the organism’s responses to X” organizing principle remains mostly intact.
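To close with one more toy sketch (invented, and obviously nothing like real neural wiring): the “hardcoded circuit plus run-time configuration” picture amounts to the response side being fixed while the detector that gates it is filled in by learning.

```python
class Concept:
    """The organizing principle from the text: a concept is a detector
    (which inputs make this neuron group fire) gating a fixed response."""
    def __init__(self, detector, response):
        self.detector = detector
        self.response = response

    def react(self, stimulus: dict):
        return self.response if self.detector(stimulus) else None

# "Hardcoded by DNA": detector and response both fixed from the start.
predator = Concept(lambda s: s.get("size", 0) > 10, "flee")

# "Run-time configuration": the food-response circuit is fixed, but what
# counts as food is learned and linked into it.
learned_food_smells = set()
food = Concept(lambda s: s.get("smell") in learned_food_smells,
               "approach and eat")

learned_food_smells.add("sweet")  # the organism learns what food smells like
print(food.react({"smell": "sweet"}))  # approach and eat
print(predator.react({"size": 12}))    # flee
```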
[1] Anthropomorphic language: shorthand for the outputs of evolutionary selection.