I don’t think we know to what extent the neural structures that represent an abstract object in different individuals need to be isomorphic.
To present an extreme example, imagine a human and a hypothetical alien who both share an emotion that we can categorise behaviourally as “love”, even though their neural structures are unrelated. In both cases we can use the same concept of “love” to make predictions and draw inductive inferences; the concept of “love” applies to both.
Should we think they must share the same relevant neural structures in order to share the concept of “love”? I would not expect that: entirely different neural structures could conceivably produce ‘things’ that are clearly analogous. We could say that in this case there is no single ‘love’ but two different emotions, yet we can say the same about many other ‘things’ that we treat as one kind, like different designs of table.
The same might hold for mental representations across humans. To what extent should we think that two humans represent things via the same neural structures? I don’t know.
but there must be a shared property of both physical shapes in order to describe that property’s dynamics and call it love, yeah? as long as we limit ourselves to describing physical data using our abstractions, the effect of being abstract is that the shape of the concept is built out of a grammar that generates and accepts more structural variations. the grammar defining the concept “love” presumably involves component features such as “agentic caring”, which is a feature of action patterns within and between brains (those actions themselves a spacetime shape of energy transfer), or “enjoyment”, a feature I would guess is defined by the satisfaction of agentic caring. that is to say: whatever brain you happen to have implementing agentic caring about another brain is in a state of satisfaction when the cared-about object falls within the range of target shapes it would like that object to take, this satisfaction in turn defined by the lack of intervening action the evaluating brain generates in response to the other brain’s state.
obviously love has a lot of parts, and we’re always at risk of missing some; love is a fuzzy thing and we’re never totally sure we’ve labeled it right.
but to the degree it’s the same, it has to have some sort of structural similarity. abstraction exists in terms of which patterns we can call the same due to shared features of their grammar.
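a minimal sketch of what i mean, in python. the feature names and the two state encodings are hypothetical, just stand-ins for “structural variants the grammar accepts”:

```python
# a concept as a "grammar" over component features: any state that
# exhibits all the required features counts as an instance, no matter
# how the state itself happens to be structured.

# hypothetical required features for the concept "love"
REQUIRED_FEATURES = {"agentic_caring", "enjoyment"}

def matches_concept(state_features: set[str]) -> bool:
    """accept any structural variant that exhibits every required feature."""
    return REQUIRED_FEATURES <= state_features

# two very differently organized "brains" whose states nonetheless
# both exhibit the required features
human_state = {"agentic_caring", "enjoyment", "oxytocin_release", "memory_binding"}
alien_state = {"agentic_caring", "enjoyment", "chromatophore_display"}

print(matches_concept(human_state))  # True
print(matches_concept(alien_state))  # True
```

the point of the sketch: what the instances share isn’t the whole structure, only the features the grammar cares about.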
Do they have to share a structure though? I was trying to present a possible exception.
Part of my point was that evolutionary processes might be able to create two different mechanisms that are behaviorally equivalent. It would be analogous to two machines giving the same output through entirely different inner processes.
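A minimal sketch of what I mean by behavioral equivalence, assuming nothing beyond the analogy itself (both machine names and the test inputs are made up for illustration):

```python
# two "machines" with entirely different inner processes that are
# behaviorally equivalent: an identical input-output mapping.

def machine_a(n: int) -> int:
    """Accumulates step by step, like a slow mechanical process."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def machine_b(n: int) -> int:
    """Jumps straight to the answer via a closed-form shortcut."""
    return n * (n + 1) // 2

# From the outside the two are indistinguishable, even though nothing
# about their inner structure is shared.
assert all(machine_a(n) == machine_b(n) for n in range(100))
```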
‘Love’ is ultimately just a high-level description of behavior in living systems; people are still arguing over how accurately our folk mental concepts describe brain states.
There’s also the fact that we can seemingly loosen, as much as we want, the structural precision with which we define ‘things’, especially with artifacts like tables (because of the metaphysical concept of ‘purpose’). If we leave composition unrestricted, the claim that any two instances of what we call the same thing share a common basic descriptive structure becomes trivial. I believe this is part of the reason numbers are so ‘useful’: they are such generic/basic/underspecified constructs that it is difficult to imagine a universe in which there’s no stuff we can abstract into countable things.
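A toy illustration of the triviality worry (the ‘instances’ here are arbitrary placeholders; the point is only that an unrestricted predicate can always be constructed):

```python
# If we allow arbitrarily gerrymandered properties, any two instances
# trivially "share a structure": the property of being one of them.

def make_shared_property(instance_a, instance_b):
    """Construct a predicate that both instances satisfy by definition."""
    return lambda x: x is instance_a or x is instance_b

human_love = object()  # stand-in for one realization
alien_love = object()  # stand-in for a structurally unrelated one

is_love = make_shared_property(human_love, alien_love)
print(is_love(human_love), is_love(alien_love))  # True True
print(is_love(object()))                         # False
```

Unless we restrict which predicates count as genuine structure, this construction is always available, which is why the notion needs a ‘natural’ restriction.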
If we don’t want the idea of shared structure to be trivial, we need to restrict the idea of a ‘thing’ to something ‘natural’ rather than ‘social’. For an emotion to be ‘natural’ means that it is an accurate and consistent description of a brain state, with somewhat defined boundaries separating it from other so-called ‘emotions’. I’m assuming a computational theory of mind here.
Then, if we have ‘human love’ and ‘alien love’, it might just be that the neural structures underlying ‘alien love’ are no more similar to those of ‘human love’ than to those of any other human emotion. What the two would share, to both be called “love”, would be their behavioral effects: their effects on the peripherals of the brain-body system and on subsequent behavior. This might be possible through convergent evolution; if it’s not, then there has to be a shared computational structure behind the behavioral program we identify as ‘love’.
In a theory of embodied mind, the peripherals would themselves be part of the mental process of ‘love’, so human and alien love would share a common structure almost by necessity.