I’m vaguely worried by the way ‘elementalistic’ structure and ‘non-elementalistic’ structure are separated in part A. It seems to have the connotation (I’m not sure if it was intended or not) that the elementalistic structures are better and the non-elementalistic structures are arbitrary.
However, there’s a reason why science, and physics especially, has increasingly moved toward mathematical points of view and the sorts of language you’ve included under non-elementalistic. They really are better at describing the natural world: e.g. you lose out on key concepts if you insist on completely dividing ‘space’ and ‘time’ rather than appreciating the way they interact.
This sort of feeds into part B. He describes languages as being similar or non-similar to the world and our nervous system, but the truth is that once you move beyond the ancestral environment the world is very different to our nervous system. To choose in favour of the languages similar to the nervous system over those similar to the world is ultimately to choose in favour of our own biases.
It seems to have the connotation (I’m not sure if it was intended or not) that the elementalistic structures are better and the non-elementalistic structures are arbitrary.
It seemed to me that Korzybski meant it the other way round.
Elementalistic thinking is focusing on things separately: having a list of nouns and trying to assign adjectives to each of them independently. Non-elementalistic thinking is focusing on relations between things, because sometimes a meaningful explanation requires describing the interaction between them.
we may have languages of elementalistic structure such as ‘space’ and ‘time’, ‘observer’ and ‘observed’, ‘body’ and ‘soul’, ‘senses’ and ‘mind’, ‘intellect’ and ‘emotions’, ‘thinking’ and ‘feeling’, ‘thought’ and ‘intuition’, etc., which allow verbal division or separation.
That is, in elementalistic thinking we talk about space separately and time separately, and we cannot invent the theory of relativity. We also speak about intellect separately (creating the idea of “Vulcan rationality”), and emotions separately, etc. As long as we have “intellect” and “emotions” as separate concepts, we are able to produce wisdom like “well, intellect is important, but emotions are also very important” (i.e. both the noun “intellect” and the noun “emotion” have the attribute “important”). We are “handicapped by semantic blockages” that prevent us from speaking about, e.g., rational and irrational emotions.
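To make the relativity example concrete (this is standard special relativity, not a quote from Korzybski): taken separately, neither a time difference nor a spatial distance is the same for all observers; only the combined spacetime interval is invariant.

$$ s^2 = c^2\,\Delta t^2 - \left(\Delta x^2 + \Delta y^2 + \Delta z^2\right) $$

Here $\Delta t$ and $(\Delta x, \Delta y, \Delta z)$ each change when you change reference frames, while $s^2$ stays the same, so the physically meaningful quantity lives at the level of the relation between space and time, not at the level of ‘space’ and ‘time’ taken one at a time.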
He describes languages as being similar or non-similar to the world and our nervous system, but the truth is that once you move beyond the ancestral environment the world is very different to our nervous system.
I understood it as: our nervous system is capable of understanding nature when using the language of math and physics (not just literally the equations, but generally the way scientifically literate people speak), but we lose that capacity when using the inexact language of metaphors, or when insisting on using concepts that don’t correspond to the real world (such as Newton’s absolute time).