Always know where your abstractions break
General relativity plus quantum field theory can describe almost everything in the universe. There are a few exceptions, like cosmic expansion and black holes, but to human beings confined to a single solar system, fundamental physics is (for all practical purposes) a solved problem. Yet there are many things we don’t know. It’s as if the universe were a game of chess: we’ve learned the basic rules but are still figuring out the strategies.
All the universe cares about is fundamental physics. The universe always obeys the small-scale fundamental laws of physics. The universe never does anything else. It doesn’t care about evolution or chemistry or orbital mechanics or beauty. All of those things are high-level abstractions we (or evolution) invented to make sense of the world. Most of the time, when we think about how the world works, we don’t think about fundamental physics. We use our higher-level abstractions instead.
Which is fine…most of the time. The problem is that any theory other than “the universe always obeys the fundamental laws of physics” is wrong, in the sense that it is not perfectly generalizable.
There are many ways abstractions can malfunction when misapplied.
All abstractions have limited domains of applicability. Modern political ideologies—Marxist revolutionary theory, libertarianism, feminism—were invented in the context of an industrial civilization. Try too hard to apply these ideas to New Guinean hunter-gatherers or to medieval Japan and they’ll cloud your ability to understand what’s actually going on.
All high-level abstractions are, ultimately, probabilistic. Statistical mechanics almost always works. Almost.
Perhaps most importantly, “the concepts we use in everyday life are fuzzy, and break down if pushed too hard”. Even Newton’s Laws of Motion break when you apply them at too small a scale.
Even general relativity and quantum field theory are not universally generalizable. General relativity breaks down on small-scale phenomena. Quantum field theory breaks down on large-scale phenomena.
Postmodernists use the idea of leaky abstractions to dismiss the idea of objective truth entirely. That’s like driving your car into the ocean and then declaring that cars don’t work. Cars do work, but you need to take care of yours and drive it only on the terrain it functions on.
The most dangerous philosophers aren’t the postmodernists who believe nothing is true. The most dangerous philosophers are the ideologues who believe their particular ideology is true.
There is nothing wrong with believing true things are true. One absolutely should believe true things are true. The problem with ideologues is that they believe their personal ideology is absolutely true. If you believe an ideology—any ideology—is absolutely true then you are wrong because high-level abstractions are always imperfect models of reality. Ideologues’ tools malfunction because ideologues don’t know the limits of their own tools. They aren’t even aware their tools have limits. Those who understand ideologies’ limits aren’t ideologues.
Every idea has a domain it can be applied to, beyond which the idea will malfunction. If you don’t understand an idea’s limitations then you don’t understand that idea.
What are the limitations/boundaries of this post’s own domain of application?
One sort-of counterexample would be The Unreasonable Effectiveness of Mathematics in the Natural Sciences, where a lot of math has been surprisingly accurate even when its assumptions were violated.
This post is a meta-analysis of your question; or your question is a meta-analysis of this post; either way works. He argues that the context/domain of application depends on the abstraction/generalization. When the context changes, the abstraction usually changes too. This post’s focus is on being aware of when the context changes and when the abstraction changes.
Rationality.
That’s a very abstract response, could you give a more concrete explanation?
I can give a concrete example.
If you’re writing a novel (novel-writing is not a subfield of rationality), then you don’t need to keep in mind the principle “[i]f you don’t understand an idea’s limitations then you don’t understand that idea.”
On the other hand, if you are trying to figure out whether Capitalism is the best ideological framework to apply to a region (a question within the domain of rational analysis), then you absolutely must keep in mind the limits of the Capitalist framework.