You can come up with a theory, grounded on principles that seem reasonable, rather than focusing on gathering evidence. In the end, the theory has to explain not “the facts” but “the observations”.
Pervasive error is any systematic and significant distortion of thought that impacts the whole discourse of a civilization.
...
We can only build a useful theory of pervasive error by working carefully forward from axiomatic principles, not backward from observed reality—thinking logically, not scientifically.
I know this is weird. Let’s start by justifying this unconventional methodology.
So let’s avoid the factual question of whether modern pervasive error exists. Instead we’ll ask a theoretical question: if pervasive error did exist, what would it look like? How would it work? What would we expect its effects to be?
A well-formed answer is a constructive proof. We reason forward—from ultimate cause, to proximate cause, to phenomenon. The way to explain error is to design error.
Once such a design for error is clearly presented, anyone can compare it to the reality they think they see around them. If they see a match, they have an explanation.
If not—maybe there is some other explanation. Or maybe there is no error at all. In the end, everyone makes this call alone. Your horse is a horse. Eventually he will get thirsty.
You can come up with a theory, grounded on principles that seem reasonable, rather than focusing on gathering evidence. In the end, the theory has to explain not “the facts” but “the observations”.
That’s a good idea. I hadn’t thought about it like that.
Then you might appreciate (at least the first part) of this article: https://americanmind.org/essays/the-clear-pill-part-2-of-5-a-theory-of-pervasive-error/
I appreciate this article very much. I read the whole thing and was disappointed when I realized Curtis Yarvin hadn’t finished the series yet. It already has many great insights and illuminating points. I’ll be digesting the implications for a while.
Diversity of approaches is important in this game. My favorite thing about it is how Yarvin attacks a closely related problem from a different perspective. In particular:
He focuses on the political economy. (I deliberately de-emphasize politics when choosing where to focus.)
He debugs things from first principles. “It is always better to debug forward.” (I prefer to debug backwards.)
I agree with almost everything he says, but I disagree with his claim that it is “always” better to debug forward. Debugging forward is better when you have a small dataset, as with the historical sweep of broad political ideologies (the subject of Yarvin’s writing). When you’re dealing with smaller problems, like niche technical decisions, there’s a greater diversity of data and therefore a greater opportunity to figure things out inductively.
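The forward/backward distinction above can be made concrete with a toy debugging session. This is just an invented sketch (the buggy function, data, and expectations are made up for illustration), not anyone’s actual methodology:

```python
# Toy illustration of the two directions (function and data are invented).
def mean(xs):
    return sum(xs) / (len(xs) - 1)  # planted bug: off-by-one denominator

data = [2.0, 4.0, 6.0]

# Backward debugging: start from the symptom and trace toward the cause.
observed = mean(data)  # 6.0, but the true mean is 4.0
# From the wrong output we inspect intermediates: sum(data) is 12.0 (fine),
# so suspicion falls on the denominator.

# Forward debugging: state what each step should yield from first principles,
# then walk forward and find the first violated expectation.
checks = [
    ("sum", sum(data) == 12.0),       # holds
    ("count", (len(data) - 1) == 3),  # fails: the bug is localized here
]
first_failure = next(name for name, ok in checks if not ok)
print(first_failure)  # -> count
```

Backward debugging starts at the observed anomaly and works toward its cause; forward debugging derives what each step must satisfy and finds the first step that breaks.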
Debugging forward is better when you have a small dataset,
I’d say the difference lies elsewhere (trust, belief, etc.). Maybe it’s hard to find people who, upon seeing a proof whose conclusion they disagree with, actually examine it for the flaw. (Or people who want proofs at all.) Experimentation may enable finding ways to improve; working through everything logically may enable finding the optimal or closed-form solution; Fermi estimates enable finding the order of magnitude of an effect (though that isn’t really distinct from experimentation).
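To illustrate the Fermi-estimate point, here is a minimal sketch of the classic piano-tuners estimate. Every input is an order-of-magnitude guess invented for illustration, not measured data:

```python
# Hypothetical Fermi estimate: how many piano tuners in a city of ~3 million?
# All inputs below are rough, invented guesses, not facts.
population = 3_000_000           # people in the city, roughly
people_per_household = 2         # rough average household size
pianos_per_household = 1 / 20    # maybe 1 in 20 households owns a piano
tunings_per_piano_per_year = 1   # tuned about once a year
tunings_per_tuner_per_year = 2 * 5 * 50  # 2/day, 5 days/week, 50 weeks

households = population / people_per_household
pianos = households * pianos_per_household
tunings_needed = pianos * tunings_per_piano_per_year
tuners = tunings_needed / tunings_per_tuner_per_year

print(round(tuners))  # -> 150: an order of magnitude, not a precise count
```

The point is that multiplying a chain of rough factors yields an answer trustworthy only to within a factor of ten or so, which is exactly the “order of magnitude of an effect” claim above.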