[SEQ RERUN] Where Recursive Justification Hits Bottom
Today’s post, Where Recursive Justification Hits Bottom, was originally published on 08 July 2008. A summary (taken from the LW wiki):
Ultimately, when you reflect on how your mind operates and consider questions like “why does Occam’s razor work?” and “why do I expect the future to be like the past?”, you have no option but to use your own mind. There is no way to jump to an ideal state of pure emptiness and evaluate these claims without using your existing mind.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we’ll be going through Eliezer Yudkowsky’s old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Is Morality Given?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day’s sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Note: the Popperians who were on here a while ago shed new light on this post by bringing up fallibilism. I’m still sort of sad they ended up in a trollish relationship with us.
I had not realized there was a faction of “Popperians” around here, and thus went googling to try to figure out who they are/were! My tentative conclusion is that the primary authors with this philosophical starting point who evaporated are “HughRistik” and “curi”, who respectively have blogs at Feminist Critics and Fallible Ideas. I drew this conclusion from a hunt for usage of “fallibilism”, arranged chronologically here for the convenience of anyone who wants to see the details...
In Jan 2008 Eliezer wrote But There’s Still A Chance, Right? which hits only because of an April 2012 comment by pnrjulius.
In May 2008 Eliezer wrote Science Isn’t Strict Enough which hits only because of a May 2008 comment by poke that makes a valuable distinction while summarizing Hume’s positions.
In April 2009 HughRistik wrote Heuristic is not a bad word which hits because it is the one and only article under that tag.
In April 2009 (6 days later) Swimmy wrote Awful Austrians which hits because of an April 2009 comment by badger who notes that “Barry Smith tries to inject fallibilism into praxeology in this article (the diagram on the last page is particularly interesting).”
In May 2009 byrnema wrote Dissenting Views which hits due to a May 2009 back-and-forth between HughRistik and pjeby.
In April 2011 curi wrote two articles one day apart titled Bayesian Epistemology vs Popper and Popperian Decision making. The first actually uses the term; the second has it only in the comments, including some relatively involved comment trees with people like Tetronian, Peteredjones, GuySrinivasan, and Desrtopa.
Edited To Add: Oh interesting… There’s Eliezer over on Feminist Critics offering counter-evidence to a theory.
Wow, good job!
Yes, I was referring to curi and BrianScurfield.
Because you are making a mistake so useful that it usually pays to keep making it. Newtonian physics is a mistake that works in day-to-day life on this planet, so unless you’re going fast or far, it’s not a terrible mistake to keep making. Sir Karl Popper’s body of work on the problem of induction is worth a read here.
My theory is that a simple answer which simultaneously does not explain too much is more readily falsified. The simplest answer, ‘God did it,’ isn’t the best one. But a simple answer that explains a few things well can be tested, while a complex answer that explains those same few things is harder to test. That’s why Occam’s Razor is a worthy standard. It makes for better poetry of thought, but in itself isn’t a measure of truth.
Self-contradiction is a good jumping-off point for recursive justifications. It’s the only one I know of, but probably not the only one. When a recursive justification leads to a self-contradiction, you can stop peeling the layers back, because going further will not be helpful. Some recursive justifications don’t lead to self-contradictions, and that goes back to the problem of definitions, also well mapped by Sir Popper.
Spend some time with infants, with children, in mental hospitals, prisons or social work settings to get a different perspective on how accommodating nature is to large percentages of humanity who are indeed completely unsuited to reasoning, anti-helpful, anti-Occamian and anti-Laplacian.
Yes. Sir Popper’s theory of falsification does just that.
I know of one proposed bottom to recursive justification. Harold Walsby’s ‘absolute assumption’ first published in his 1947 book “The Domain of Ideologies.” The absolute assumption is that one is distinct from that which is not one. The absolute assumption occurs in infancy and most or all subsequent beliefs are echoes of that belief. ‘Mommy is not me and that’s wonderful / terrible’ is a start. Walsby’s ideas were carried forward by George Walford, and can be found online with minimal effort.
“If there is an underlying oneness of all things, it does not matter where we begin, whether with stars, or laws of supply and demand, or frogs, or Napoleon Bonaparte. One measures a circle, beginning anywhere.”—Charles Fort, “Lo!” (1931)
Assume that science is about “useful models.”
If model X and model Y produce equally useful predictions, they have equal value. If model X is simpler, then it gets you more value for your costs (time spent thinking, any exhaustion of “mental muscles,” etc.).
One can also point out that the opposite rule (seek the most complex model) is impossible to satisfy, since you can always add additional irrelevant factors.
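To make that tie-breaking rule concrete, here is a rough sketch (my own toy example, not anything from the thread): fit a straight line and a quintic to the same noisy linear data, check that they predict held-out points about equally well, and let parameter count decide.

```python
# Toy illustration: when two models predict equally well,
# prefer the one with fewer parameters.
import numpy as np

rng = np.random.default_rng(0)

# 80 points from a simple linear process; the last 30 are held out for testing.
x = rng.uniform(0, 10, size=80)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=80)
x_train, y_train = x[:50], y[:50]
x_test, y_test = x[50:], y[50:]

def test_rmse(degree):
    """Fit a polynomial of the given degree on the training split; return held-out RMSE."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return float(np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)))

candidates = {"line (2 parameters)": 1, "quintic (6 parameters)": 5}
errors = {name: test_rmse(degree) for name, degree in candidates.items()}
for name, err in errors.items():
    print(f"{name}: held-out RMSE = {err:.3f}")

# The two models are roughly equally useful here, so the simpler one wins:
# sort first by (coarsened) error, then by number of parameters.
preferred = min(candidates, key=lambda name: (round(errors[name], 1), candidates[name]))
print("preferred model:", preferred)
```

The point of the sketch is only that the preference for simplicity shows up as a cost argument, not an accuracy argument: both models earn about the same predictive value, and the line is cheaper to fit, store, and reason about.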
I’ve been slowly training one of my friends to use Occam’s Razor by complaining about needless complexity. It’s weird realizing how much of a conscious appreciation it has given me :)
I assume science is about falsifiable models that are also, hopefully, more humane than not.
Here I think we are saying similar things with different words. If model X is simpler, it can be more readily falsified and thus waste less.
Occam’s Razor is so-so for science, but a real treat for everyday use in deciding between otherwise apparently equal choices or avoiding waste. Hats off to Occam’s Razor for all the non-science things it does!
Not necessarily. There are plenty of cases where models of different complexity have the same predictive power (because they can be shown to be mathematically equivalent). For example, you can use the path integral formalism for QM (complicated), or you can use the Schrödinger equation instead (simpler).
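For concreteness, the standard textbook forms of the two formulations can be put side by side; both determine the same propagator, so their predictions are identical even though one looks far more complicated than the other (this is just an illustration, not something from the original comment):

```latex
% Schrodinger picture: a single partial differential equation for the wavefunction.
\[
  i\hbar \,\frac{\partial}{\partial t}\,\psi(x,t) \;=\; \hat{H}\,\psi(x,t)
\]
% Path-integral picture: the same propagator written as a functional integral
% over all paths, weighted by the classical action S.
\[
  K(x_b, t_b;\, x_a, t_a) \;=\; \int \mathcal{D}[x(t)]\; e^{\,i S[x(t)]/\hbar}
\]
```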
You are correct and I was mistaken—thank you! Up vote for you!
I would say it’s less a question of “falseness” and more one of “accuracy”: if Newtonian physics can get you to the moon, it doesn’t really matter that Relativity came along later and demonstrated that the model was incomplete. The important thing was that Newtonian physics was the system most likely to correctly answer the question of how to get to the moon at that time.
I emphasize accuracy because sometimes you just can’t get 100%: weather prediction does a great job with statistics, but it can’t guarantee me that it will rain on Tuesday and not on Friday.
That said, I’d agree that the ability to measure accuracy (and thus to falsify a model) is very important :)