Others have given very practical answers, but it sounds to me like you are trying to ground your philosophy in something more concrete than practical advice, and so you might want a more ivory-tower sort of answer.
In theory, it’s best not to assign anything 100% certainty, because it’s impossible to update such a belief if it turns out not to be true. As a consequence, we don’t really have a set of absolutely stable axioms from which to derive everything else. Even “I think therefore I am” makes certain assumptions.
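In symbols, the standard Bayesian update makes the problem visible at a glance (just the textbook rule, sketched for concreteness):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

If $P(H) = 1$, then $P(\neg H) = 0$, the second term in the denominator vanishes, and $P(H \mid E) = 1$ no matter what evidence $E$ arrives; a probability of exactly 0 is stuck in the same way. That is the formal sense in which 100% certainty can never be updated.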
Worse, it’s mathematically provable (via Löb’s Theorem) that no system of logic can prove its own validity. It’s not just that we haven’t found the right axioms yet; it’s that it is impossible in principle for any set of axioms to prove its own validity. We can’t just use induction to prove that induction is valid.
I’m not aware of this being discussed on LW before, but how can anyone function without induction? We couldn’t conclude that anything would happen again, just because it had worked a million times before. Why should I listen to my impulse to breathe, just because it seems like it’s been a good idea the past thousand times? If induction isn’t valid, then I have no reason to believe that the next breath won’t kill me instead. Why should I favor certain patterns of twitching my muscles over others, without inductive reasoning? How would I even conclude that persistent patterns in the universe like “muscles” or concepts like “twitching” existed? Without induction, we’d literally have zero knowledge of anything.
So, if you are looking for a fundamental rationalist presumption from which to build everything else, it’s induction. Once we decide to live with that, induction lets us accept fundamental mathematical truths like 1+1=2, and build up a full metaphysics and epistemology from there. This takes a lot of bootstrapping, by improving on imperfect mathematical tools, but appears possible.
(How, you ask? By listing a bunch of theorems without explaining them, like this: We can observe that simpler theories tend to be true more often, and use induction to conclude Occam’s Razor. We can then mathematically formalize this into Kolmogorov complexity. If we compute the Kolmogorov complexity of all possible hypotheses, we get Solomonoff induction, which should be the theoretically optimal set of Bayesian priors. Cruder forms of induction also give us evidence that statistics is useful, and in particular that Bayes’ theorem is the optimal way of updating existing beliefs. With sufficient computing power, we could theoretically perform Bayesian updates on these universal priors, for all existing evidence, and arrive at a perfectly rational set of beliefs. Developing a practical way of approximating this is left as an exercise for the reader.)
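For reference, the two formal pieces named in that chain have standard textbook forms (sketched here loosely, glossing over technicalities): the Solomonoff prior weights each hypothesis $h$ roughly by its Kolmogorov complexity $K(h)$, the length of the shortest program that generates it, and Bayes’ theorem then does the updating:

$$P(h) \propto 2^{-K(h)}, \qquad P(h \mid E) = \frac{P(E \mid h)\,P(h)}{\sum_{h'} P(E \mid h')\,P(h')}$$

Shorter programs get exponentially more prior weight, which is Occam’s Razor made quantitative; the rest is ordinary Bayesian updating.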
No one is really very happy about having to take induction as a leap of faith, but it appears to be the smallest possible assumption that allows for the development of a coherent and broadly practical philosophy. We’re making a baseless assumption, but it’s the smallest possible one, and if it turns out there was a mistake in all the proofs of Löb’s theorem and there is a system of logic that can prove its own validity, I’m sure everyone would jump on that. But induction is the best we have.
This, and your links about Löb’s theorem, is one of the most fear-inducing pieces of writing that I have ever read. Now I want to know whether I have understood it properly. I find that the best way to do that is to first explain what I understand to myself, and then to other people. My explanation is below:
I supposed that rationalists would have some simple, intuitive and obvious presumptions as a foundation (e.g. most of the time, my sensory organs reflect the world accurately). But apparently, rationality puts its foundation on a very specific kind of statement, the most powerful, wild and dangerous of them all: the self-referential statement:
Rationalists presume Occam’s razor because it proves itself
Rationalists presume induction because it proves itself
etc.
And a collection of these self-referential statements (if you collect the right elements) would reinforce one another. Upon this collection, the whole field of rationality is built.
To the best of my understanding, this train of thought is nearly identical to the Presuppositionalism school of Reformed Christian Apologetics.
The Reformed/Presbyterian understanding of the Judeo-Christian God (from here on simply referred to as God) is that God is a self-referential entity, owing to their interpretation of the famous Tetragrammaton. They believe that God is true for many reasons, but chief among them is that it attests itself to be the truth.
Now I am not making any statement about rationality or presuppositionalism, but it seems to me that there is a logical veil that we cannot get to the bottom of and it is called self-reference.
The best that we can do is to get a non-contradicting collection of self-referential statements that covers epistemology and axiology, and by that point, everyone is rational.
Very close, but not quite. (Or, at least not quite my understanding. I haven’t dug too deep.)
A reply to Presuppositionalism
Rationalists presume Occam’s razor because it proves itself. Rationalists presume induction because it proves itself. Etc.
I wouldn’t say that we should presume anything because it proves itself. Emotionally, we may have a general impulse to accept things because of evidence, and so it is natural to accept induction using inductive reasoning. So, that’s likely why the vast majority of people actually accept some form of induction. However, this is not self-consistent, according to Löb’s theorem. We must either accept induction without being able to make a principled argument for doing so, or we must reject it, also without a principled reason.
So, Presuppositionalism appears to be logically false, according to Löb’s theorem.
I could leave it at that, but it’s bad form to fight a straw man rather than the strongest possible form of an argument. The steel man of Presuppositionalism might instead take certain propositions as a matter of faith, and make no attempt to prove them. One might then build much more complex philosophies on top of those assumptions.
Brief detour
Before I reply to that, let me back up for a moment. I Agree Denotationally But Object Connotationally with most of the rest of what you said above. (It seems to me to be technically true, but phrased in such a way that it would be natural to draw false inferences from it.)
If I had merely posited that induction was valid, I suspect it wouldn’t have been disconcerting, even if I didn’t offer any explanation as to why we should start there and not at “I am not dreaming” or any of the examples you listed. You were happy to accept some starting place, so long as it felt reasonable. All I did was add a little rigor to the concept of a starting point.
However, by additionally pointing out the problems with asserting anything from scratch, I’ve weakened my own case, albeit for the larger goal of epistemic rationality. But since all useful philosophies must be based on something, none of them can prove their own validity either. The falling tide lowers all ships, but doesn’t change their hull draft or mast height.
So, we still can’t say “the moon is made of blue cheese, because the moon is made of blue cheese”. If we just assume random things to be true, eventually some of them might start to contradict one another. Even if they didn’t, we’d still have made multiple random assertions when it was possible to make fewer. It’s not practically possible to avoid using induction, so every practical philosophy uses it. However, adding further assertions is unnecessary.
So, I agree denotationally when you say “The best that we can do is to get a non-contradicting collection of self-referential statements that covers epistemology and axiology”, but the statement implies that all possible sets of starting points are equally valid, which I don’t agree with. I’ll concede that induction is only as valid as total epistemic nihilism (the position that nothing is knowable, not to be confused with moral nihilism, which has separate problems); I can’t justify accepting induction over rejecting it. However, once I accept at least one thing, I can use that as a basis for judging other tools and axioms.
A reply to the Presuppositionalism steel man
Let’s go back to the Presuppositionalism steel man. Rather than making a self-referential statement as a proof, it merely accepts certain claims without proof. Any given Presuppositionalist must accept induction to function in the real world. If they accept induction and the things induction supports anyway, then a philosophy that assumes only induction is simpler than one that adds further presuppositions. (Simpler being closer to the truth, according to Occam’s razor.)
They might accept induction but reject Occam’s razor, though. I haven’t thought through the philosophical implications of trying to reject Occam’s Razor, but at first glance it seems like it would make life impractically complicated. It doesn’t necessarily leave one unable to conclude that one should continue breathing, since breathing has always worked in the past. So, it’s not instant death, like truly rejecting induction, but I suspect that truly rejecting Occam’s razor, and completely following through with all the logical implications, would cause problems nearly as bad.
For example, overfitting might prevent drawing meaningful conclusions about how anything works, since trillions of arbitrarily complex functions can all be fit to any given data set. (For example, sums of different sine waves.) It may be possible to substitute some other principle for Occam’s razor to minimize this problem, but I suspect that it would then be possible to compare that method against Occam’s Razor (well, Solomonoff induction) and demonstrate that one produced more accurate results. There may already be a proof that Solomonoff induction is the best possible set of Bayesian priors, but I honestly haven’t looked into it. It may merely be the best set of priors known so far. (Either way, it’s only the best assuming infinite computing power is available, so the question is more academic than practical.)
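To make the overfitting worry concrete, here is a minimal sketch (assuming only NumPy; the data points and frequencies are made up for illustration). Five observations generated by a simple linear law are reproduced exactly by a five-term sum of sine waves, which nevertheless behaves very differently away from the data; for instance, any sum of sines is forced to predict 0 at t = 0, where the simple law gives 1:

```python
# A sum of sine waves with as many coefficients as data points fits the data
# exactly, yet is a very different function from the simple law behind it.
import numpy as np

x = np.array([0.3, 0.9, 1.5, 2.1, 2.7])    # five observed inputs
y = 2.0 * x + 1.0                           # observations from a simple law

freqs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
design = np.sin(np.outer(x, freqs))         # 5x5 matrix, one sine per column
coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)

def sine_model(t):
    return np.sin(np.outer(np.atleast_1d(t), freqs)) @ coeffs

print("residuals:", np.round(sine_model(x) - y, 6))   # ~0: hits every point
print("at t=0.0, sine model:", float(sine_model(0.0)[0]),
      "vs simple law:", 2.0 * 0.0 + 1.0)              # they disagree off-data
```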
General conclusions
So, it looks like this is the least bad possible philosophy, or at least quite close. It’s a shame we can’t reject epistemic nihilism, but pretty much everything else seems objectively suboptimal, even if some things may hold more aesthetic appeal or be more intuitive or easy to apply. (This is really math-heavy, and almost nothing in mathematics is intuitive. So, in practice we need lots of heuristics and rules of thumb to make day-to-day decisions. None of this is relevant except when those more practical methods fail us, like on really fundamental questions. The claim is just that all such practical heuristics seem to work by approximating Solomonoff induction. This allows aspiring rationalists to judge potential heuristics by this measure, and predict in which circumstances a heuristic will work or fail.)
It is NOT a guarantee that we’re right about everything. It is NOT an excuse to make lots of arbitrary presuppositions in order to get the conclusions we want. Anything with any assumptions is NOT perfect, but this is just the best we have, and if we ever find something better we should switch to that and never look back.
Now I am not making any statement about rationality or presuppositionalism, but it seems to me that there is a logical veil that we cannot get to the bottom of and it is called self-reference.
That assumes that a rational person is one who holds beliefs because of a chain of logic.
Empirically, superforecasters don’t simply try to follow a chain of logic to get their beliefs. A rational person in the LW sense thus is not one who holds beliefs because of a chain of logic.
Tetlock’s book gives a good account of how to form beliefs about the likelihood that one’s beliefs are true.
Empirically, superforecasters don’t simply try to follow a chain of logic to get their beliefs.
Neither do bad forecasters, or cranks, or schizophrenics. The suppressed premiss here is that superforecasters are right or reliable. But that implies that their claims are tested or testable, and that implies some basic presumptions of logic or empiricism.
Tetlock lays out a bunch of principles for coming to correct conclusions. One of the principles is being a fox that uses multiple chains, instead of trying to use one correct chain that rests on a foundation from which other beliefs can be logically deduced.
The paradigm that Arielgenesis proposes is to follow a hedgehog system where a single chain of logic can be relied on because certain basic presuppositions are accepted as true.
But that implies that their claims are tested or testable, and that implies some basic presumptions of logic or empiricism.
Holding a belief because of a chain of logic has little to do with the principle of empiricism.
Neither do bad forecasters, or cranks, or schizophrenics.
There are many ways to do bad forecasts. As far as the examples of cranks and schizophrenics go, those are usually hedgehogs. A lot of cranks usually follow a chain of logic. If you take people who think there are illegal tricks to avoid paying income tax, they usually have elaborate chains of logic to back up their case.
The suppressed premiss here is that superforecasters are right or reliable.
How do you know that I hold my belief based on a “suppressed premiss”? If something is suppressed and you can’t see it, maybe the structure of my reasoning process isn’t the structure you guess.
Tetlock lays out a bunch of principles for coming to correct conclusions. One of the principles is being a fox that uses multiple chains, instead of trying to use one correct chain that rests on a foundation from which other beliefs can be logically deduced.
Missing the point. The point is how their conclusions are verified.
Holding a belief because of a chain of logic has little to do with the principle of empiricism.
Logic is implicit in empiricism, because the idea that contradictions are false is implicit in the idea of disproof by contradictory evidence.
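Spelled out as a schema (just the textbook rendering of that point), disproof by contradictory evidence is modus tollens:

$$(H \rightarrow E), \;\neg E \;\vdash\; \neg H$$

Rejecting $H$ upon observing $\neg E$ only makes sense if $E \wedge \neg E$ is already treated as impossible, i.e. if the law of non-contradiction is in force.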
There are many ways to do bad forecasts. As far as the examples of cranks and schizophrenics go, those are usually hedgehogs. A lot of cranks usually follow a chain of logic. If you take people who think there are illegal tricks to avoid paying income tax, they usually have elaborate chains of logic to back up their case.
Missing the point. I didn’t say that logic is sufficient for correctness. I am saying that if you have some sort of black-box, but effective reasoning, then some kind of presupposition is going to be needed to verify it.
How do you know that I hold my belief based on a “suppressed premiss”? If something is suppressed and you can’t see it, maybe the structure of my reasoning process isn’t the structure you guess.
If you have other reasoning show it. Otherwise that was an irrelevant nitpick.
Logic is implicit in empiricism, because the idea that contradictions are false is implicit in the idea of disproof by contradictory evidence.
I think Science and Sanity lays out a framework for dealing with beliefs that doesn’t categorize them into true/false, and that it is better than the basic true/false dichotomy.
If you have other reasoning show it. Otherwise that was an irrelevant nitpick.
I care more about what Science and Sanity called semantic reactions than I care about presuppositions.
Basically, you feed the relevant data into your mind and then you let it process the data. As a result of processing it there’s a semantic reaction. Internally, the brain does that with a neural net that doesn’t use logical chains to do its work.
When I write here I point out the most important piece of the data, but not all of what my reasoning is based on because it’s based on lots of experiences and lots of empiric data.
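As a toy illustration of that kind of implicit processing (the weights below are made up, and nothing here is a claim about how brains actually work): a tiny fixed-weight network maps an input to an output score without any explicit chain of propositions appearing anywhere.

```python
import numpy as np

w1 = np.array([[0.4, -0.2, 0.7],
               [0.1,  0.9, -0.3]])   # made-up first-layer weights
w2 = np.array([0.5, -0.8, 0.6])      # made-up second-layer weights

def react(data):
    hidden = np.tanh(data @ w1)      # distributed, implicit representation
    return float(hidden @ w2)        # a single "reaction", no stated premises

print(react(np.array([1.0, 0.5])))   # the output comes with no list of reasons
```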
I think Science and Sanity lays out a framework for dealing with beliefs that doesn’t categorize them into true/false, and that it is better than the basic true/false dichotomy.
Using a ramified logic with more than two truth values is not the same as not using logic at all!
I care more about what Science and Sanity called semantic reactions than I care about presuppositions.
Basically, you feed the relevant data into your mind and then you let it process the data. As a result of processing it there’s a semantic reaction. Internally, the brain does that with a neural net that doesn’t use logical chains to do its work.
When I write here I point out the most important piece of the data, but not all of what my reasoning is based on because it’s based on lots of experiences and lots of empiric data.
That is such a vague description of reasoning that it covers everything from superforecasting to schizobabble. You have relieved yourself of the burden of explaining how reasoning works without presuppositions by not treating reasoning as something that necessarily works at all.
Could you define what you mean by “logic”, if not thinking in terms of whether a statement is true?
Thinking about how probable it is, or how much subjective credence it should have. There are formal ways of demonstrating how fuzzy logic and probability theory extend bivalent logic.
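For instance (the standard observation, stated for concreteness): the probability axioms give

$$P(\neg A) = 1 - P(A), \qquad P(A \vee B) = P(A) + P(B) - P(A \wedge B)$$

and when every probability is restricted to the values 0 and 1, these rules reproduce the classical truth tables, while intermediate values grade degrees of credence in between.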
Science and Sanity is not about probability theory or similar concepts of having numbers between 0 and 1.
“The map is not the territory” doesn’t mean “The map is the territory with credence X that’s between 0 and 1”. It’s rather a rejection of the concept of the “is of identity”, in favour of thinking in terms like semantic reactions.
I was pointing out that the claim that logic is implicit in empiricism survives an attack on bivalence. I couldn’t see any other specific point being made.
Let’s say I want to learn juggling. Simply reading a book that gives me a theory of juggling won’t give me the skill to juggle. What gives me the skill is practicing it and, through that practice, exposing myself to empirical feedback.
I don’t think it’s useful to model that empirical part of learning to juggle with logic.
Juggling with logic is a loose metaphor...literally, juggling is a physical skill, so it cannot be learnt from pure theory. But reasoning is not a physical skill.
If you were able to make implicit reasoning explicit, you would be able to do useful things like seeing how it works, and improving it. I’m not seeing the downside to explicitness. Implicit reasoning is usually more complex than explicit reasoning, and its advantage lies in its complexity, not its implicitness.
Juggling with logic is a loose metaphor...literally, juggling is a physical skill, so it cannot be learnt from pure theory. But reasoning is not a physical skill.
Why do you think the dualistic distinction between physical and mental is useful for skill learning? But if you want a more mental skill, how about dual n-back?
I’m not seeing the downside to explicitness.
The problem is that the amount of information that you can use for implicit reasoning vastly outweighs the amount of information for explicit reasoning. It’s quite often useful to make certain information explicit but you usually can’t make all available information that a brain uses for a reasoning process explicit.
Besides, neither General Semantics nor the Superforecasting principles are against using explicit reasoning. In both cases there are quite explicit heuristics about how to reason.
I started by saying that your idea that all reasoning processes are either explicit or implicit is limiting. In General Semantics you would rather say “X is more explicit than Y” instead of “X is explicit”.
Using the binary classifier means that your model doesn’t show certain information about reality that the General Semantics model shows.
“Explicitness is important” isn’t a defense at all because it misses the point. I’m not against using explicit information just as I’m not against using implicit information.
That is such a vague description of reasoning that it covers everything from superforecasting to schizobabble.
If you agree that it covers superforecasting, then my argument is right.
Using presuppositions is a very particular way of reasoning, and there are many other possible heuristics that can be used.
A LW comment also isn’t long enough to lay out a complete system of reasoning as complex as the one proposed in Science and Sanity or the one proposed in Superforecasting. That’s why I make general arguments and refer to the books for a more detailed explanation of particular heuristics.
Using presuppositions is a very particular way of reasoning, and there are many other possible heuristics that can be used.
There are basically two kinds of reasoning: the kind that can be made manifest (explicit, etc.) and the kind that can’t. The gold standard of solving the problem of presuppositions (foundations, intuitions) is to show that nothing presupposition-like is needed in explicit reasoning. Failed attempts tend to switch to implicit reasoning, or to take it that sufficiently obvious presuppositions don’t count as presuppositions (we can show this with induction... we can show this with empiricism).
I don’t think that’s the case. Trying to put complex concepts into two binary boxes is done very frequently in the Western tradition, but there’s no inherent argument that it’s the best way to do things.
Science and Sanity argues in detail why binary thinking is limiting.
As far as this particular case of the implicit/explicit distinction, most kinds of reasoning tend to be a mix.
Reasoning that’s completely explicit is the kind of reasoning that can be done by a computer with very limited bandwidth. We know that many problems can’t be solved by computers as easily as calculating 23472349 * 5435408, which can be done completely explicitly. If you limit yourself to what can be made completely explicit, you limit yourself to a level of intelligence that can’t outperform computers with very limited memory/CPU power.
Explicit reasoning has its disadvantages, but is still hard to do without. In talking about superforecasters, you are taking it that someone has managed to determine who they are, as opposed to ordinary forecasters, raving lunatics, etc. Determining that kind of thing is where explicit reasoning comes in... what’s the alternative? Groups of people intuiting that each other are reliable intuiters?
Explicit reasoning has its disadvantages, but is still hard to do without.
That’s why you mix it with implicit reasoning if you care about the outcome of the reasoning process. Doing everything implicitly is as bad as doing everything explicitly.
I would have thought the problem with doing everything explicitly is that it is not possible.
Our usual way of combining explicit and implicit reasoning is to reason explicitly from premises which we find intuitively appealing, i.e. which we arrive at by implicit reasoning. That isn’t a solution to the problem, that is the problem: everything is founded on presuppositions, and if they are implicit we can’t check how they are arrived at, and we also can’t check how reliable they are without needing to use further presuppositions.
Korzybski seems to be saying we should be using more implicit reasoning. I don’t see how that helps.
Korzybski seems to be saying we should be using more implicit reasoning. I don’t see how that helps.
I don’t think that’s what he’s saying. In the case of “consciousness of abstraction” he even encourages people to be explicit about things that they usually aren’t.
Korzybski takes a long book to explain how he thinks reasoning should be done and coins a bunch of basic concepts on which it should be built that are internally consistent. I don’t think I can give you a full understanding of how the framework works in the space of a few comments.
Does it address the problem at hand?
Most statements we make in general semantics are about maps; there’s no presumption that the map is real and is the territory. Indeed, being explicit about the fact that it isn’t is an important part.
How does that address the presumption problem? You could say that no statement made by anybody has any bearing on reality, so the presumptions they are based on don’t matter... but if that kind of sweeping anti-realism were a good solution, it would have been adopted long ago.
I don’t think General Semantics is anti-realism any more than Einstein’s relativity theory is anti-realism because it states that a lot is relative. I think General Semantics hasn’t been adopted because it’s actually hard to learn to switch to thinking in terms of General Semantics.
Academic science in the 20th century worked to compartmentalize knowledge by subjects in a quite specific way, and a discipline like General Semantics didn’t fit into that compartmentalization.
It’s similar to how Cybernetics as a field didn’t make it big because it doesn’t fit into the common categorisation.
I am not saying that GS is necessarily anti-realistic, just trying to find some relevance to your comment. I don’t suppose I will ever find out how GS solves the presupposition problem, since you seem to be more interested in saying how great it is in the most general possible terms.
Answering the question is like answering how some mathematical proof that goes on for 200 pages works. GS is a complex system that builds on itself.
Do you feel confident you personally have the answer in your own mind, or are you just running on the assumption that GS must contain it somewhere, because of its general wonderfulness?
The outside view: http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/3v49
I think the problem doesn’t make sense in the GS paradigm. Kuhn wrote that problems posed in one paradigm aren’t necessarily expressible in the paradigm of another framework, and I think this is a case like that.
According to Kuhn science needs to have a crisis to stop using the existing paradigm and move to a different one.
In the field of medicine you could say that the paradigm of evidence-based medicine solved certain issues that the prevailing scientific paradigm had at the time Korzybski wrote. Thinking in terms of probabilities and controlled trials solves certain practical problems really well. In particular, it solved the practical problem of proving that patented drugs provide clinical effects for patients much better than the previous paradigm did.
GS doesn’t solve that problem as well. There are socioeconomic reasons why a paradigm that solves that problem well won. On the physics side, “shut up and calculate” also worked well socioeconomically. “Shut up and calculate” works well for problems such as flying airplanes, going to the moon or building computer chips. To solve those problems, conceptualizing the underlying ontology isn’t necessary. Economically, people did well in those areas by ignoring ontology and simply focusing on epistemology.
GS doesn’t provide better answers to those questions. On the other hand, the prevailing paradigm gives really crappy answers to questions such as “What is autism?” What’s a human? Is a human something different from a Homo sapiens? GS is useful for thinking about the answers to those questions. With big data and AI, those questions are starting to become economically relevant in a way they didn’t use to be.
Yesterday, on the QS Facebook group, I had a conversation about practical problems with the ontology of the term “mood”: a person said they had trouble creating data about moods because they couldn’t find a definition on which 30% of psychologists agree.
I think “general wonderfulness” is the wrong framing. It’s that GS is doing well at different problems.
I think the problem doesn’t make sense in the GS paradigm. Kuhn wrote that problems posed in one paradigm aren’t necessarily expressible in the paradigm of another framework, and I think this is a case like that.
Do you realise that over the course of the discussion, you have
1) offered a solution to the problem of unfounded foundations.
2) offered a claim that a solution exists, but is too long to write down.
3) offered a claim that the problem doesn’t exist in the first place.
1) offered a solution to the problem of unfounded foundations.
The solution offered at the beginning is basically: “Don’t try to let your reasoning be based on underlying foundations in the first place.”
That leaves the open question about how to reason. GS is an answer to that question.
“On the one hand, on the other hand, on the third hand” reasoning, as advocated in Superforecasting, where there doesn’t have to be a shared foundation for all three hands, is another. That’s what Tetlock calls “foxy” thinking, and he argues that it makes better predictions than hedgehog thinking, where everything is based on one model with one foundation.
But Superforecasting provides a bunch of heuristics and not a deep ontological foundation.
I also have other frameworks that point in the same direction but that are even harder to describe and likely not accessible by simply reading a book.
3) offered a claim that the problem doesn’t exist in the first place.
No. The problem exists if you take certain assumptions for granted. I have claimed that you don’t have the problem if you don’t make those assumptions and follow certain heuristics.
This leaves open the question of how to reason differently. GS is an answer to how to reason differently; it’s complex, and demonstrating that it’s an internally consistent approach takes time, which Science and Sanity does over many pages.
3) offered a claim that the problem doesn’t exist in the first place.
No, I do see that the problem exists if you follow certain heuristics.
The solution offered at the beginning is basically: “Don’t try to let your reasoning be based on underlying foundations in the first place.”
What that seems to amount to is “conduct all your reasoning inside a black box”. That creates some problems, such as the problem of being able to verify your reasoning.
What that seems to amount to is “conduct all your reasoning inside a black box”.
No, it’s not a black box. It’s just not the box that is usually used, and Science and Sanity describes how the box works. And that’s sufficiently complex that it’s not easy to break down on one page.
Eliezer ruminates on foundations and wrestles with the difficulties quite a bit in the Metaethics sequence, for example:
Where Recursive Justification Hits Bottom
Fundamental Doubts
Thank you. This reply actually answers the first part of my question. The ‘working’ presuppositions include:
Induction
Occam’s razor
I will quote the most important part from Fundamental Doubts:
So, in the end, I think we must allow the use of brains to think about thinking; and the use of evolved brains to think about evolution; and the use of inductive brains to think about induction; and the use of brains with an Occam prior to think about whether the universe appears to be simple; for these things we really cannot unwind entirely, even when we have reason to distrust them. Strange loops through the meta level, I think, are not the same as circular logic.
And this has a lot of similarities with my previous conclusion (with significant differences about circular logic and meta loops):
a non-contradicting collection of self-referential statements that covers epistemology and axiology