Beware of identifying with schools of thought
As a child I decided to do a philosophy course as an extracurricular activity. In it the teacher explained to us the notion of schools of philosophical thought. According to him, classifying philosophers as adhering either to school A or school B is typical of Anglo thought.
It deeply annoys me when Americans talk about Democrat and Republican political thought and suggest that you are either a Democrat or a Republican. The notion that allegiance to one political camp is supposed to dictate your political beliefs feels deeply wrong.
A lot of Anglo high schools do policy debating. The British do it a bit differently than the Americans, but in both cases it boils down to students having to defend a certain side. Traditionally there’s nearly no debating at German high schools.
When writing political essays in German schools, there’s a section where it’s important to present your own view. Your own view isn’t supposed to be one that you simply copy from another person. Good thinking is supposed to provide a sophisticated perspective on the topic that is a synthesis of arguments from different sources instead of following a single source.
That’s partly because German intellectual thought has the ideal of ‘Bildung’. Anna Wierzbicka tells me that ‘Bildung’ is a particularly German construct and that the word isn’t easily translatable into other languages. The nearest English word is ‘education’. ‘Bildung’ can also be translated as ‘creation’. It’s about creating a sophisticated person who is more developed than the average person on the street who doesn’t have ‘Bildung’. Having ‘Bildung’ signals having high status.
According to this ideal you learn about different viewpoints and then develop a sophisticated opinion. Not having a sophisticated opinion is low class. In liberal social circles in the US, a person who agrees with what the Democratic party does at every point in time would have a respectable political opinion. In German intellectual life that person would be seen as a credulous, low-status idiot who failed to develop a sophisticated opinion. A low-status person isn’t supposed to be able to fake being high status by memorizing the teacher’s password.
If you ask me the political question “Do you support A or B?”, my response is: “Well, I want neither A nor B. There are these reasons for A, there are those reasons for B. My opinion is that we should do C, which solves those problems better and takes more concerns into account.” A isn’t the high-status option, so I can’t signal status by saying that I’m in favour of A.
How does this relate to non-political opinions? In Anglo thought, philosophical positions belong to different schools of thought. Members of one school are supposed to fight for their school being right and better than the other schools.
If we take the perspective of hardcore materialism, a statement like “One of the functions of the heart is to pump blood” wouldn’t be a statement that can be objectively true, because it’s teleology. The notion of function isn’t made up of atoms.
From my perspective as a German, there’s little to be gained by subscribing to the hardcore materialist perspective. It makes a lot of practical sense to say that such a statement can be objectively true. I get the more sophisticated view of the world that I want to have: not only statements about arrangements of atoms can be objectively true, but also statements about the functions of organs. That move is high status in German intellectual discourse, but it might be low status in Anglo discourse because it can be seen as being a traitor to the school of materialism.
Of course that doesn’t mean that no Anglo accepts that the above statement can be objectively true. On the margin, German intellectual norms make it easier to accept the statement as objectively true. Following Hegel, you might say that thesis and antithesis come together into a synthesis instead of thesis or antithesis winning the argument.
The German Wikipedia page for “continental philosophy” tells me that the term is commonly used in English-language philosophy, and that it’s mostly used derogatorily. From the German perspective the battle between “analytic philosophy” and “continental philosophy” is not a focus of the debate. The goal isn’t to decide which school is right but to develop sophisticated positions that describe the truth better than answers you could get by memorizing the teacher’s password.
One classic example of an unsophisticated position that’s common in analytic philosophy is the idea that all intellectual discourse is supposed to be based on logic. In “Is semiotics bullshit?” PhilGoetz stumbles upon a professor of semiotics who claims: “People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis.”
That’s seen as a strong violation of how reasoning based on logical positivism is supposed to work. It violates the memorized teacher’s password. But is it true? To answer that we have to ask what ‘logical basis’ means. David Chapman analyses the notion of logic in “Probability theory does not extend logic”. In it he claims that in academic philosophical discourse the word ‘logic’ means predicate logic.
Predicate logic can make claims such as:
(a) All men are mortal.
(b) Socrates is a man.
Therefore:
(c) Socrates is mortal.
According to Chapman the key trick of predicate logic is logical quantification. That means every claim has to be evaluable as true or false without looking at the context.
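This context-free evaluability can be sketched in a few lines of code; the sets and names here are invented purely for illustration, not part of Chapman’s argument:

```python
# Toy model of the classic syllogism. Every claim reduces to a check
# that needs no context beyond the sets themselves.
men = {"Socrates", "Plato"}
mortals = {"Socrates", "Plato", "a horse"}

# (a) All men are mortal — a universally quantified, context-free claim.
all_men_are_mortal = men.issubset(mortals)
# (b) Socrates is a man.
socrates_is_a_man = "Socrates" in men
# (c) follows mechanically: if (a) and (b) hold, Socrates is in mortals.
if all_men_are_mortal and socrates_is_a_man:
    assert "Socrates" in mortals
```

Each premise evaluates to a plain boolean on its own, which is exactly the property the rats example below lacks.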
We want to know whether a chemical substance is safe for human use. Unfortunately our ethical review board doesn’t let us test the substance on humans. Fortunately they allow us to test the substance on rats. Hurray, the rats survive.
(a) The substance is safe for rats.
(b) Rats are like humans.
Therefore:
(c) The substance is safe for humans.
The problem with `Rats are like humans` is that it isn’t a claim that’s simply true or false.
The truth value of the claim depends on what conclusions you want to draw from it. Propositional calculus can only evaluate the statement as true or false; it can’t judge whether the analogy is appropriate, because that requires looking at the deeper meaning of `Rats are like humans` to decide whether rats are like humans in the context we care about.
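The context-dependence can be made concrete with a toy sketch (the properties listed are invented for illustration):

```python
# The analogy "rats are like humans" has no single context-free truth
# value; whether it holds depends on which property the argument needs.
shared_with_humans = {"is a mammal", "has a liver"}
not_shared_with_humans = {"uses language", "walks upright"}

def rats_are_like_humans(property_in_question: str) -> bool:
    # Predicate logic would demand one fixed answer; in practice the
    # answer varies with the property we care about, i.e. the context.
    return property_in_question in shared_with_humans
```

For a toxicity argument you would need the relevant metabolic property in the shared set, which is precisely what the bare statement `Rats are like humans` doesn’t tell you.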
Do humans sometimes make mistakes when they try to reason by analogy? Yes, they do. At the same time, they also come to true conclusions by reasoning through analogy. Saying “People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis” sounds fancy, but if we reasonably define the term ‘logical basis’ as being about propositional calculus, it’s true.
Does that mean you should switch from the analytic school to the school of semiotics? No, that’s not what I’m arguing. I argue that, just as you shouldn’t let tribalism influence you in politics and identify as Democrat or Republican, you should keep in mind that philosophical debates, just like policy debates, are seldom one-sided.
Daring to slay another sacred cow: maybe we also shouldn’t go around thinking of ourselves as Bayesians. If you are on the fence on that question, I encourage you to read David Chapman’s splendid article referenced above, “Probability theory does not extend logic”.
I’m not sure what work the word “sophisticated” is doing here.
Let’s say the Greens advocate dispersing neurotoxins to eradicate all life on earth, and the Blues advocate not doing that. Is it “sophisticated” to say, “Well, there are certainly good arguments on both sides, for example if you assume this specific twisted utilitarian framework, or assume values that I don’t possess, then the Greens have some good points!”? That doesn’t seem sophisticated. That just indicates a pathological inability to “join a side” even when one side is the one you should join by your own ethical compunctions, you want to join, you would benefit from joining, and you would cause others to benefit by joining.
Also, what if you arrive at the party partway through, and the Greens and Blues have already spoken, and also another Sophisticate has spoken and indicated that “both sides have some good points, perhaps the answer is in the middle”. Are you allowed to just say, “I agree with the Sophisticate!”, or does that make you a “sophisticate partisan”, meaning you are obligated by the laws of being/appearing “sophisticated” to say, “Well, actually, the answer can’t be in the middle, a 50-50 split just seems improbable, the Greens are probably 25% right and the Blues are probably 75% right.”?
What I’m getting at is that I’m not sure what the difference is between your usage of “sophisticated” and just being a contrarian.
You mention the attitudes implicit in certain styles of debate. I’ve written before about the dangers of certain styles of policy debate taught in American schools. I’ve always seen it as damaging that the point of US policy debate is to be able to argue from any position and against any position. It implicitly teaches the young mind that you can “win” an argument through cleverness and rule-lawyering without regard to whether your position is actually superior. The whole framework actively undermines the truthseeking mindset, because in a policy debate you’re not truthseeking: you’re trying to obfuscate the opponents’ inconvenient truths and distort the facts that support your own argument to appear more important than they are. In short, I think there’s definitely such a thing as “too much sophistication”, and I blame this type of sophistication for the fact that many of my former high school friends are now effectively insane.
Obviously I agree that it’s dangerous to identify with a school of thought. Political parties in particular are coalitions of disparate interest groups, so the odds that a group of people who are only aligned for historically contingent reasons will come up with uniformly correct conclusions are near zero. That doesn’t mean you can never be confident that you’re right about something.
Additionally, I think to the degree that LWers identify as Bayesian, they are mostly just acknowledging the superiority of the Bayesian toolkit, such as maintaining some notion of a probability distribution over beliefs rather than exclusive and inviolable belief-statements, updating beliefs incrementally based on evidence, etc. None of us are really Bayesians anyway, because a thorough and correct Bayesian network for computing something as simple as whether you should buy Hershey’s or Snickers would be computationally intractable.
I imagined that Greens want to use neurotoxins on rats and Blues want to do nothing and live with the rats. Blues argue that the neurotoxins would kill more than just rats, and Greens argue that rats are uncomfortable to live with.
Complex positions then look like “kill rats with bullets” or “herd rats into zoos”. In thought experiments this would be “fighting the hypothetical”. Because it is about issues, and not about human groups taking sides, I think “fighting the framing” is fair game with these kinds of issues. Greens will argue that “kill rats with bullets” will leave some wounded rats alive or let them escape. Blues will argue that “herd rats into zoos” diminishes rat quality of life. But we went from “rats or no rats” to “wiped-out rats, wounded rats, caged rats or free-roaming rats”.
Sure, from one point of view the options are just about how much killing/oppressing we want to do, and how about none? But one could also construct a viewpoint about restaurant health safety: the correct number of rats to have on your plate is 0, and any increase (keeping other things handled) is further failure. Answering whose concerns we care about means some people will get less preferential treatment than under other arrangements. But positions like “let’s neurotox 30% of the rats” are just compromises without an additional idea behind them. Reframings are probably not as synergistic with the poles, but not all compromises are sensible re-understandings of the field. “Centrism” is not inherently sophisticated.
My analogy extension might have been less than elegant, as it easily turns into gruesome territory if you replace rats with any human group. But maybe it also highlights that it is easier to be sympathetic to health safety than to bigotry.
Both of those positions are expressible in a single sentence. Sophisticated positions on topics are generally complex enough that they aren’t expressible in a single sentence.
Saying “Here’s the 300-page bill about what our policy on using neurotoxins on life on earth should look like” is more sophisticated.
There are cases where it’s useful to use probability when faced with uncertainty: when you can define a specific test of how the world looks when the belief is true and when it isn’t.
Many beliefs are too vague for such a test to exist. It doesn’t make sense to put a probability on “The function of the heart is to pump blood”. That belief doesn’t make a specific prediction. You could create different predictions based on the belief, and those predictions would likely have different probabilities.
At the same time it’s useful to have beliefs like “The function of the heart is to pump blood”.
ROFL...
tl;dr: KILL THEM ALL! …but if you want sophistication, here is a 300 page paper about how and why we should KILL THEM ALL!
It’s probably not the best example, but I stayed with the original example.
If “sophisticated” in this usage just means “complex”, I’m not sure that I can get behind the idea that complex theories or policies are just better than simple ones in any meaningful way.
There may be a tendency for more complex and complicated positions to end up being better, because complexity is a signal that somebody spent a lot of time and effort on something, but Timecube is a pretty complex theory and I don’t count that as being a plus.
Complexity or “sophistication” can cut the other way just as easily, as somebody adds spandrels to a model to cover up its fundamental insufficiency.
I don’t know. I try to root out beliefs that follow that general form and replace them, e.g. “the heart pumps blood” is a testable factual statement, and a basic observation, which semantically carries all the same useful information without relying on the word “function” which implies some kind of designed intent.
I haven’t argued that A is just better than B.
Yes, and I see that as a flaw that’s the result of thinking of everything in Bayesian terms.
When the lungs expand, that process also pumps blood. Most processes that change the pressure somewhere in the body automatically pump blood as a result. The fact that the function of the heart is to pump blood carries more meaning than just that it pumps blood.
Words are an imperfect information-transfer system that humans have evolved. To interact with reality we have to use highly imperfect terms and tie them together with correlated observations. It seems like you are arguing that the human brain is often dealing with too much uncertainty and information loss to tractably apply a probabilistic framework that requires clearer distinctions/classifications.
Which is fair, sort of, but the point still stands that a sufficiently complex computer (human brain or otherwise) that is dealing with less information loss would still find Bayesian methods useful.
Again, this is sort of trivial, because all it’s saying is that ‘past information is probabilistically useful to the future.’ I think the fact that modern machine learning algos are able to implement Bayesian learning parameters should lead us to the conclusion that Bayesian reasoning is often intractable, but in its purest form it’s simply the way to interpret reality.
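The Bayesian updating this thread keeps gesturing at reduces to a one-line rule; here is a minimal sketch, with numbers invented purely for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' rule, for a single hypothesis H."""
    # Total probability of the evidence under both hypotheses.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A 1% prior, updated on evidence that is 9x likelier under H
# (e.g. a test with a 90% hit rate and a 10% false-positive rate):
posterior = bayes_update(0.01, 0.9, 0.1)  # ≈ 0.083
```

The point of contention in the post is not whether this rule is correct, but whether beliefs like “the function of the heart is to pump blood” can be cashed out into the kind of well-defined evidence statement the rule requires.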
David Chapman brings up the example of an algorithm that he wrote to solve a previously unsolved AI problem, which worked without probability but with logic.
In biology, people who build knowledge bases find it useful to allow storing knowledge like “The function of the heart is to pump blood”. If I’m having a discussion on Wikidata with another person about whether X is a subclass or an instance of Y, probability matters little.
I’m still having trouble with this.
A human mind is built out of nonlinear logic gates of various kinds. So even a belief like “the function of the heart is to pump blood” is actually composed of some network of neural connections that could be construed as interdependent probabilistic classification and reasoning via probabilistic logic. Or, at least, the human brain looks a lot more like “probabilistic classification and probabilistic reasoning” than it looks like “a clean algorithm for some kind of abstract formal logic”. (Assume all the appropriate caveats that we don’t actually compute probabilities; the human mind works correctly to the degree that it accidentally approximates Bayesian reasoning.)
Heck, any human you find actually using predicate calculus is using these neural networks of probabilistic logic to “virtualize” it.
Maybe probability matters little at the object level of your discussion, but that’s completely ignoring the fact that your brain’s assessment that X has quality Z which makes it qualify as a member of category Y is a probability assessment whether or not you choose to call it that.
I think Chapman is talking past the position that Jaynes is trying to take. You obviously can build logic out of interlinked probabilistic nodes, because that’s what we are.
There are two kinds of -isms that are relevantly different in this context. Philosophical -isms tend to be groupings of answers to questions. Political -isms tend to be sets of policies or policy-production mechanisms that synergise.
In the philosophical style, concepts like “materialism” are supposed to be connected to questions like “Is matter a sufficient ingredient to constitute reality?” (with materialism corresponding to “yes” to this question).
In this passage
It reads as if it really meant “reductionism”, which is connected to a question like “Does understanding the details of reality capture understanding reality?” (with reductionism corresponding to “yes” to this question). In another sense it might be referring to less philosophical and more worldview-y -isms.
You could answer “Is matter a sufficient ingredient to constitute reality?” with “No, matter is how it functions, and there is non-material functioning, and reality is made of functions, so matter is insufficient.” But you could also mean by “One of the functions of the heart is to pump blood” that you don’t take “function” to be an ontological entity. If you say a cardboard box has 6 sides, you do not necessarily mean that in addition to the carton there are non-paper “side” entities extant.
That “function of the heart” is a sensible thing to talk about does take some philosophical stances. But those stances are not the narrow question about the sufficiency of matter. Those people for whom “that is quite reductive” is a negative assessment could answer “Does understanding the details of reality capture understanding reality?” with “No, there are aspects of reality that are not seen from the parts.” Reductionism and materialism are separate narrow philosophical structures.
Some people do not mean the narrow philosophical question by “materialism”. With very many such narrow philosophical questions, one can wonder whether there are takes that could have the answer combinations “materialism yes, reductionism yes”, “materialism yes, reductionism no”, “materialism no, reductionism yes”, “materialism no, reductionism no”. And with more narrow questions, even more combinations. Some people understand “materialism” to be a kind of “it’s gears all the way down” approach to things: of course it’s small gears, of course it’s all mechanism and no reflection. The trouble can be that different people disagree about which narrow questions are answered, and which way, by this more umbrella category.
These “wide” takes can be connected by a single approach or imagination answering multiple facets. From a certain meme base, some entrant memes might seem so natural that they are assumed to be obvious or unavoidable. The trouble is when different meme bases disagree about the naturalness of this meme creep. So these wide takes are no longer connected by objectifiable, unambiguous logical connections; rather, the connection is a known and recognised psychological fact. A definition is not sufficient to express these. Ideally they could be collapsed back to, or at least be compatible with, the narrow philosophical detail.
Then there are the memes which are bound together because they form a niche of existence for their carrier. You say the leader is dear because you would get smacked in the face otherwise. You kick down on people in order to keep them from having enough resources to rebel effectively. You think people should give money to the government so that its extensive programs exist. Thinking that people should give money to the government so that you have a bigger payout when you embezzle it is another, similar niche. A big thing about these is that they have to fit the total context of the life where they appear. If you embezzle, you might be guilty of a crime. If you kick down on people, you might be verbally harassed. It is hard to check the local validity of these policies, as they depend so heavily on the larger system surviving and making sense. If you are already doing criminal stuff, then an extra embezzlement has a clear infrastructure to neatly slide into. If tax funds suffer from big corruption, then pleading with people to pay more taxes is an uphill battle.
“Materialism” in this sense can mean “I ain’t going to church ever, and all those hippy sociologist HR types are stealing essential resources from actual hard sciences.” And maybe you are an aggressive secularist because you are a “gears all the way down” person, the formation of whose belief system hinged on the question “Can I dispense with all the bits of ontology I don’t have math for?”, which was decided by the answer to “Is matter sufficient to constitute reality?”. But you could have a person whose main spiritual belief is that “God does not play dice”, who laments that the 3-body problem is not solved in closed form but can only be numerically simulated, and who thinks that electron psychism is exactly correlated with energetic constitution and that matter is therefore a complete set for ontological purposes. “Materialism” is very freaking wide, and both can trace their systems’ roots to the same narrow binary question.
So I really dislike -isms and always struggle to unmuddle where they are used. Thoughts are important; schools are not. Gears are not broken, but systems are. So I will give the even more radical advice: “beware referring to schools of thought”. The equal and opposite of this advice is that if your head spins on complicated topics, don’t miss the forest for the trees.
The claim “People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis” has an interesting angle about symbolic and non-symbolic information processing. With artificial neural networks it is easier to see that they don’t have midsteps: to a large extent they just implement a function that gets refined point by point. Sure, you could chain them and apply them multiple times, but they also have a mode where they produce their answer with a single activation. A Turing machine that never moves on the tape but just does one replacement of a starting symbol with a final symbol is a rather extreme one; we usually expect there to be a lot of reads and writes as the computation is “churned”. A conception where people’s thoughts are driven by some “internal language”, “saying words to themselves”, seems to be a churning process. Then the question arises: can all (significant) thought activity be captured by processes of this nature? All of propositional logic, existential logic and fuzzy logic would just be different kinds of churning processes, but any systematic symbol dance would do. Is there any “single-step dance”, like neural networks can do, in human thinking?
Barry Smith created Basic Formal Ontology which is the central framework for ontology used in bioinformatics. Basic Formal Ontology does take the stance that functions are ontologically meaningful entities. It does take the stance that they are part of the ontological entities that make up reality.
Other big biological ontologies like FMA also have immaterial anatomical entities.
Part of why the work of Judea Pearl didn’t happen earlier is that materialism and reductionism are frameworks that held science back from studying non-materialistic ontological entities. A lot of the progress of the last twenty years came from moving beyond those frameworks and the ways they stifled science.
There is also the distinction between computer-science ontology and metaphysical ontology. The linked opening suggests that BFO can be “instantiated” to be about a chemical, physical or biological system.
That seems to be a setup for an effective-theory kind of scheme: assume that you have an electric field, while understanding that an electric field is a result, and not a starting point, of another theory, i.e. quantum field theory makes vacuum buzzing mediate the influence of charges.
An effective theory can be ambivalent about metaphysical existence. One take is that the quantum fields are the only thing that “really exists” and there is no electric field beyond them. Another take is to “believe in electric fields” as a thing that “really exists”. An effective theory in essence goes “assume X” without being concerned whether X is or is not the case.
BFO’s point seems to be that it should provide a representation for any existing thing, and is thus more in the computer-science realm: all territories get a map, instead of a special-purpose map that can be used to map some things but not others.
For metaphysical claims it is an issue whether an electron makes charge and spin exist, or charge and spin make an electron exist. A scheme that just wants the representations straight is happy that the things involved are “electrons, spin and charge”, without making any claims about their relationships (while maintaining that such relationships should be expressible).
I tried to refer to the metaphysical kind of claim. As I am currently reading it, the Pearl advances are about allowing one to talk about waving without anything that waves, which is a representational improvement that allows for generalisations that do not get stuck on concretizations. Instead of going “assume that natural numbers exist”, you go “assume that a group exists”.
You seem to fail Bildung as you contrast Anglo-American and German ways of doing things. A true sophisticate would know the truth is not in picking one side and sticking to it :-P
I am also not sure why should I care about what’s high-status in Germany (other than being Dr. Dr. Dr. Lumifer, of course).
Or is it that a true sophisticate would consider where and where not to apply sophistry?
A true sophisticate would apply sophistry everywhere but modulate it to make it appear that she possesses σοφία where she needs to show it and that she is a simpleton where it suits her :-P
There are frequently arguments that presume that tribalism is universal in a sense that it isn’t.
I dunno, man. What are you gonna do if a school of thought is right? It seems dumb to precommit to being too cool to agree with them, but surely you would lose your GERMANWORD if you credulously accepted their conclusions.
This seems like a desirable feature of German culture. Have Germans always been like this? Do you know where this aspect of German culture might have originated? The closest thing I can think of in American culture is the satirical TV show South Park, which is known for lampooning both sides in political debates. Unfortunately, the show is not strongly associated with intellectual sophistication.
Germany’s electoral system allows multiple political parties to exist. The US first-past-the-post system shapes the political landscape in a way that moves it toward having two political parties.
In the Germany of the 19th century there were three class tiers, and the upper class signaled that they were upper class by having a better education. Germany never moved to get rid of its upper class.
To sum up: invent a single sentence to summarize your opponent’s position so that you can condemn them as naive. For example, what you did to Phil Goetz.
General algorithm:
make a strawman version of your opponent’s ideas, and call it “thesis”;
make a strawman version of ‘what is wrong with the strawman of my opponent’s ideas’, and call it “antithesis”;
write a long text explaining how both “thesis” and “antithesis” are right about some partial aspects, optionally add your own ideas, and call it “synthesis”;
collect your Hegel points ;-)
tl;dr—any system of debate can be gamed
I don’t know German, but it sounds like the thing you mean by “Bildung” is something like “self-development”.
Let me know if this sounds like the right idea:
If you want to be an excellent person, strong in various capacities, mature and able, then you want to think for yourself, and always keep looking for deeper/subtler/more powerful insights. There’s a mental move of “yes, okay, but it could be even better” or “this is too easy, it’s boring” that makes it seem unappealing to remain stuck in dogma. You don’t want to flatten yourself or reduce yourself to a stereotype; you want to broaden and deepen your capacities.
The opposite mindset would be something like “I want to be Done, with all the thinking, forever, please don’t make me get up from where I’ve plopped down. I’m on the side of the angels, and That is That.”
Does that seem like the thing you’re pointing at?
As far as I understand, especially in the US the term self-development is bound up with the American dream. It’s about developing capabilities to turn the dream into reality.
On the other hand, “Bildung” can be for its own sake and also includes art and literature that have no practical usage.
I suspect there is a trade-off between partisanship and “deep wisdom”. You can make status moves of both kinds, you just have to choose the move that fits your audience.
Displaying sophistication by saying things like “there are also some interesting arguments against the statement 2+2=4” at every opportunity can perfectly kill any momentum. (Even if those arguments are technically valid, for example “there is no such thing as ‘4’ in base 3; it’s called ’11′ instead”, as long as they don’t contribute to solving the problem, only to signal the scholarship of the speaker.)
To the extent that using prior information about the world is useful in understanding the future, it’s sort of nonsensical to say someone shouldn’t think of themselves as Bayesian. To the extent someone is perhaps ignoring a correct methodological approach because of a misguided/misunderstood appeal to Bayesianism, that’s fine.
For example, back in the academic world I worked on research forecasting the U.S. yield curve. We did this using a series of non-linear equations to fit each day (the cross-section), then used filtering dynamics to jointly capture the time series. Figuring out a way to make this already insanely complex model work within a Bayesian framework wouldn’t only be prohibitively hard, it also wouldn’t be particularly useful. There is no nice quantifiable information that would fit in a prior, given the constraints we have on data, math, and computational ability, that would make the model formally Bayesian.
Having said that, to the extent that we tweaked the model using our prior information on less structured scientific theories (e.g. the efficient-market hypothesis), it certainly was Bayesian. Sometimes the model worked perfectly, computed perfectly, but something didn’t match up with how we wanted to map it to reality. In that sense we had our own neural prior and found the posterior less likely.
It’s really hard for me to see under what model of the world (correct) Bayesian analysis could be misleading.
I think the claim that people can make correct judgements at better-than-random probability with no logical basis is nonsensical. A lot of this sort of writing and theorizing about the world comes from a time, and from people, before modern computational power and machine learning. In the past the view was that the human mind was an almost mystical device for processing reality, and that we had to ground our beliefs in some sort of formal logic for them to follow. At least in my experience working with things like neural nets, I only see vast amounts of information, which our brains filter to predict the future. To reason from analogy: sometimes when doing an ML problem you’ll add a bunch of data with no logical clue why it would improve your model… and then your prediction or classification scores increase.
In this context what does it even mean to call this logical or non-logical? It’s nothing more than using past observed information patterns to classify and predict future information patterns. It’s strictly empirical. I can’t think of any logical decomposition of that which would add meaning.
If someone sees themselves as a Bayesian with a capital “B”, that person is likely to prefer Bayesian methods of modeling a problem.
If I have a personal problem and do Gendlin’s Focusing, I come up with an intuitive solution. There’s little logic involved. There are real life cases where it makes more sense to follow the intuitive solution.
Is there ever a case where priors are irrelevant to a distinction or justification? That’s the difference between pure Bayesian reasoning and alternatives.
OP gave the example of the function of organs for a different purpose, but it works well here. To a pure Bayesian reasoner, there is no difference between saying that the heart has a function and saying that the heart is correlated with certain behaviors, because priors alone are not sufficient to distinguish the two. Priors alone are not sufficient to distinguish the two because the distinction has to do with ideals and definitions, not with correlations and experience.
If a person has issues with erratic blood flow leading to some hospital visit, why should we look at the heart for problems? Suppose there were a problem found with the heart. Why should we address the problem at that level as opposed to fixing the blood flow issue in some more direct way? What if there was no reason for believing that the heart problem would lead to anything but the blood flow problem? What’s the basis for addressing the underlying cause as opposed to addressing solely the issue that more directly led to a hospital visit?
There is no basis unless you recognize that addressing underlying causes tends to resolve issues more cleanly, more reliably, more thoroughly, and more persistently than addressing symptoms, and that the underlying cause can only be identified by distinguishing erroneous functioning from other abnormalities. Pure Bayesian reasoners can’t make the distinction because the distinction has to do with ideals and definitions, not with correlations and experience.
If you wanted a model that was never misleading, you might as well use first-order logic to explain everything. Or go straight for the vacuous case and don’t try to explain anything. The problem is that that doesn’t generalize well, and it’s too restrictive. It’s about broadening your notion of reasoning so that you consider alternative justifications and more applications.
I don’t understand what you mean by ‘ideals and definitions,’ or how these are not influenced by past empirical observations and observed correlations. Any definition can simply be reduced to past observed correlations. The function of a heart is based strictly on past observations, and on our mapping of those observations to a functional model of how the heart behaves.
My argument seems trivial to me, because the idea that there is some non-empirical or correlated knowledge not based on past information seems nonsensical.
The distinction between “ideal” and “definition” is fuzzy the way I’m using it, so you can think of them as the same thing for simplicity.
Symmetry is an example of an ideal. It’s not a thing you directly observe. You can observe a symmetry, but there are infinitely many kinds of symmetries, and you have some general notion of symmetry that unifies all of them, including ones you’ve never seen. You can construct a symmetry that you’ve never seen, and you can do it algorithmically based on your idea of what symmetries are given a bit of time to think about the problem. You can even construct symmetries that, at first glance, would not look like a symmetry to someone else, and you can convince that someone else that what you’ve constructed is a symmetry.
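To make the “construct a symmetry algorithmically” point concrete, here is a small sketch (the names are mine, purely for illustration): it enumerates the symmetries of a square by searching for vertex permutations that preserve the edge set, recovering the dihedral group without listing its elements by hand.

```python
from itertools import permutations

# Vertices 0..3 of a square, edges stored as unordered pairs.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

def preserves_edges(p):
    """True if relabeling vertices via permutation p maps the edge set to itself."""
    return {frozenset((p[a], p[b])) for a, b in ((0, 1), (1, 2), (2, 3), (3, 0))} == edges

# Every vertex permutation that preserves the edges is a symmetry of the square.
symmetries = [p for p in permutations(range(4)) if preserves_edges(p)]
print(len(symmetries))  # -> 8 (the dihedral group D4: 4 rotations + 4 reflections)
```

Nothing here required having seen these particular symmetries before; the general notion (“structure-preserving relabeling”) generates them.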
The set of natural numbers is an example of something that’s defined, not observed. Each natural number is defined sequentially, starting from 1.
Addition is an example of something that’s defined, not observed. The general notion of a bottle is an ideal.
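The inductive character of these definitions can be made concrete. A minimal Peano-style sketch (the class names `Zero`/`Succ` are my own): each natural is built from its predecessor, and addition is defined by recursion rather than observed.

```python
class Nat:
    """Base class for Peano-style natural numbers."""

class Zero(Nat):
    pass

class Succ(Nat):
    def __init__(self, pred):
        self.pred = pred  # the number this one succeeds

def add(a, b):
    # Defined, not observed:  add(a, 0) = a;  add(a, S(b)) = S(add(a, b))
    if isinstance(b, Zero):
        return a
    return Succ(add(a, b.pred))

def to_int(n):
    # Interpret a Peano numeral as an ordinary integer, for display only.
    count = 0
    while isinstance(n, Succ):
        count += 1
        n = n.pred
    return count

two = Succ(Succ(Zero()))
print(to_int(add(two, two)))  # -> 4
```

The definition generates arbitrarily many naturals from a finite rule, which is the sense in which you can “distinguish more numbers than your past experiences should allow.”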
In terms of philosophy, an ideal is the Platonic Form of a thing. In terms of category theory, an ideal is an initial or terminal object. In terms of category theory, a definition is a commutative diagram.
I didn’t say these things weren’t influenced by past observations and correlations. I said past observations and correlations were irrelevant for distinguishing them. Meaning, for example, you can distinguish between more natural numbers than your past experiences should allow.
I’m going to risk going down a meaningless rabbit hole here of semantic nothingness --
But I still disagree with your distinction, although I do appreciate the point you’re making. I view the human brain as simply a special case of any other computer, and I think that is the correct way to view it. You’re correct that we have, as a collective species, proven and defined these abstract patterns. Yet even these patterns are based on observations and rules of reasoning between our minds and empirical reality. We can use our neurons to generate more sequences in a pattern, but the idea of an infinite set of numbers is only an abstraction, an appeal to something that could exist.
Similarly, a silicon computer can hold functions and mappings, but can never create an array of all numbers. They reduce down to electrical on-off switches, no matter how complex the functions are.
There is also no rule that says natural numbers or any other category can’t change tomorrow. Or that right outside the farthest information set in the horizon of space available to humans, the laws of gravity and mathematics all shift by 0.1. It is sort of nonsensical, but it’s part of the view that the only thing distinguishing what feels real and inherently distinct is our perception of how certain it is to continue, based on prior information.
In my experience talking about this with people before, it’s not the type of thing people change their mind on (not implying your view is necessarily wrong). It’s a view of reality that we develop pretty foundationally, but I figured I’d write out my thoughts anyway for fun. It’s also sort of a self-indulgent argument about how we perceive reality. But, hey, it’s late and I’m relaxing.
I don’t understand what point you’re making with the computer, as we seem to be in complete agreement there. Nothing about the notion of ideals and definitions suggests that computers can’t have them or their equivalent. It’s obvious enough that computers can represent them, as you demonstrated with your example of natural numbers. It’s obvious enough that neurons and synapses can encode these things, and that they can fire in patterned ways based on them, because… well, that’s what neurons do, and neurons seem to be doing the bulk of the heavy lifting as far as thinking goes.
Where we disagree is in saying that all concepts our neurons recognize are equivalent and should be reasoned about in the same way. There are clearly some notions we recognize as valid only after seeing sufficient evidence. For these notions, I think Bayesian reasoning is perfectly well suited. There are also clearly notions we recognize as valid for which no evidence is required; only usefulness is required, and sometimes not even that. For these, I think we need something else. Bayesian reasoning cannot deal with this second kind because their acceptability has nothing to do with evidence.
You argue that this second kind is irrelevant because these things exist solely in people’s minds. The problem is that the same concepts recur again and again in many people’s minds. I think I would agree with you if we only ever had to deal with a physical world in which people’s minds did not matter all that much, but that’s not the world we live in. If you want to be able to reliably convey your ideas to others, if you want to understand how people think at a more fundamental level, if you want your models to be useful to someone other than yourself, if you want to develop ideas that people will recognize as valid, if you want to generalize ideas that other people have, if you want your thoughts to be integrated with those of a community for mutual benefit, then you cannot ignore these abstract patterns, because they constitute such a vast amount of how people think.
It also, incidentally, has a tremendous impact on how your own brain thinks and the kinds of patterns your brain lets you consciously recognize. If you want to do better generalizing your own ideas in reliable and useful ways, then you need to understand how they work.
For what it’s worth, I do think there are physically-grounded reasons for why this is so.