Michael Oakeshott’s critique of something-he-called-rationalism
Ideally, participants in this discussion would have read his relevant essays (collected in the book Rationalism in Politics), but as an introduction this will do, and this one is also good.
Clearly, Oakeshott means something different by Rationalism than LW does. I will call it SOCR (Something Oakeshott Calls Rationalism) from here on.
SOCR is the idea that you can learn to cook from a recipe book, by following its algorithms. Oakeshott argues that this was a popular idea in early 20th-century Britain, and that it is false. Recipe books are written for people who can already cook, and that knowledge comes only from experience, not from books: either self-discovery or apprenticeship. Try to learn to cook from a recipe book and the book will not teach you, but your own failed experiments will, the hard way; you end up rediscovering cooking by trial and error. Apprenticeship is easier. The recipe-book writer assumes the recipe works on an empty mind, while it only works on a mind already filled with experience. And what is worse, minds are often filled with the wrong kind of experience.
While Oakeshott can be accused of endorsing “life experience as a conversation stopper”, his main argument is really about how knowledge is communicated. You have knowledge in your head, much of it gathered through experience; you may not be able to communicate every aspect of it by training an apprentice, and even less by writing a book. Doing things is often more of an art than a science. Worse, you would expect the student’s cup to be pre-filled with the right kind of stuff, but often it is empty, or filled with the wrong kind of stuff, which gets your book misunderstood.
Oakeshott focused on politics because his main point was that following a recipe book like Marxism-Leninism or Maoism is not simply a bad idea, but literally impossible: the doctrine you learn will be colored by your pre-existing experience, and you will do whatever your experience dictates anyway. The danger is misleading yourself and others into thinking you are implementing a recipe, an algorithm, when that is not the case.
Oakeshott is basically saying that, for example, you can never predict what the Soviet Union will do by looking at the Marxist books its leaders read. However, if you add up the experience of Tsarist imperialism and the experience of being a quite reasonably paranoid revolutionary on the run from the Okhrana, fearing betrayal at every corner, you may predict what they are up to rather better.
SOCR is clearly not LWR, and it is unfortunate that the word “Rationalism” appears in both. Since I was exposed to Oakeshott and similar ideas earlier than to LW, I would actually prefer a different term for LWR, like “pragmatic reason”, but it is not up to me to make this choice, at least not in English; I may try to influence other languages, though.
Ultimately, Oakeshott ends up with a set of ideas very similar to LW’s, such as coming down on the side of the shepherd in The Simple Truth, not on the side of Markos Sophisticus. In fact, the SOCR that Oakeshott criticizes is clearly the latter:
>As knowledge of the realm of the shadows is a real and hard-won achievement, the theorist goes gravely astray when he relies on his theoretical insights to issue directives to the practitioner, ridiculously trying to “set straight” the practical man on matters with which the theorist has no familiarity. The cave dwellers, first encountering the theorist on his return, might be impressed “when he tells them that what they had always thought of as ‘a horse’ is not what they suppose it to be . . . but is, on the contrary, a modification of the attributes of God. . . . But if he were to tell them that, in virtue of his more profound understanding of the nature of horses, he is a more expert horse-man, horse-chandler, or stable boy than they (in their ignorance) could ever hope to be, and when it becomes clear that his new learning has lost him the ability to tell one end of a horse from the other . . . [then] before long the more perceptive of the cave-dwellers [will] begin to suspect that, after all, he [is] not an interesting theorist but a fuddled and pretentious ‘theoretician’ who should be sent on his travels again, or accommodated in a quiet home.”
Ultimately, both LW and Oakeshott side with the cave-dwellers. It is just unfortunate that they use the term “Rationalist” with entirely opposite meanings.
I suspect it goes like this:
1. there is a word, which means something useful;
2. some group starts using it as their applause light;
3. they are doing it wrong, but they don’t notice it, because even their wrong version is still an applause light, therefore it must be good;
4. other people will also start using the word to mean “what this group is doing”, that is, the wrong version.
Yes. The primary historical change we see here is that “rational knowledge” used to mean certain, “geometrical” knowledge that leaves no room for probabilities: like the Pythagorean theorem, the “truths of logic” that are true in every conceivable universe and that are seen by the infallible mind, not with the fallible eyes. This is of course totally crazy, but it was a fairly popular idea from Plato to Descartes. Probabilistic knowledge was called common sense, empiricism, pragmatism, phronesis, prudentia / prudence.
Much of the history of Western thought is a contest between these two:
Infallibilists: Plato, the Neo-Platonists, Descartes partially and his students (the Lausanne Logicians) much more so, positivism partially, and various political-ideological recipes, from the left wing to Mises.org
Probabilists: Aristotle, the Scholastics/Thomists, Vico (against Descartes), Pascal, Edmund Burke, Oakeshott, Peirce, and generally moderate to moderate-conservative political positions, roughly saying “the world is messy and chaotic, we need knowledge inferred from experience, not lofty principles”
The way this was inverted here is that a bunch of economists, mathematicians, and statisticians defined rational decision-making as figuring out the most accurate probabilities, and Eliezer imported that terminology into the mainstream and into philosophy, so to speak. If you talk to a philosopher, you had better call LW-Rationalism Pragmatism, as it is mainly about being a Peirce who can do (Bayesian) math.
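For concreteness, here is a minimal sketch of the decision-theoretic sense of “rational” that this terminology comes from: update a probability with Bayes’ theorem, then pick the action with the highest expected utility. The scenario and all the numbers are made up purely for illustration.

```python
# Made-up illustration of decision-theoretic "rationality": update beliefs
# with Bayes' theorem, then choose the action with the highest expected utility.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence), by Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Hypothetical numbers: prior belief that a plan will work, and how likely
# a positive pilot result is if it works vs. if it doesn't.
posterior = bayes_update(prior=0.3, p_evidence_if_true=0.8, p_evidence_if_false=0.2)

# Expected utility of two hypothetical actions, given the updated belief.
utility = {
    "implement": posterior * 100 + (1 - posterior) * (-50),
    "do_nothing": 0.0,
}
print(f"posterior = {posterior:.3f}")              # ~0.632
print(f"choose: {max(utility, key=utility.get)}")  # "implement"
```

In this vocabulary, “rational” just means the probabilities are computed correctly and the choice maximizes expected utility; it says nothing about the older philosophical sense of the word.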
Again, this is not necessarily a problem. I think Eliezer wanted to direct attention to the mathematical-statistical aspect of it; that is why he used that kind of terminology instead of the philosophical one. Since Eliezer was interested in AI even before he started the Sequences, it makes sense that if the goal is to build a Friendly Bayesian AI, it will run on math more than on philosophy. So from that angle this change is okay; just expect the terminology to break down when compared to mainstream philosophy.
Apparently there’s a reasonable case to be made for Smith’s awareness of interval probabilities, or so Michael Emmet Brady tells me.
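For readers unfamiliar with the term: an interval probability assigns a range rather than a single number, so expected values come out as ranges too. A minimal made-up sketch:

```python
# Made-up illustration of an interval probability: P(event) is known only
# to lie in a range, so the expected value of a bet is a range as well.
p_low, p_high = 0.2, 0.6  # P(event) is somewhere in [0.2, 0.6]
win, lose = 100, -50      # hypothetical payoffs

# Expected value is monotone in p here (win > lose), so the endpoints bound it.
ev_low = p_low * win + (1 - p_low) * lose      # -20.0
ev_high = p_high * win + (1 - p_high) * lose   # 40.0
print(f"expected value lies in [{ev_low}, {ev_high}]")
# The interval straddles zero, so the bet is neither clearly good nor clearly
# bad: unlike with point probabilities, the decision can be left indeterminate.
```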
SOCR seems to be YASR (Yet Another Straw Rationalism). Which real people, saying which real things, is Oakeshott writing against? Neither of the linked essays says. (The second link needs fixing, BTW.)
For starters, basically anyone cooking from a socialist cookbook. Oakeshott was Professor of Political Science at the LSE and a friend of Hayek; they were trying to argue against the leftward shift they saw all around them, including at the LSE. The important aspect here is that the kind of socialism they argued against was based not on a practical, organic tradition but on overly theoretical “cookbooks”. Presumably, Oakeshott would not have resisted, on these principles, a leftward or egalitarian shift based on organic elements tried in practice. But most socialism practiced in that era was of the kind Nassim Taleb calls “ornithologists teaching birds to fly”.
Don’t be too attached to the word “Rationalism”, because it has meant very different things at different times and to different people. I think Eliezer took the term from mathematical economics, but in philosophy what we call LW-Rationalism used to be called Pragmatism; see Charles Sanders Peirce. And in philosophy and politics, “rationalism” used to mean something like “being logical” or “being theoretical”, as opposed to being pragmatic or empirical. For example, Descartes is associated with inventing rationalism-in-the-old-sense, but his main opponent Vico is closer to LW-Rationalism, because Descartes was looking for sure, certain knowledge, while Vico saw life as probabilistic, prone to chance and opportunity, not obeying geometrical, 100%-certain rules.
This is causing a lot of confusion. A “geometrical” view looking for 100% certainty used to be called rationalist, going all the way back to Plato, while the probability-oriented view used to be called pragmatist or empiricist in philosophy. Eliezer inverted this by importing mathematical terminology in which rational choices are those based on accurate probabilities, so the terminology has become less clear.
This is not a big issue; just be aware that the terminology has shifted.
Downvote explanations?
Uhm, I didn’t downvote, but I wish your posting style could shift towards higher quality, and perhaps lower frequency. (The frequency is not a problem per se; it’s the “high frequency of low quality” that gets annoying.)
Okay, this leaves the question of what exactly “higher quality” means, for which I cannot give you a satisfactory answer here. (I just decided to give a possibly unsatisfactory answer rather than stay silent.)
I like many of your ideas. I think a lot of them are great conversation starters. I just wish to see them written in a different form, if you want to post them on LW. My impression is that you open a possibly interesting topic, but then you write a long text which is not sufficiently “digested”. Like, you could have thought about the topic more, debated it with a few people, found some links… okay, maybe I am pointing in the wrong direction here; I just feel that you didn’t do some homework, although I am not completely sure what that homework should be.
Also, you like to post about politics, or to drag politics into other topics. It’s not completely forbidden, but let’s say such things have a cost, so what exactly is the benefit here that would justify the cost? Speaking for myself, I had never heard of Oakeshott, so why should I care about his opinions on rationality? (Maybe explaining who Oakeshott is and why I should care is part of the “homework” you should have done. But maybe it just does not belong here.)
But this is Discussion, not the front page… isn’t anything that can start a discussion OK? Shouldn’t posts in Discussion be seen as simply a “first comment”?
Look, if I had people to debate these ideas with, I would not need LW, as I could get sufficient feedback on them elsewhere. Hm, there is a bit of a misunderstanding here. I thought it was generally understood that discussion boards are for people who are less social IRL, or whose IRL social circle is uninterested in one of their interests.
Makes sense.
Then I would suggest writing the article a bit differently… as an invitation to a debate, not as a solution. Instead of “here is a problem, and here is my solution”, something like “here is the background, here is a problem… and I would like to hear your solutions in the comments”. That is: 1) be explicit that you want other people to write their opinions, 2) avoid writing your own opinion as part of the article, and maybe 3) provide some background so more people can join the debate.
Don’t get discouraged! Some of your articles are upvoted, so… uhm… this seems like an opportunity to form a hypothesis about “what is the difference between the articles in this set and in that set?” For example, my hypothesis would be that it depends on how much readers feel you are giving them the answer versus asking them to write their own. But I could be wrong here.
OK, thanks, this is good advice. About politics… I am focusing on political philosophy, and largely the skeptical subset of it (next would be John Kekes’s pluralism); that ought to be popular :)
Do you think starting a debate about the ethics of piracy / intellectual property would go down well? (From a pro-piracy angle, with the explicit goal of killing pop culture.)
That’s just another form of politics.
Have you read the parts of the Sequences about politics and motivated reasoning? (Short version: “I will start chanting our slogans and give you selected arguments about why my side is better than the other side” does not contribute to epistemic rationality, and so we should not do it here.)
But I think I am “strong” enough to avoid my usual tribal arguments (“copying is not stealing, as it does not remove the original”) and be fully consequentialist (“copying kills pop culture, and that is good because…”), and how would that be a bad thing? My point is precisely that we are probably strong enough to discuss such topics without slogan-chanting, well within epistemic rationality.
And I am unsure how you could fail to recognize that the sentence you quoted is not the usual four-legs-good tribal chant, but something with a clearly predicted consequence, which is easy to approach rationally (“what is the chance it kills pop culture?”, “what is the chance good things happen if pop culture gets killed?”).
The entire point of “politics is the mind-killer” is that no, even here we are not immune to tribalistic idea-warfare politics. The politics just get more complicated. And the stopgap solution, until we figure out a way around that tendency (which doesn’t appear reliably avoidable), is to sandbox the topic and keep it limited. You should have a high prior that a belief that you can be “strong” is Dunning-Kruger talking.
Okay, but isn’t feeling no passion, literally no rise in blood pressure, strong evidence here, with few false positives? Or does it have many false positives?
Sandboxing is okay, and better than a total taboo; this is why I recommended a quarantine. Or a biweekly thread.
I disagree: if you are putting an issue before the LW crowd, showing your preliminary thoughts about it is highly useful.
Not if your primary goal is to generate open-ended discussion. (Putting forth an opinion too often divides your audience into two groups: “for” and “against”, even on LW.)