Moldbug weighs in on the TechCrunch thing, with words on LW:

Alexander is a disciple of the equally humorless “rationalist” movement Less Wrong, a sort of Internet update of Robespierre’s good old Cult of Reason, Lenin’s very rational Museums of Atheism, etc, etc. If you want my opinion on this subject, it is that—alas—there is no way of becoming reasonable, other than to be reasonable. Reason is wisdom. There is no formula for wisdom—and of all unwise beliefs, the belief that wisdom can be reduced to a formula, a prayer chant, a mantra, whatever, is the most ridiculous.
We’re so humorless that our primary piece of evangelical material is a Harry Potter fanfiction.
It’s “humorless” that hurts the most, of course.
Out of interest, does anyone here have a positive unpacking of “wisdom” that makes it a useful concept, as opposed to “getting people to do what you want by sounding like an idealised parental figure”?
Is it simply “having built up a large cache of actually useful responses”?
Paul Graham has taken a stab at it:

“Wise” and “smart” are both ways of saying someone knows what to do. The difference is that “wise” means one has a high average outcome across all situations, and “smart” means one does spectacularly well in a few. That is, if you had a graph in which the x axis represented situations and the y axis the outcome, the graph of the wise person would be high overall, and the graph of the smart person would have high peaks.
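To put rough numbers on that picture, here is a minimal sketch; the outcome scores below are invented purely for illustration:

```python
# Hypothetical outcome scores across a range of situations (0-10 scale).
wise_outcomes = [7, 7, 8, 7, 7, 8, 7]    # consistently decent everywhere
smart_outcomes = [2, 3, 10, 2, 9, 3, 2]  # spectacular in a few spots, weak elsewhere

print(sum(wise_outcomes) / len(wise_outcomes), max(wise_outcomes))     # high average, modest peak
print(sum(smart_outcomes) / len(smart_outcomes), max(smart_outcomes))  # lower average, high peak
```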
Wisdom seems to be basically successful pattern matching of mental concepts to situations, and you need life experience as the training data for the mental concepts, the varieties of situations, and the outcomes of applying different concepts to different situations to get it running at the sort of intuitive level you need.
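As a minimal sketch of what that could look like mechanically (the (situation, concept, outcome) records and the crude word-overlap matching are assumptions of mine, not anything claimed above):

```python
from collections import defaultdict

# "Life experience" as training data: which mental concept was applied
# to which kind of situation, and how well it turned out (0-1 scale).
# All of these records are invented for illustration.
experience = [
    ("friend asks for blunt feedback", "honesty", 0.9),
    ("friend asks for blunt feedback", "flattery", 0.3),
    ("stranger starts an argument", "disengage", 0.8),
    ("stranger starts an argument", "honesty", 0.4),
]

def advise(situation):
    """Pick the concept with the best average outcome in similar past situations.

    'Similar' here is crude word overlap; the intuitive pattern matching the
    comment describes is far richer, but the shape of the computation is the same.
    """
    outcomes_by_concept = defaultdict(list)
    for past_situation, concept, outcome in experience:
        if set(situation.split()) & set(past_situation.split()):
            outcomes_by_concept[concept].append(outcome)
    if not outcomes_by_concept:
        return None  # no relevant experience, so no intuition to draw on
    return max(outcomes_by_concept,
               key=lambda c: sum(outcomes_by_concept[c]) / len(outcomes_by_concept[c]))

print(advise("friend asks for honest feedback"))  # -> honesty
```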
I think Moldbug is somewhat on target: LW doesn’t really have much in the way of either explicitly cultivating or effectively identifying the sort of wisdom that lets you produce high-quality original content, beyond the age-old way of hanging around with people who can somehow already do it and hoping that some of it rubs off. So we get people adopting the community opinions and jargon, getting upvotes for being good little redditors, not doing much else, and thinking that they are gaining rationality. We haven’t managed to get the martial-art-of-rationality thing going, where there would be a system in place for getting unambiguous feedback on your actual strength of wisdom.
Prediction markets are one interesting candidate mechanism for measuring the actual strength of rationality.
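One concrete way a mechanism like that could generate unambiguous feedback, sketched here with invented predictions and with a Brier score standing in for actual market settlement:

```python
# Track record: (stated probability that the event happens, what actually happened).
# These numbers are made up for illustration.
track_record = [
    (0.9, True),
    (0.7, False),
    (0.6, True),
    (0.2, False),
    (0.8, True),
]

def brier_score(record):
    """Mean squared error between stated probabilities and outcomes; lower is better.

    A perfect forecaster scores 0.0; always answering 0.5 scores 0.25.
    """
    return sum((p - float(happened)) ** 2 for p, happened in record) / len(record)

print(round(brier_score(track_record), 3))  # one number summarizing forecasting strength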
In this case he could not be farther off target if he tried. Yvain’s writings are some of the best, most engaging, most charitable and most reasonable anywhere online. This is widely acknowledged even by those who disagree with him.
Unfortunately most of Less Wrong is non-Yvain.
The point is that not even Yvain’s writings are high-quality enough, in Moldbug’s view.
When I was very young, I had a funny idea about layers of information packing.
“Data” is raw, unfiltered sensory perception (where “senses” include instruments/etc.)
“Information” is data, processed and organized into a particular methodology.
“Knowledge” is information processed, organized and correlated into a particular context.
“Intelligence” is knowledge processed, organized and correlated into a particular praxis.
“Wisdom” is intelligence processed, organized and correlated into a particular goalset.
“Enlightenment” is wisdom processed, organized and correlated into a particular worldview.
I never rigorously defined what the process was to my own satisfaction, but there seemed to my young mind to be an isomorphic ‘level-jumping’ process between each layer that involved processing, organizing and correlating one’s understanding at the previous layer.
In my own head, I mostly unpack “smart” as being able to effectively reason with a given set of data, and “wise” as habitually treating all my observations as data to reason from. Someone with a highly compartmentalized mind can be smart, but not wise. If (A → B) but A is not actually true, someone who is smart but not wise will answer B when given A, whereas someone wise will reject A itself.
That said, this seems to be an entirely idiosyncratic mapping, and I don’t expect anyone else to use it.
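For what that compartmentalization point amounts to in practice, a toy sketch; the bridge example and the two reasoner functions are entirely hypothetical:

```python
# Toy rule: A -> B, where
#   A = "the bridge is structurally sound"
#   B = "it is safe to drive across"
# Our observations say A is actually false.
observations = {"the bridge is structurally sound": False}

def smart_but_compartmentalized(premise):
    """Takes the supplied premise at face value and applies A -> B."""
    return "safe to drive across"

def wise(premise):
    """Checks the premise against everything observed before using it."""
    if observations.get(premise) is False:
        return "reject the premise: A is not actually true"
    return "safe to drive across"

print(smart_but_compartmentalized("the bridge is structurally sound"))  # answers B given A
print(wise("the bridge is structurally sound"))                         # rejects A instead
```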
It is interesting how similar in style and thought patterns this is to the far-left rant about LW from a few months ago.
In what way?
Cult accusations; criticism by way of comparison to things one doesn’t like simply because they bear similar names; use of ill-defined terms as part of that criticism; bizarre analogies to minimally historically connected individuals (Shabtai Tzvi? Seriously? Also, does Moldbug realize what that one in particular sounds like, given Eliezer’s background?); phrasing things in terms of conflicts of power rather than in terms of what might actually be true; and operating under the strong presumption that people who disagree with one are primarily motivated by ulterior motives rather than their stated ones, especially when those ulterior motives would support one’s narrative.
FWIW Moldbug is Jewish too.