Out of interest, does anyone here have a positive unpacking of “wisdom” that makes it a useful concept, as opposed to “getting people to do what you want by sounding like an idealised parental figure”?
Is it simply “having built up a large cache of actually useful responses”?
Paul Graham has taken a stab at it:
“Wise” and “smart” are both ways of saying someone knows what to do. The difference is that “wise” means one has a high average outcome across all situations, and “smart” means one does spectacularly well in a few. That is, if you had a graph in which the x axis represented situations and the y axis the outcome, the graph of the wise person would be high overall, and the graph of the smart person would have high peaks.
Wisdom seems to be basically successful pattern matching of mental concepts to situations. You need life experience as the training data (for the mental concepts, the variety of situations, and the outcomes of applying different concepts to different situations) to get it running at the sort of intuitive level you need.
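Read very literally, that's something like nearest-neighbour lookup over remembered cases. A toy sketch of the analogy, nothing more; the situations, features, and responses below are all invented for illustration:

```python
# Toy illustration of "wisdom as pattern matching over experience":
# remembered situations are the training data, and a new situation is
# handled by recalling what worked in the most similar past cases.
# Every feature and response here is invented for the example.

from collections import Counter

# Each remembered case: (situation features, response that worked)
experience = [
    ({"conflict": 1, "time_pressure": 0, "stakes": 1}, "listen first"),
    ({"conflict": 0, "time_pressure": 1, "stakes": 1}, "decide quickly"),
    ({"conflict": 1, "time_pressure": 1, "stakes": 0}, "defuse with humour"),
]

def similarity(a, b):
    """Count how many features two situations share."""
    return sum(a.get(k) == v for k, v in b.items())

def respond(situation, k=2):
    """Recall the k most similar past cases and take the majority response."""
    nearest = sorted(experience, key=lambda case: -similarity(case[0], situation))[:k]
    votes = Counter(response for _, response in nearest)
    return votes.most_common(1)[0][0]

print(respond({"conflict": 1, "time_pressure": 1, "stakes": 1}))
```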
I think Moldbug is somewhat on target: LW doesn’t really have much in the way of either explicitly cultivating or effectively identifying the sort of wisdom that lets you produce high-quality original content, beyond the age-old way of hanging around with people who somehow can already do it and hoping that some of it rubs off. So we get people adopting the community opinions and jargon, getting upvotes for being good little redditors, not doing much else, and thinking that they are gaining rationality. We haven’t managed to get the martial art of rationality thing going, where there would be a system in place for getting unambiguous feedback on your actual strength of wisdom.
Prediction markets are one interesting candidate for a mechanism for trying to measure the actual strength of rationality.
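Part of the appeal is that they turn “strength of rationality” into something you can actually score. A minimal sketch of the scoring side (a Brier score over resolved predictions; the numbers are made up, and this isn’t anything a real market or LW actually runs):

```python
# Minimal sketch of scoring a forecaster's track record with the Brier score:
# lower is better; always saying 50% gets you 0.25.
# The predictions and outcomes below are invented for illustration.

def brier_score(forecasts):
    """forecasts: list of (assigned probability, outcome as 0 or 1)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

my_record = [
    (0.9, 1),   # said 90%, it happened
    (0.7, 0),   # said 70%, it didn't
    (0.2, 0),   # said 20%, it didn't
]

print(f"Brier score: {brier_score(my_record):.3f}")
```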
In this case he could not be farther off target if he tried. Yvain’s writings are some of the best, most engaging, most charitable and most reasonable anywhere online. This is widely acknowledged even by those who disagree with him.
Unfortunately most of Less Wrong is non-Yvain.
The point is that not even Yvain’s writings are high-quality enough, in Moldbug’s view.
When I was very young, I had a funny idea about layers of information packing.
“Data” is raw, unfiltered sensory perception (where “senses” include instruments/etc.)
“Information” is data, processed and organized into a particular methodology.
“Knowledge” is information processed, organized and correlated into a particular context.
“Intelligence” is knowledge processed, organized and correlated into a particular praxis.
“Wisdom” is intelligence processed, organized and correlated into a particular goalset.
“Enlightenment” is wisdom processed, organized and correlated into a particular worldview.
I never rigorously defined what the process was to my own satisfaction, but there seemed to my young mind to be an isomorphic ‘level-jumping’ process between each layer that involved processing, organizing and correlating one’s understanding at the previous layer.
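For what it’s worth, the scheme reads like a pipeline where each layer is the same kind of transformation applied to the one below it. A toy rendering of that idea, purely illustrative; the function name and the “contexts” are just the labels from the list above, nothing rigorous:

```python
# Toy rendering of the "level-jumping" idea: each layer is produced by
# processing, organizing, and correlating the previous layer against some
# further context. None of this is rigorously defined; it only mirrors
# the list of layers above.

def pack(lower_layer, context):
    """The (undefined) level-jumping process: organize the lower layer relative to a context."""
    return {"from": lower_layer, "organized_by": context}

data          = "raw, unfiltered sensory perception"
information   = pack(data,         context="a methodology")
knowledge     = pack(information,  context="a context")
intelligence  = pack(knowledge,    context="a praxis")
wisdom        = pack(intelligence, context="a goalset")
enlightenment = pack(wisdom,       context="a worldview")
```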
In my own head, I mostly unpack “smart” as being able to effectively reason with a given set of data, and “wise” as habitually treating all my observations as data to reason from. Someone with a highly compartmentalized mind can be smart, but not wise. If (A → B) but A is not actually true, someone who is smart but not wise will answer B when given A, whereas someone wise will reject A.
That said, this seems to be an entirely idiosyncratic mapping, and I don’t expect anyone else to use it.
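To make the A → B example above concrete, here is a toy contrast between the two habits; the propositions and observations are invented (rain standing in for A, wet ground for B):

```python
# Toy contrast for the A -> B example: the "smart" habit applies the rule
# to whatever premise it's handed; the "wise" habit first checks the premise
# against what has actually been observed. Propositions are invented.

observations = {"it rained last night": False}   # what we actually know

def smart_answer(premise, rule):
    """Reason from the premise as given: if (A -> B) and A, conclude B."""
    antecedent, consequent = rule
    return consequent if premise == antecedent else None

def wise_answer(premise, rule):
    """Treat observations as data too: reject the premise if it contradicts them."""
    if observations.get(premise) is False:
        return f"reject the premise: {premise!r} isn't actually true"
    return smart_answer(premise, rule)

rule = ("it rained last night", "the ground is wet")   # A -> B
print(smart_answer("it rained last night", rule))      # concludes B anyway
print(wise_answer("it rained last night", rule))       # rejects A instead
```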
It’s “humorless” that hurts the most, of course.