Quantifying wisdom
So we know that many smart people make stupid (at least in retrospect) decisions. What these people seem to be lacking, at least at the moment they make a poor decision, is wisdom (“judicious application of knowledge”). More from Wikipedia:
It is a deep understanding and realization of people, things, events or situations, resulting in the ability to apply perceptions, judgements and actions in keeping with this understanding. It often requires control of one’s emotional reactions (the “passions”) so that universal principles, reason and knowledge prevail to determine one’s actions.
From Psychology Today:
It can be difficult to define Wisdom, but people generally recognize it when they encounter it. Psychologists pretty much agree it involves an integration of knowledge, experience, and deep understanding that incorporates tolerance for the uncertainties of life as well as its ups and downs. There’s an awareness of how things play out over time, and it confers a sense of balance.
Wise people generally share an optimism that life’s problems can be solved and experience a certain amount of calm in facing difficult decisions. Intelligence—if only anyone could figure out exactly what it is—may be necessary for wisdom, but it definitely isn’t sufficient; an ability to see the big picture, a sense of proportion, and considerable introspection also contribute to its development.
From the Stanford Encyclopedia of Philosophy (SEP), which distinguishes four accounts:
(1) wisdom as epistemic humility, (2) wisdom as epistemic accuracy, (3) wisdom as knowledge, and (4) wisdom as knowledge and action.
Clearly, if one created a human-level AI, one would want it to “choose wisely”. However, as human examples show, wisdom does not come for free with intelligence. In fact, we usually don’t trust intelligent people nearly as much as we trust wise ones (or those who appear wise, at any rate). We don’t trust them to make good decisions, because they might be too smart for their own good. So one (informal) quality we’d expect an FAI to have is wisdom.
So, how would one measure wisdom? Converting the above description (“ability to apply perceptions, judgements and actions in keeping with this understanding”) into a more technical form, one can interpret wisdom, in part, as understanding one’s own limitations (“running on corrupt hardware”, in the local parlance) and calibrating one’s actions accordingly. For example, given two people of the same knowledge and intelligence level (as determined by your favorite intelligence test), how do you tell which one is wiser? You look at how the outcomes of their actions measure up against what they predicted. The good news is that you can practice and test your calibration (and, by extension, your wisdom) by playing with PredictionBook.
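One standard way to score how outcomes measure up against stated predictions is the Brier score (mean squared error between probabilities and outcomes; lower is better). The forecasts below are made up purely for illustration:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.

    forecasts: list of (probability_assigned, outcome) pairs, where
    outcome is 1 if the predicted event happened and 0 otherwise.
    Lower is better; 0 is a perfect score.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Two hypothetical predictors with the same knowledge: the better-calibrated
# one's stated confidence tracks how often they actually turn out right.
calibrated = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1), (0.9, 1)]
overconfident = [(0.99, 1), (0.95, 0), (0.99, 0), (0.9, 1), (0.95, 1)]

print(brier_score(calibrated))    # smaller
print(brier_score(overconfident)) # larger: overconfidence is penalized
```

On this toy data the overconfident predictor scores much worse despite getting the same number of predictions "right", which is the sense in which calibration, not raw accuracy, is being measured.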
For example, Aaron Swartz was clearly very smart, but was it wise of him to act the way he did, gambling on one big thing after another, without a clear sense of what was likely to happen and at what odds? On the other end of the spectrum, you can often see wise people of average intelligence (or lower) recognizing their limitations and sticking with “what works”.
Now, this quantification is clearly not exhaustive. Even when perfectly calibrated, how do you quantify being appropriately cautious when making drastic choices and appropriately bold when making minor ones? What algorithms/decision theories make someone wiser? Bayesianism can surely help, but it relies on decent priors and does not compel one to act. Would someone implementing TDT or UDT to the best of their ability maximize their wisdom for a given intelligence/knowledge level? Is this even a meaningful question to ask?
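One way to make “appropriately cautious when making drastic choices and appropriately bold when making minor ones” precise is a concave utility of wealth: under logarithmic utility, the very same favorable odds justify a small stake but not a near-total one. A toy sketch, with all numbers hypothetical:

```python
import math

def expected_log_utility(wealth, stake, p_win, payout_mult):
    """Expected log-utility after a gamble: win stake * payout_mult
    with probability p_win, otherwise lose the stake."""
    win = wealth + stake * payout_mult
    lose = wealth - stake
    return p_win * math.log(win) + (1 - p_win) * math.log(lose)

wealth = 100.0
p_win, payout = 0.5, 1.5  # positive expected value in money terms

status_quo = math.log(wealth)
small_bet = expected_log_utility(wealth, 1.0, p_win, payout)   # risk 1% of wealth
huge_bet = expected_log_utility(wealth, 99.0, p_win, payout)   # risk 99% of wealth

print(small_bet > status_quo)  # True: the minor gamble is worth taking
print(huge_bet > status_quo)   # False: the drastic one is not
```

The same gamble flips from “bold” to “reckless” purely as a function of how much is at stake, which is one candidate formalization of stakes-sensitive caution (though it still leaves open where the utility function itself comes from).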
EDIT: fixed fonts (hopefully).
Judging Aaron Swartz’s decision-making capabilities in light of his suicide is a form of survivorship bias (pun not intended, I am terribly sorry). It’s not at all clear that the outcome of his decisions was a predictable consequence of them.
On a societal level, it’s both useful and inspiring to have some people take more risks than others (risk-taking people sometimes bring large benefits to society, risk-averse people get inspiration and some Bayesian evidence about what works from looking at what risk-taking people do). This is probably why whatever personality traits correlate with risk-taking are heritable and vary within populations, and it suggests the reasonable hypothesis that optimal group rationality doesn’t necessarily mean that each individual in the group has optimal individual rationality.
Nitpick: only if “the group” and “the individual” don’t have the exact same utility function, or if optimal rationality is not common knowledge within the group, or if not all individuals are optimal. All of which seems rather suboptimal for “optimal group rationality”, but is not excluded.
If all group members have knowledge that all group members are optimal, and all of them care about the group utility function rather than individualistic interests, they will all randomize who has the sub-personally-optimal strategy. Just like in the Rationalist vs Barbarians scenario.
I encountered a definition for the different concepts in one of my programming classes, while discussing AI (the trivial kind of AI in that case, not AGI), which has stuck with me:
Knowledge is the ability to more usefully apply data. Intelligence is the ability to more usefully apply knowledge. Wisdom is the ability to more usefully apply intelligence.
A key point is that at each level the ability becomes more generalized: knowledge applies to a wide range of data; intelligence applies to a wide range of knowledge, and hence a very wide range of data; and wisdom applies to a wide range of intelligence, and so on. Another key point is that “more usefully” may mean -not- working on a problem.
While hierarchical, each level is not necessarily dependent on the layers beneath it; one can be wise without intelligence, or intelligent without knowledge, or knowledgeable without data. More people have knowledge about quantum physics than have data about it; they can know the results of an experiment, and what it means, without looking at the data produced by that experiment. Whether or not this is a good thing depends on your perspective.
ETA: If this definition doesn’t seem meaningful, try this instead, where data includes process definitions: Knowledge is data about data. Intelligence is data about data about data. Wisdom is data about data about data about data. You can have false knowledge, false intelligence, and false wisdom, just as much as you can have false data. Just as it is difficult to tell good data from bad without knowledge, it is difficult to tell good knowledge from bad without intelligence, and good intelligence from bad without wisdom. Experimentation in each case can also suffice, but is difficult.
Ah, so you are saying… rationality is GOD Over Data?
To be succinct:
Wisdom is the development of good heuristics.
Good heuristics + good priors, I’d say (that’s how you decide between heuristics)
I’ve been thinking about this recently, noticing that my more experienced coworkers have a good deal of experience that tends to trump my intelligence and rationality in some cases.
I think you could relate this to the observation in machine learning that a mediocre algorithm trained on 10 times as much data is often superior to a more sophisticated one trained on less data.
As for applicability to FAI, it might generalize and account for the implications of this in the usual consequentialist fashion.
One general benefit of experience is that it leads to the accumulation of tacit knowledge. I would guess that most tacit knowledge is domain-specific, though (my main experience with tacit knowledge is in mathematics). Do you have evidence that your more experienced coworkers are better at things outside of the workplace?
No serious evidence of outside-workplace competence.
The usual politics-is-the-mind-killer stuff (November), lack of strategic life planning, lack of big-picture thinking, but one can’t seriously expect that stuff from non-LWers.
It is plausible that their wisdom is mostly in-field. Though the field of “working in an engineering startup” is pretty broad.
Can you give an example of your co-workers doing this?
On technical matters, like fluid mechanics, I usually understand the theory better and can work through new problems faster, but the older engineer I work closely with has a bigger bag of heuristics and standard designs (like when to use resistor-network approximations, what the general qualitative shape of this function is, how to solve this problem, etc).
One guy has a big bag of approximately the same advice we derive on LW, but it appears to be derived from practical experience (ideas are not attached to people, do lots of experiments, etc).
It’s hard to lay out specific illustrative examples because each small occurrence is more or less unconnected (there is no underlying theory), and each occurrence is relatively unenlightening.
Anyways, I’ve updated in the direction of “experience/wisdom beats intelligence/rationality, at least at first”.
As far as I can tell, “wisdom” is just a word that refers to the sort of knowledge that (1) defines the person being described as high status, and (2) is the result of extensive experience. When I imagine someone as “wise”, I think of the person looking rather eminent, and most likely sort of old simply because long stretches of experience require long stretches of living—that is being at least somewhat old.
Many people we would label as “smart” make decisions we end up labeling “stupid”. This doesn’t seem very remarkable. When I think of the word “smart”, what comes to mind is a comparatively high mental ability in certain subjects, or someone who’s demonstrated a comparatively high likelihood of coming to interesting insights, or getting good at something requiring strong mental ability, such as chess. Someone meeting those criteria making a decision we end up calling “stupid” seems no more interesting than someone we call “athletic” getting injured.
You’re saying these people—those who we would be likely to label as “smart”, yet sometimes make decisions we would likely call “stupid”—what they’re missing is “wisdom”. This makes it sound like ‘wisdom’ is some sort of component they’re missing, as if this insight would put us on some sort of useful quest, analogous to being told that the way by which to open this box we want to open is “to find the key, which is somewhere in this house” (a clue).
Well, I would rephrase what you’re saying as the completely unremarkable observation that if someone we would likely call “smart” were to make a series of stupid decisions, we would probably be unlikely to call them “wise”. This is a fact about how we employ English words, nothing more. Part of the meaning of “wise” seems to be consistency. Someone erratic, yet “smart”, we would be unlikely to refer to by the word “wise”. I don’t see how this observation could generate any useful hypotheses pertaining to building FAI, or anything like that. As it doesn’t seem to concern anything but definitions, the only application to FAI would, as far as I can tell, be one of suggesting which FAI to call “wise”, and which to not, a rather uninteresting conversation indeed.
Here I just want to point out that although you transitioned to this sentence as if it were part of your general point, and although the grammar may suggest that “wisely” in “choose wisely” derives from “wisdom” or “wise”, it seems to be a slightly different word. “Choosing wisely” just seems to mean choosing based on calm, rational deliberation; in telling someone to “choose wisely” one is suggesting they not be hasty. It doesn’t seem to suggest anything pertaining to extensive experience, or anything like that, as the words “wise” and “wisdom” do.
Call me pedantic, but I’m just trying to show how slippery words can be, and the sort of care that’s necessary to not get sucked into shuffling around words to no real purpose.
You mean calling someone “smart” doesn’t mean it would be tautological to call them “wise”, as in the classic example of calling someone a “bachelor” meaning it would be a tautology to call them an “unmarried man”? Yeah, that much is obvious. Wisdom seems to suggest consistency, but plenty of people we call “intelligent” are rather erratic in certain respects, to no contradiction of that label. Again, I see no interesting insight here. We’re still just discussing English-language conventions.
Yeah, because consistency is a component of the common definition of “wise”. We trust people we would consider consistent more than those we wouldn’t label with that word.
Again, there is an equivocation going on with this sort of transition. Although related in meaning, and sharing the same sequence of characters, the word “wise” in the question “was it wise of him” seems to be of a different meaning than the word “wise” in referring to Aaron as a “wise elder”. The question “was it wise of him” seems no more than just asking whether it was a good idea, whereas the idea of being a “wise elder” seems to be about his experience, etc. Again the definitions are just being moved around in a word shuffle that doesn’t seem to be getting us anywhere.
I don’t know, ones that make them more consistent? Or ones that signal higher social status, or allow them to react more calmly when confronted with shocking situations? As with the rest of your post, you seem to be just asking questions about definitions, or making statements about how we use certain words. I can’t seem to find any real, useful content in your post. It seems like no more than an exercise in messing around with definitions, masquerading as being in some way insightful.
There’s another cliche, though: the wise, low-status person. This person is usually old, though occasionally you get a child who’s wise beyond their years.
I can’t think of any tropes about surprisingly wise middle-aged people.
Parents (or contemporaries of parents) who turn out to be surprisingly wise are a staple of a certain kind of YA fiction.
I wonder if this is a slight reaction to another kind of YA fiction, in which adults are useless, often from stupidity.
Thinking about the wise(1) people I know (real people, not fictional), the property they seem to have in common that others lack is that they habitually reason with all of the data available to them, which includes data about their own habitual behavior and reasoning processes. They rarely if ever seem to have that experience of suddenly realizing that they’ve been doing or believing something which they already knew was the wrong thing to do or believe, but somehow that didn’t seem to matter. That’s not to say that they’re always right, but when they’re wrong it’s easy to identify what data they’re missing, and when that data is supplied they self-correct quickly.
Wisdom, in this sense, is “seeing the forest despite the trees.” It relates to having a well-integrated mind.
By contrast, I seem to class as merely “intelligent” people who are able to reason effectively from a set of data to a justified conclusion, even if they have a habit of neglecting vast chunks of the data they have available. I know lots of intelligent people who apply their intelligence differentially to different domains—who are brilliant at math, for example, but hopeless at working machinery, or skilled engineers who can’t seem to figure out what pisses off their colleagues, or brilliant at developing working models of other people’s motivations but unable to make sense of a stock prospectus, etc. (The example of creationist scientists gets used a lot on this site as well.)
I suspect that wisdom in this sense is distinct from intelligence (EDIT: as an attribute of humans), but that I’m less likely to notice wisdom in an unintelligent person because there are so many implications of the data they have which are obvious to me but not them that it’s easy to assume they aren’t attending to that data in the first place. That’s just speculation, though. It’s true that the people who strike me as wise often also strike me as intelligent.
If I wanted to build a system that demonstrated wisdom in this sense, I don’t think I would do anything special… it’s likely to come for free as an emergent property of a well-designed intelligence. I suspect that “lack of wisdom” in humans is an artifact of our jury-rigged, evolved, confluence-of-a-million-special-purpose-hacks brains.
Conversely, if I wanted to increase wisdom (in this sense) in humans, I would probably focus my attention on attention. It seems to be associated with diffuse focus of attention. Which is consistent with my experience that fear, anxiety, obsession, addiction, and other cognitive patterns that focus attention tend to inhibit wisdom.
====
1 - When I talk about people being wise here, I’m referring to those I intuitively class that way, and then reasoning backwards from that set to get at what the element that leads me to class them that way is. This is, of course, just an element of my own intuitive classification, I don’t mean to present it as some kind of universal definition for “wise” or anything like that. If it reflects your own use of the word as well, great; if not, that’s OK too.
I suspect part of what we ordinarily call “wisdom” involves having the right sort of utility function, which is not something that your decision theory can police. If someone were implementing TDT perfectly in order to fulfill their paperclip-maximizing desires, I doubt we would characterize them as wise.
I agree that a sizable component of wisdom is the choice of utility functions, and some UFs are certainly less wise than others, like in your example (I nearly included it in the OP, actually). However, the means of maximizing utility matters just as much, as some of those actions might backfire spectacularly. For example, a preemptive nuclear strike could be considered a means to secure the future (if you are JFK during the Cuban missile crisis), but one can hardly call it wise. Hence my point about calibration.
Could you please fix the post to not contain four or five different fonts?
An intelligent person may still ask others for assistance, particularly elders and respected individuals, when unsure of their own decision making. They may have confidence in their thought processes, but were they considering the right things? Is there even a right thing to consider in that instance? They desire help applying their intelligence correctly, or rather, efficiently and effectively.
They may wish to decrease overpopulation while lowering hunger rates in industrial Ireland, and think that courses of human infants may solve the issue, but are uncertain. Asking after the wisdom of the masses*, elders, and/or respected persons, they are asked whether realising that end would most effectively and efficiently utilise their intelligence.
Thus sums up my view: wisdom derives from experience, for in order to know where intelligence is best applied, one must have an accurate map to strategise upon.
* That is a set phrase, correct? I wonder about its accuracy.
In other words, it is an ill-defined concept that could mean many things, many of them possibly contradictory. Based on this non-definition I cannot see how you can draw so many conclusions. If I hand you an arbitrary agent, how can you measure its level of wisdom independent from its intelligence?
So like “art”, then.
Should we conclude that art is not a thing, at least in any meaningful sense?
I’m not arguing it’s not a thing, I’m arguing that it’s hard to have an objective debate about it due to being ill-defined. The fact that we are now having this debate is indicative of this. Of course, I have nothing against debating the possible meanings of wisdom. But the subject of the post is ‘quantifying wisdom’ so I expect there to be some quantification.
So the bit where they laid out a definition (noting that it is not universally-agreed upon but contending that it’s useful) and expressed a method for quantifying it didn’t catch your attention?
(IMO it’s not a great post as yet—I think the topic’s fairly important but given how it started I was underwhelmed by where it went. I’m just saying, they did technically do the thing you’re complaining about them not doing here...)
Admitting that a definition is informal doesn’t make it acceptable.
Try this:
“Art is defined as anything I, passive_fist, personally like. I admit this might not be a universally agreed-upon definition, but it is a definition that will be used throughout this article.”
Again, I want to stress that I have nothing against debating the meaning of Wisdom. In fact, I would enjoy debating it. But let’s not pretend we’re having a logical argument here.
Yeah, I’m not sure how that’s appreciably different from defining “intelligence” as “optimization power” other’n that’s a local norm, whereas there is no local norm for “wisdom.”
Intelligence is also a hard-to-define term, no doubt about it. And the same arguments I’ve been giving also apply to intelligence. But when you try to do an even finer act of logical separation—separating wisdom from intelligence—the arguments apply doubly.