“As one shocked 42-year-old manager exclaimed in the middle of a self-reflective career planning exercise, ‘Oh, no! I just realized I let a 20-year-old choose my wife and my career!’”
-- Douglas T. Hall, Protean Careers of the 21st Century
Sorry; I didn’t realize that I could still post. I went ahead and posted it.
Here’s the latest version, which I will attempt to post at the top level when I again have enough karma.
“Life Experience” as a Conversation-Halter
Sometimes in an argument, an older opponent will claim that perhaps as I grow older, my opinions will change, or that I’ll come around on the topic. Implicit in this claim is the assumption that age, or quantity of experience, is a proxy for legitimate authority. Such “life experience” may well be necessary for an informed, rational worldview, but it is not sufficient.
The claim that more “life experience” will completely reverse an opinion suggests that the person making it believes the opinion rests primarily on an accumulation of anecdotes, perhaps shaped by availability bias. That is actually a pretty decent assumption, because for the most part, people aren’t Bayesian. The work of Haidt, Kahneman, and Tversky, among others, confirms this.
When an opponent appeals to more “life experience,” it’s a last resort and a conversation halter, deployed when the opponent is cornered. The claim is nearly an outright acknowledgment of a move to exit the realm of rational debate. Why stick to rational discourse when you can shift to trading anecdotes? Doing so levels the playing field, because anecdotes, while Bayesian evidence, are easily abused, especially for complex moral, social, and political claims. As rhetoric, this is frustratingly effective, but it’s logically rude.
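To make the weight of an anecdote concrete, here is a minimal sketch, in odds form, of how a Bayesian would update on a single anecdote versus a systematic study; both likelihood ratios are made-up numbers of mine, chosen purely for illustration:

```python
# Odds-form Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# Both likelihood ratios below are made-up numbers, purely for illustration.

def update_odds(prior_odds, likelihood_ratio):
    """Update the odds in favor of a hypothesis given one piece of evidence."""
    return prior_odds * likelihood_ratio

def probability(odds):
    return odds / (1 + odds)

prior_odds = 1.0  # start at even odds, P = 0.5

# One vivid anecdote: only slightly more likely if the claim is true.
after_anecdote = update_odds(prior_odds, likelihood_ratio=1.2)

# One well-designed study: much more likely if the claim is true.
after_study = update_odds(prior_odds, likelihood_ratio=10.0)

print(f"after one anecdote: P = {probability(after_anecdote):.2f}")  # ~0.55
print(f"after one study:    P = {probability(after_study):.2f}")     # ~0.91
```

On these assumed numbers, decades of anecdotes can legitimately move the posterior, but no single one deserves the decisive weight the appeal to “life experience” implicitly assigns it.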
Although it might be rude and rhetorically weak, a Bayesian would at least have the authority to be condescending to a non-Bayesian in an argument. Conversely, it is downright maddening when a non-Bayesian condescends to a Bayesian, because the non-Bayesian lacks the epistemological authority to warrant such condescension. E.T. Jaynes wrote in Probability Theory about the arrogance of the uninformed: “The semiliterate on the next bar stool will tell you with absolute, arrogant assurance just how to solve the world’s problems; while the scholar who has spent a lifetime studying their causes is not at all sure how to do this.”
Hi Morendil,
Thanks for the comment. The version you are commenting on is an earlier, worse draft than the one I posted and then pulled this morning. In particular, I completely changed the claim about the Sokal affair.
Due to what I fear was an information cascade of negative karma, I pulled the post so that I might make revisions.
The criticism applies to both this earlier version and the newer one from this morning, though. After the immediate negative feedback, I too realized that I was combining two different points, poorly, and losing both of them in the process. I think I need to split this into two posts, or cut the point about academia entirely. In the future version, I will also concede that anecdotes are evidence.
Unfortunately I was at exactly 50 karma, and now I’m back down to 20, so it will be a while before I can try again. I’ll be working on it.
There is a difference between science, a.k.a. basic research, and technology, a.k.a. applied science. A popular justification for funding basic research is that it is subject to the positive external effects you mention, but this inappropriately conflates science and technology. Technology doesn’t suffer from those external effects: the patent system and the profit motive make technological goods and services excludable.
Right, so a “public” library is a good example of a good that is provided publicly but has little economic justification as such. “Public good” is a technical term in economics, referring to something narrower than everyday usage: a good that is both nonrivalrous and nonexcludable.
A book is excludable, even if somewhat nonrivalrous. It’s rivalrous in the sense that it can’t be checked out to multiple people at once, but nonrivalrous in the sense that, over an extended period, a single library copy can be consumed by many more people than a book kept on a shelf in someone’s private home.
With a subscription model, a library could operate without generating positive external effects.
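To make the taxonomy behind these comments explicit, here is a minimal sketch of the standard rivalry/excludability grid from economics; the grid itself is textbook material, but the example goods are my own illustrative judgment calls:

```python
# Standard 2x2 classification of goods by rivalry and excludability.
# The example goods are illustrative judgment calls, not settled facts.

def classify(rivalrous: bool, excludable: bool) -> str:
    if rivalrous and excludable:
        return "private good"          # e.g., a book on your own shelf
    if rivalrous and not excludable:
        return "common-pool resource"  # e.g., an open fishery
    if not rivalrous and excludable:
        return "club good"             # e.g., a journal subscription
    return "public good"               # e.g., national defense

# A paywalled, peer-reviewed article: nonrivalrous but excludable.
print(classify(rivalrous=False, excludable=True))  # -> club good
```

On this grid, a paywalled journal article and a subscription library are club goods: nonrivalrous but excludable.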
So, it turns out that power affects what kind of moral reasoning a person uses.
Yes, rivalrousness and excludability come in degrees, but that’s irrelevant here. Scientific knowledge isn’t nonexcludable.
Let’s be precise with our language. Scientific knowledge is produced in respected, formal, peer-reviewed journals, and such journals charge for access to that knowledge. We shouldn’t be sloppy with how we define scientific knowledge: there is a lot of knowledge about science, but that’s not the same thing as scientific knowledge, which is produced by a specific, formal, institutional process.
Mancur Olson’s The Logic of Collective Action might serve as a very useful theoretical tool here for talking about groups. We might extend Olson’s analysis by asking how different kinds of groups produce rationality and scientific information.
I’m sorry; how is scientific knowledge a public good? Yes, it is nonrivalrous in consumption, but certainly not nonexcludable. Legitimate, peer-reviewed journals charge for subscriptions, individual issues, or even for individual articles online.
Via Tyler Cowen, Max Albert has a paper critiquing Bayesian rationality.
It seems pretty shoddy to me, but I’d appreciate analysis here. The core claims seem more like word games than legitimate objections.
This sounds very Foucauldian, almost straight out of Discipline and Punish.
I’m not Seth Godin, by the way.
We can discuss both epistemic and instrumental rationality.
So I finally picked up a copy of Probability Theory: The Logic of Science, by E.T. Jaynes. It’s pretty intimidating and technical, but I was surprised by how much prose there is, which makes it unexpectedly palatable. We should recommend it more here on Less Wrong.
I find excruciating honesty a worthy ideal, but not everyone is prepared for it. Plainly describing everything you intend to signal and counter-signal might come off as eccentric, but it’s worth doing if you can pull it off. It requires the right kind of audience.
Eliezer, how is progress coming on the book on rationality? Will the body of it be the sequences here, but polished up? Do you have an ETA?
Yes! Both you and Kaj Sotala seem right on the money here. Deontology falls flat. A friend once observed to me that consequentialism is a more challenging stance to take, because one needs to know more about any particular claim in order to defend an opinion about it.
I know it’s been discussed here on Less Wrong, but Jonathan Haidt’s research is really great, and relevant to this discussion. Professor Haidt’s work has validated David Hume’s assertion that we humans do not reason our way to our moral conclusions. Instead, we intuit the morality of an action, and then provide shoddy reasoning as justification one way or the other.
Mike Gibson has a great and interesting question. How would Bayesian methodology address this? Might this be an information cascade?
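For concreteness, here is a minimal simulation sketch in the style of the Bikhchandani–Hirshleifer–Welch model of information cascades; the signal accuracy, the tie-breaking rule, and the cascade threshold are simplifying assumptions of mine:

```python
import random

# Sequential binary-choice model in the style of Bikhchandani, Hirshleifer,
# and Welch (1992). Parameters here are simplifying assumptions.

SIGNAL_ACCURACY = 0.7  # P(private signal matches the true state)

def simulate_cascade(n_agents, true_state, seed=0):
    rng = random.Random(seed)
    actions = []
    net_public = 0  # inferred true-signals minus false-signals revealed so far

    for _ in range(n_agents):
        signal = true_state if rng.random() < SIGNAL_ACCURACY else not true_state
        private = 1 if signal else -1

        if abs(net_public) >= 2:
            # Cascade: public evidence outweighs any one private signal,
            # so the agent imitates and its action reveals nothing new.
            action = net_public > 0
        else:
            total = net_public + private
            # Follow the balance of evidence; break ties with one's own signal.
            action = signal if total == 0 else total > 0
            net_public += private  # outside a cascade, actions reveal signals

        actions.append(action)
    return actions

print(simulate_cascade(20, true_state=True))
```

Once the public evidence outweighs any single private signal, each agent rationally imitates the crowd, so a long run of identical votes can reflect two early movers rather than many independent judgments.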
Prof. Hanson,
I’m 22, and I haven’t yet encountered a situation where I thought to use this claim. There are probably instances where it would have been factually appropriate for me to do so, but I’m not inclined to make this point, because it seems to me like a cop-out.
Maybe I would have difficulty explaining something highly technical or specialized to someone with no background, but crying “life experience” doesn’t seem like the proper response; it’s far too vague. It would be more appropriate to direct my debate partner to the specialized or technical material they haven’t studied, so they can understand why my position might differ.
The problem is that a nebulous appeal to “life experience” doesn’t even specify how the debate partner is uninformed. It’s as if the person with more “life experience” is on such a higher level of understanding that they can’t even communicate how their additional information informs their position. Like Silas Barta, I’m skeptical that even the most informed and educated people would ever be simply unable to explain the basic ideas of even the most difficult material. When the claim is not used to explain how training or experience leads to a different conclusion, I suspect that, more often than not, the differing position isn’t actually grounded in specialized knowledge; the line of argumentation has simply run out of steam.
In critiquing postmodernism, Noam Chomsky wrote, “True, there are lots of other things I don’t understand: the articles in the current issues of math and physics journals, for example. But there is a difference. In the latter case, I know how to get to understand them, and have done so, in cases of particular interest to me; and I also know that people in these fields can explain the contents to me at my level, so that I can gain what (partial) understanding I may want.”