Any sufficiently advanced wisdom is indistinguishable from bullshit
In the grand tradition of sequences, I’m going to jot this down real quick because it’s required for the next argument I’m going to make.
Shalmanese’s third law is “Any sufficiently advanced wisdom is indistinguishable from bullshit”. Shalmanese’s first law is “As the length of any discussion involving the metric system approaches infinity, the probability approaches 1 that someone will reference the Simpsons episode about 40 rods to the hogshead”, so judge it by the company it keeps.
Imagine you got to travel back in time to meet yourself from 10 years ago and impart as much wisdom as possible to your past-self in 6 hours. You’re bound by the Time Enforcement Committee not to reveal that you are the future-self of your past-self, and it never occurs to your past-self that this ugly thing in front of them could ever be you. As far as the past-self is concerned, it’s just a moderately interesting person they’re having a conversation with.
There would be three broad sets that your discussions would fall into: beliefs that you both already agree on, beliefs that you are able to convince your past-self of through reason, and beliefs which make the past-self regard your future-self as actively stupid for holding. It’s this third category which I’m going to term Advanced Wisdom.
Those beliefs will be specific to each individual. Maybe you used to be devoutly religious and are now a staunch atheist. Perhaps you were once a radical Marxist and are now a staunch libertarian. For me, it was the wisdom of the advice to “be yourself”. I have no doubt that I would get precisely nowhere convincing my past-self that “be yourself” is a piece of wisdom. Anything I could possibly say to him, he had already heard many times before and convinced himself was utter bullshit. If even my actual self couldn’t convince my past-self of something, what hope would any rational argument have had of penetrating?
If my future-self were to visit my present-self now, I have no doubt that he, too, would present me with pieces of advanced wisdom that I would think were bullshit. The problem is, sufficiently advanced wisdom is indistinguishable from bullshit. There is no possible test that can separate the two. You might be told something is advanced wisdom, keep the most open mind possible about it, investigate it from every angle, and perhaps even be convinced by it, and maybe it was actual wisdom that convinced you. Then again, you could just have been convinced by bullshit. As a result, advanced wisdom, as a concept, is completely, frustratingly useless in an argument. If you’re on the arguer’s side, you know that any assertion of advanced wisdom is going to be taken as just more bullshit; if you’re on the arguee’s side, any assertion of advanced wisdom looks like the rambling of a deluded fool.
The one positive thing this law has led me to is a much higher tolerance for bullshit. I’m no longer so quick to dismiss ideas which, to me, seem like obvious bullshit.
I’m unable to distinguish Shalmanese’s third law from bullshit. If it is true, then it may in fact be advanced wisdom. I still think it’s bullshit.
Being a science and maths geek, I’ve tended to dismiss a lot of philosophy as bullshit, and have only recently begun to realise that (some of) what I’ve dismissed is actually valid and interesting. Of course, one place where this effect is incredibly strong is when a parent is arguing with a child, i.e. the “teenagers think they know everything” syndrome.
If this were true, it would mean that any advanced wisdom distinguishable from bullshit is insufficiently advanced. I don’t think that’s true.
Some caveats to this comment:
1) Not my original insight; this is a paraphrase of Gehm’s Corollary to Clarke’s Third Law.
2) The original’s “sufficiently advanced” and the corollary’s “insufficiently advanced” don’t seem to be answering the same “sufficient for what?”
Agreed. My best guess for correcting the statement is something like “any wisdom separated by a sufficiently large inferential distance from my current state is indistinguishable from bullshit,” which might be closer to the truth (though I still wonder whether it’s really a monotone function from inferential distance to understandability).
Well said.
I would be at a loss for words in addressing my 10-years-past self. In fact, all my life I’ve been able to look back at my earlier selves and think, “how could I possibly have lived in something so small?” And yet I can’t think of anything I could tell them that they could use to change the course of their lives. At most I might be able to point them toward sources of ideas that were available before I happened to come across them, but beyond that I don’t know what I could say.
I’d prefer to reserve the word “bullshit” for “arguments made to advance an agenda with no regard for the truth of the matter”, per Frankfurt. I think you’re using the term to mean “crazy stuff that is just wrongheaded, even if sincerely believed by the idiot in question”; to capture this concept I propose “horseshit”. Thus, one can be a bullshitter, but not a horseshitter (or rather, we’re all horseshitters to one extent or another).
Semantic distinctions are popular here, apparently.
Semantic distinctions are vital to efficient communication. Think of the crimes against good communication that have been committed by indiscriminate, over-broad use of terms like “freedom,” “democracy,” “efficiency,” “equality,” and “terrorism,” to name just a few. Words carry important connotations, and we cannot help but evoke these connotations when we hear such words, even if they’ve been redefined so that they no longer deserve those connotations.
This is not to endorse this particular distinction; “bullshit” does appear inaccurate, but the suggested alternative is scarcely better.
I can’t readily think of something I used to strongly believe but later strongly disbelieved. Taking what I can, here’s my compressed pellet of wisdom:
“Graduate students often tell Michael Wilson they think they know how to create smarter-than-human artificial intelligence that won’t fail to care about us. They are probably always wrong, as Eliezer Yudkowsky could tell you. Don’t bite off more than you can chew; you’re prone to this. By the way, the interlibrary loan system is awesome.”
I hope all of those are statements that I would understand and agree with well enough. If I remember correctly, my reaction to the whole Friendly AI idea was, “Oh yeah. Heh.”
I would be extremely suspicious of any new “wisdom” I’ve acquired in the past 10 years that I found myself unable to explain to my past self in the course of six hours. Any sufficiently incommunicable wisdom is, in fact, bullshit.
I’d be extremely suspicious that I’d stopped maturing if my self of 10 years from now could get along perfectly with my self of today. Take an informal poll of the people around you; I’ll bet the vast majority of them would regard their past selves as frustratingly irritating because of all the missing advanced wisdom.
Personally, I hope my future selves will have improved at getting along with people in general, even people who lack wisdom, and especially if they have privileged knowledge of the contents of the other person’s head.
I would be extremely depressed if I were unable to grasp a concept communicated by my near-future self over the space of 6 hours.
I’d be interested to see how people’s ages play into this.
For example, I would expect that some college students who want to be teachers might find tweens decent company whereas others would be horrified at the prospect, and that 90-year-olds might be sympathetic towards 80-year-olds, whereas 30-year-olds might more often regret drinking too much and studying too little in college.
Though in large part that’s propaganda to make the question interesting rather than a solid prediction.
You may need months of full-time effort to get up to speed in math or physics to have the concepts necessary to understand what some new assertion means, even before you start considering whether it’s true.
Work on better calibration. Make sure not to term “impossible” what will turn out to be true. Be aware of inferential distances: maybe you just don’t get the intended meaning (but this is also a problem with presentation).
The advanced wisdom you describe is basically just experiential knowledge that has not been well described or quantified. It seems like bullshit because: (1) it’s not described specifically enough to be applicable (what exactly does ‘be yourself’ mean?), and (2) anecdotes are scope-insensitive (there may well be a vast sea of evidence for how ‘being yourself’ is useful, but there is certainly some evidence as to how it can be damaging, and the two are not quantified for comparison). In general we should give such experiential anecdotes some weight, especially when many people describe similar experiences, but we should not assume such reports are representative of anything more than the singular experiences of individuals.
No, it’s not (only) experiential knowledge. It’s about the basic framework through which you view the world. More experience isn’t going to help if you keep on fitting it within the same inaccurate model.
How do you posit the model is formed or updated if not through experience? The reason why your self of 10 years ago doesn’t believe ‘just be yourself’ will work is that he hasn’t experienced what actually happens yet, and everything you say is purely theoretical for him. Even giving examples of other people following your advice might not work, because those people are not him, and the idea that this will work for him is still theoretical. Now, if the population in question were picked to be as similar as possible to the subject, and the variable (being or not being one’s self) was well defined and well controlled, then a good rationalist would indeed take the result seriously and not just say, “this advice is bullshit,” though he still might be uncertain as to whether or not it would work for him.
I think that statement is true only for time-constrained arguments. It takes time to research and understand the prerequisites to any “advanced wisdom,” so to speak. Likewise, it takes time to understand the flaws in untrue things, and to notice your own biases. Finally, even if you understand the evidence and arguments leading up to some great insight, it takes time to fully understand the ramifications of the idea. If you’re time-constrained like in your example, your past self simply can’t process everything fast enough and the absurdity heuristic wins.
This is where I have to disagree with you. There are plenty of ways to quickly and accurately rule out most incorrect beliefs without accidentally ruling out correct beliefs. Many of them are mentioned on this site.
If you think Christians are Christians (to pick an arbitrary example) because of time constraints, then you’re in for a rude shock.
Actually, I do think the reason a lot of Christians are Christians is that it takes a lot of time for someone else to deconvert them.
To deconvert a religious person with a high school education, you usually need to touch on a lot of topics: the scientific method, beliefs and evidence, anthropics, biology, evolution, the problem of evil, cosmology, reductionism, and cognitive biases. It takes time for people to explain and comprehend all these things.
Conjecture: the amount of time needed to escape the Christian paradigm is very large (say, a year of concentrated effort), so Christians are Christians due to time constraints: they don’t see being a Christian as an issue worth putting that much time into. (Reference to the post where Eliezer talked about robots lifting refrigerators and teacups or whatever goes here.)
Note: The converse is not true. Not all bullshit looks like advanced wisdom.
Does more advanced wisdom look like bullshit than bullshit looks like advanced wisdom? I doubt it. Bullshit is selected for appearances.
By the OP’s definition of ‘advanced wisdom’, all advanced wisdom looks like bullshit, by definition.
That (re)definition makes Shalmanese’s third law tautological rather than clearly false. That’s fine, so long as no attempt is made to draw any conclusions about, well, actual advanced wisdom.
No matter how silly a piece of BS is, someone will call it wisdom.
No matter how well reasoned and well supported factually something is, someone will call it BS.
Neither matters particularly unless you are trying to convince the other; what matters is “Is this useful to me?”
I don’t quite understand what this post is trying to say, but it reads like incredibly deep wisdom. +1.
But does this hold up if you broaden the question? Could I (if I spoke ancient Greek) convince Plato of quantum mechanics and general relativity? Yes, but not in six hours. After all, it took me more than a decade of schooling to get there, and that was with generations of scientists pre-digesting it for me. On the other hand, if I spent weeks or months explaining algebra and calculus in geometric terms, doing simple experiments to demonstrate Newton’s laws and the concept of a mathematical natural law, and then giving a historical overview of nineteenth-century chemistry, physics, thermodynamics, and astronomical observations, then yes, I could probably communicate this advanced wisdom. I suspect this depends on the material constituting knowledge which is Truly Part Of Me.
I agree with mattnewport (to a degree)… I happen to think that this law is indistinguishable from bullshit, yet I also happen to believe that it is not bullshit (which is where we diverge).
I had thought something similar when I was about 16, when someone three times my age accused me of BSing. I asked them, “How do you know that what I am saying is BS? It could just be so alien to your life experience as to have no basis for comparison.”
Addendum: The same could be said for wisdom given to us by alien life forms. It could be so alien to us that we would have no basis for comparison, yet it could be the secret to happiness… We also might not have any way to distinguish their real excrement from magical technology. It might be the case that some aliens have feces that opens up wormholes into other dimensions, where said feces then moves (in order to rid this universe of it)… Yet, imagine what said aliens would think if they saw us playing with their shit in order to try to create and manipulate wormholes… Huh???? Huh?!
Circular: “I call (enlightened future self’s) beliefs that seem like bullshit Advanced Wisdom”; “sufficiently advanced wisdom is indistinguishable from bullshit”.
So, this amounts to a definition.
The question is (to be addressed in your next post?): how many important things does it describe? Will anything I encounter fall into the category (Advanced Wisdom)? And how does the utility × probability of that compare with the cost of considering and rejecting, and the risk of mistakenly accepting, some real bullshit?
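To make that comparison concrete, here is one possible formalization, a minimal sketch in my own notation (none of these symbols come from the post): let $p$ be the probability that a bullshit-looking claim is actually Advanced Wisdom, $U$ the utility of adopting it if it is, $C$ the cost of seriously considering it, and $D$ the damage from mistakenly accepting it when it really is bullshit. Taking the claim seriously is worthwhile roughly when

$$p \cdot U - (1 - p) \cdot D > C$$

The third law’s bite is that, if Advanced Wisdom really is indistinguishable from bullshit, inspecting the claim itself cannot raise $p$ above the base rate of wisdom among bullshit-looking claims.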
The only defense of the imperial system against the metric system that I’m aware of (other than “it would be a hassle to change”) is Lloyd Mintern’s.
I’m undecided on the message of this post.
Bullshit!
It sounds as though this manual recalibration did not take much account of the traditional role of the bullshitometer—namely insulating you from bullshit.