My idea would be to give a truncated version of a point made in Truly Part of You.
The different sound-bite ways to say it are:
True knowledge regenerates.
Only believe something once you recognize how you would learn it some other way.
Your beliefs should be unaffected by your choice of labels.
Low inferential distance explanation: When learning about something, the most important thing is to notice what you’ve been told. Not understand, but notice: what kinds of things would you expect to see if you believed these claims, versus if you did not? Are you being told about some phenomenon, or just some labels? Once you’ve noticed what you’re being told, think about how it plugs into the rest of your knowledge: what implications does this body of knowledge have for other fields, and vice versa? What discoveries in one area would force you to believe differently in the others? When you can answer these questions, you have a meaningful, predictive model of the world that can be phrased under any choice of labels.
(Aside: When you are at the stage where most of your knowledge regenerates, I call that the highest level of understanding.)
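To make the “phenomenon vs. labels” test concrete, here is a minimal sketch (my own illustration, not from Truly Part of You; the function name and the numbers are invented for the example): a claim carries content only when observations are more or less likely under it than under its negation.

```python
# Minimal sketch: a claim has predictive content only if it assigns
# different probabilities to an observation than its negation does.

def odds_update(prior_odds, p_obs_given_claim, p_obs_given_not_claim):
    """Return posterior odds on the claim after seeing the observation.

    If the two likelihoods are equal, the observation teaches nothing:
    you were handed labels, not a phenomenon.
    """
    likelihood_ratio = p_obs_given_claim / p_obs_given_not_claim
    return prior_odds * likelihood_ratio

# A claim with content: the observation is 4x likelier if the claim is true.
print(odds_update(1.0, 0.8, 0.2))  # 4.0: the belief shifts

# An empty label: it predicts nothing either way, so the odds never move.
print(odds_update(1.0, 0.5, 0.5))  # 1.0: no update, no content
```

A belief whose odds can never move, no matter what you observe, is a password rather than a model; that is the sense in which it fails to regenerate.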
Btw, I had seen this in the open thread and had been thinking about a response; this is what I settled on.
“Wisdom is like a tree. Cut away the leaves of predictions, the branches, even the roots, but from a single seed the whole structure can be rebuilt.
Foolishness is like a heap of stones. Stack them up however you please, paint bright colors to catch the eye, call them ancient or sacred or mysterious, and yet a child could scatter them.”
Very well said! Is that your own phrasing?
It is.
If I were to make a top-level post on how to rephrase truthful things to sound like mysticism or poetry, how many times do you think it would be downvoted?
People seemed to like Twelve Virtues of Rationality and Harry Potter and the Methods of Rationality.
Yes, but those are polished outputs, and (no offense) have your halo effect to back them up. I’m talking about sketching out a more generalized algorithm which accepts highly technical explanations as input, and produces output which a member of the general public would intuitively recognize as ‘wise,’ while retaining the input’s truth-value.
There are algorithms for that? My brain just does it automatically on request.
(Also, I presented HPMOR to a new audience with my name stripped off just to check if people still liked what I wrote without the halo effect.)
Of course there are algorithms. The question is whether they have been adequately documented yet.
It’s not the poetry that’s the problem, it’s the mysticism. Your quote sounds like the former, not the latter.
Or maybe “ancient wisdom” is the right term for what your version sounds like. But the point is, it tells people why to think a certain way, and if they endorse it, they endorse a good truth-seeking procedure for the right reason, which is the important part.
By the way, I had googled “wisdom is like a tree” before asking you, and it didn’t seem to turn up any existing quotations. It surprised me that no one had famously compared wisdom to a tree—not in a positive sense, anyway.
It’s a good analogy, and, if you’re into that kind of thing, you can extend it even further: trees (can) yield fruit, the seed stays dormant if it isn’t in an environment that lets it grow, all the seeds follow a similar path as they develop…
That’s only a negative sense if you’re working with the assumption that the biblical God is a good guy, an assumption which (given the sheer volume of genocide He committed personally, through His direct subordinates, or demanded of His human followers) simply does not hold up to scrutiny by any widely accepted modern standard of ‘good.’ I mean, look at Genesis 3:22 if nothing else.
I say do it. Literary style is a huge obstacle to the dissemination of skepticism.
−13. (Well, actually I estimate 18 upvotes and 5 downvotes, for a net score of +13: effectively −13 downvotes.)
You claim to be good at explaining things. If you have time, you should take a crack at some more short explanations.
I agree. I’m taking suggestions for rationalist concepts (including information-theoretic ones) that are regarded as notoriously difficult to explain, or as having a high inferential distance.
I’m working on some articles along those lines, but I’d be more interested in which topics others think I could explain better than the standard accounts do.