I’ll have more to say on this in the future, but for now I just want to ramble about something.
I’ve been reading through some of the early General Semantics works. Partially to see if there are any gaps in my understanding they can fill, partially as a historical curiosity (how much of rationality did they have figured out, and what could they do with it?), partially because it might be good fodder for posts on LW (write a thousand posts to Rome).
And somehow a thing that keeps coming into my mind while reading them is the pre-rigorous/rigorous/post-rigorous split Terence Tao talks about, where mathematicians start off just doing calculation and not really understanding proofs, and then they understand proofs through careful diligence, and then they intuitively understand proofs and discard many of the formalisms in their actions and speech.
Like, the early General Semantics writers pay careful attention to many things that I feel like I have intuitively incorporated; they’re trying to be rigorous about scientific thinking (in the sense they mean it), where I think I can be something closer to post-rigorous. Rather than this just being “I’m sloppier than they were”, I think I see at least one place where they’re tripping up (though maybe they’re just trying to bridge effectively to their audience?); the first that comes to mind is an author who, in a discussion of the faults of binarism, makes their case using a surprisingly binarist approach (instead of more scientific, quantitative language).
And so when I see something that seems to say “good math is all about correct deductions”, there’s a part of me that says “well, but… that’s not actually where good math comes from, if you ask the good mathematicians.” There’s a disagreement going on at the moment between Zvi and Elizabeth about what inferences to draw from the limited data we have about the container stacking story. It’s easy for me to tell a story about how Zvi is confused and biased and following the wrong policies, and it’s easy for me to tell a story about how Elizabeth is confused and biased and following the wrong policies. But, for me at least, doing either of those things would be about trying to Win instead of trying to Understand.
And, like, I don’t know; the reason this is a ramble instead of a clear point is that I think I’m saying “don’t bother the professors who talk in intuitions” to someone who is saying “we really need to be telling undergraduates when they make math errors”, and yet somehow I’m seeing in this a vision that’s something more like “people become rational by carefully avoiding errors” instead of something more like “people become rational by trying to cleave as closely as possible to the Way”.
You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory.” But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
For me, what this resonates most clearly with is the interaction I just had with Ben Pace.
Ben was like “X”
And I was like “mostly yes to X, but also no to the implicature that I think surrounds X which is pretty bad.”
And Ben was like “oh, definitely not that! Heck no! Thanks for pointing it out, but no, and also I think I’ll change nothing in response to finding out that many people might draw that from X!”
And my response was basically “yeah, I don’t think that Ben Pace on the ideal LessWrong should do anything different.”
Because the thing Ben said was fine, and the implicature is easy to discard/get past. Like, “almost certainly Ben Pace isn’t trying to imply [that crap], I don’t really need to feel defensive about it, I can just offhandedly say ‘by the way, not [implication]’, and it’s fine for Ben not to have ruled that out, just like it’s fine for Ben not to actively add ‘by the way, don’t murder people’ to every comment.”
But that’s because Ben and I have a high-trust, high-bandwidth thing going.
The more that LW as a whole is clean-and-trustworthy in the way that the Ben-Duncan line segment is clean-and-trustworthy, the less those implications are flying around all over the place and actually real, in the sense that they’re being read into things by large numbers of people who then vote and comment as if they were clearly intended.
I had a similar interaction with Romeo, about the comment I highlighted in the essay; his response was basically “I knew Aella and I were unlikely to fall into that attractor, and didn’t even pause to consider the heathen masses.”
(Nowhere near an exact quote; don’t blame Romeo.)
And Ben was also advocating not even pausing to consider the heathen masses.
And maybe I should update in that direction, and just ignore a constant background shrieking.
But I feel like “ignoring the heathen masses” has gotten me in trouble before on LessWrong specifically, which is why I’m hesitant to just pretend they don’t exist.
EDIT: also last time you made a comment on one of my posts and I answered back, I never heard back from you and I was a little Sad so could you at least leave me a “seen” or something
And maybe I should update in that direction, and just ignore a constant background shrieking.
I’m not sure about this; there is some value in teaching undergrads rigor, and you seem more motivated to do so than I am. And, like, I did like Logan’s comment about rumor, and I think more people observing things like that sooner is better. I think my main hope with the grandparent was to check whether you’re thinking the rigor is the moon or the finger, or something.
My views here aren’t fully clarified, but I’m more saying “the pendulum needs to swing this way for LessWrong to be good” than saying “LessWrong being good is the pendulum being all the way over there.”
Or, to the extent that I understood you and am accurately representing Ben Pace, I agree with you both.
Some strings of wisdom that seem related:

“You have to know the rules before you can break them.”
There has to be some sense that you’re riffing deliberately and not just wrong about the defaults.
The ability to depart from The Standard Forms is dependent on both the level of trust and the number of bystanders who will get the wrong idea (see my and Critch’s related posts, or my essay on the social motte-and-bailey).
“Level three players can’t distinguish level two players from level four players.”
This suggests to me a different idea on how to improve LessWrong: make an automated “basics of disagreement” test. It would involve recognizing a couple of basic concepts like cruxes and common knowledge, looking at some comment threads and correctly diagnosing “what’s going on” in them (e.g., where the participants are talking past each other), and noticing a bunch of useful ways to intervene.
Then if you pass, your username on comments gets a little badge next to it, and your strong vote strength gets moved up to +4 (if you’re not already there).
The idea is to make it clearer who is breaking the rules that they know, versus who is breaking the rules that they don’t know.
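To make the mechanics concrete, here’s a minimal sketch of that rule in Python; the field names, the ✓ badge, and the +2 default strength are hypothetical illustrations of the proposal, not LessWrong’s actual code:

```python
from dataclasses import dataclass

# Hypothetical sketch: passing the "basics of disagreement" test
# grants a badge and raises strong-vote strength to +4 (never lowering it).

@dataclass
class User:
    name: str
    passed_disagreement_test: bool = False
    strong_vote_strength: int = 2  # assumed default before passing

def apply_test_result(user: User, passed: bool) -> None:
    """Record a test attempt, granting the badge and bumping vote strength."""
    if passed:
        user.passed_disagreement_test = True
        # move up to +4 unless the user is already there (or higher)
        user.strong_vote_strength = max(user.strong_vote_strength, 4)

def render_username(user: User) -> str:
    """Show the little badge next to the username on comments."""
    return user.name + (" ✓" if user.passed_disagreement_test else "")

# Example: a user passes the test, gets the badge, and strong-votes at +4.
u = User("alice")
apply_test_result(u, passed=True)
assert render_username(u) == "alice ✓"
assert u.strong_vote_strength == 4
```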
Interestingly, my next planned essay is an exploration of a single basic of disagreement.

EDIT: also last time you made a comment on one of my posts and I answered back, I never heard back from you and I was a little Sad so could you at least leave me a “seen” or something
Seen. Also, which one are you thinking of? I might have had nothing to say, or I might have just been busy when I saw the response and wasn’t tracking that I should respond to it.
https://www.lesswrong.com/posts/57sq9qA3wurjres4K/ruling-out-everything-else?commentId=fdapBc7dnKEhJ4no4

This comment is surprising to me in how important I think this point is.

Not surprising to me given my recent interactions with you and Romeo, but I agree it’s quite important, and I wouldn’t mind a world where it became the main frontier of this discussion.
And somehow a thing that keeps coming into my mind while reading them is the pre-rigorous/rigorous/post-rigorous split Terence Tao talks about, where mathematicians start off just doing calculation and not really understanding proofs, and then they understand proofs through careful diligence, and then they intuitively understand proofs and discard many of the formalisms in their actions and speech.
This is an instance of three levels of mastery.