Over the past couple years, I have updated to “yes, LessWrong should be the place focused on truthseeking.”
Updated to? This wording surprises me, because I’m having trouble forming a hypothesis as to what your earlier position could have been. (I’m afraid I haven’t studied your blogging corpus.) What else is this website for, exactly? Instrumental rationality?
My steelman of this position is something like, “I favored focusing on instrumental rationality because it seemed, well, useful. At the time I figured that this was just a different subject than epistemic rationality, & focusing on it would at worst mean less progress on improving the accuracy of our beliefs. But in hindsight this involved allowing epistemics to get worse for the sake of more instrumental success. I’ve now updated towards that having been a bad tradeoff.”
How close is that?
Thanks! I’m not sure this is a place where steelmanning is quite the appropriate tool. My past self was optimized for being my past self, not being right. He was mostly just not trying to solve this question.
But, in this case, I think the best tool is more properly called “modeling people” and maybe “empathy”.
Things my past self cared about and/or believed included:
All the probability stuff feels too hard to think about, and it doesn’t seem like it’s really going to help me that much even if I put a lot of work into it. So for me personally, I’m just going to try to “remember base rates” and a few other simple heuristics and call it a day. I was glad other people took it more seriously, though.
Truth seems like one of many important things. What matters is getting things accomplished. (I’ve never been optimizing against truth; I have just prioritized other things. There have been times where I, say, only put 20 minutes into checking an essay for being right, rather than 2 hours, when I had reason to suspect I might have had motivated reasoning.)
I thought (and still think, although less strongly and for more nuanced reasons) that the in-person rationality community is unhealthy because it only selects for a few narrow types of person, who are min-maxed in a particular skillset. And I think the in-person community is important (for both epistemic and instrumental reasons). It is important to be a community that doesn’t actively drive away people who bring other skills to the table.
I still roughly believe all that. The main update is that (a) there should be dedicated spaces that focus on truthseeking as their [probably] sacred value, and (b) LessWrong should be such a space. (But, as noted in Tensions in Truthseeking, there are still different tradeoffs you can make in your truthseeking frame, and I think it’s good to have spaces that have made different min-max tradeoffs to explore those tradeoffs. For example, there might be math-heavy spaces, there might be “blunt communication” spaces that optimize for directness, and there might be feelings-heavy spaces that optimize for understanding and owning your internal state.)
(I have made a bit of conceptual progress on probability stuff. I probably will never do real Bayesian Wizardry, but I think I grok it better now: I can follow some conversations I didn’t use to be able to follow, and in some cases I can participate in and uphold norms that help others on their way to learning it better than I have.)
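To make “remember base rates” concrete, here is a minimal worked sketch of why that heuristic earns its keep. It is purely illustrative (the standard rare-condition screening setup, with made-up numbers), not anything from the discussions linked here:

```python
# Illustrative only: why "remember base rates" matters.
# A test that is 95% sensitive and 95% specific, for a condition with a 1% base rate.

base_rate = 0.01            # P(condition)
sensitivity = 0.95          # P(positive | condition)
false_positive_rate = 0.05  # P(positive | no condition)

# Bayes' rule: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

print(f"P(condition | positive) = {p_condition_given_positive:.1%}")  # roughly 16%
```

Most positive results here are false positives, which is exactly what’s easy to miss if you forget the 1% base rate; the simple heuristic gets you most of what the full calculation does.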
There is an interesting thing in all this space that I recently re-read while perusing the old critiques of Gleb. A paraphrase of the linked comment is:
I think a problem with effective altruists is that they often end up with a conception that marketing is icky, and that without marketing they are ineffective. I think Gleb might have just said “I’d rather be effective and icky than ineffective and pure.” And this is maybe an unhelpful frame that other people are implicitly using. There are ways you can market effectively without actually being icky.
And, while I’m not sure, I think I might have held a frame somewhat like that (I don’t have clear memories of biting either particular bullet). But my current position is “effective altruists should hold to a high epistemic standard, even when marketing. But, learn to market well within those constraints.”
Okay, but I thought the idea was that instrumental rationality and epistemic rationality are very closely related. Two sides of the same coin, not two flavors of good thing that sometimes trade off against each other. That agents achieve their goals by means of building accurate models, and using those models to “search out paths through probability” that steer the world into the desired goal-state. If the models aren’t accurate, the instrumental probability-bending magic doesn’t work and cannot work.
Okay, but geez man, my past self had different beliefs. What do you want here? What is your incredulity here aiming to accomplish? If you can’t simulate the mind of a person who showed up on LessWrong with one set of beliefs and gradually updated their beliefs in a set of directions that are common on the site, I think you should prioritize learning to simulate other minds a bit.
What is your incredulity here aiming to accomplish?
I genuinely feel incredulous and am trying to express what I’m actually thinking in clear language? I mean, it’s also totally going to be the case that the underlying generator of “genuinely felt incredulity” is no doubt some sort of subconscious monkey-politics status move designed by evolution to make me look good at the expense of others. It’s important to notice that! But the mere fact of having noticed that doesn’t make the feeling go away, and given that the feeling is there, it’s probably going to leak into my writing. I could expend more effort doing a complicated System-2 political calculation that tries to simulate you and strategically compute what words I should say in order to have the desired effect on you. But not only is that more work than saying what I’m actually thinking in clear language, I also expect it to result in worse writing. Use the native architecture!
I mean, if it’ll help, we can construct a narrative in which my emotion of incredulity, which was designed by evolution to make me look good, actually makes me look bad in local social reality? That’s a win-win Pareto improvement: I don’t have to mutilate my natural writing style in the name of so-called “cooperative” norms, and you don’t have to let my monkey-politics brain get away with “winning” the interaction.
How about this? Incredulity is, definitionally, a failed prediction. The fact that I felt incredulous means that my monkey status instincts are systematically distorting my anticipations about the world, making me delusionally perceive things as “obvious” exactly when they’re things that I coincidentally happened to already know, and not because of their actual degree-of-obviousness as operationalized by what fraction of others know them. (And conversely, I’ll delusionally perceive things as “nonobvious” exactly when I coincidentally happened to not-know them.)
(Slaps forehead) Hello, Megan! Ten years into this “rationality” business, and here I am still making rookie mistakes like this! How dumb can I get?
I think you should prioritize learning to simulate other minds a bit
Thanks, this is a good suggestion! I probably am below average at avoiding the typical mind fallacy. You should totally feel superior to me on this account!
I think there are separate worthwhile skills here: “focus on learning empathy/modeling and let clear language flow from that”, and also writing skills that exist separately from epistemics (such as brevity, which I think actually factors in here a bit).
Something that may not have been clear from my past discussion is that when I say “this could have been written in a way that was less triggering”, or something like that, I don’t (usually) mean it as a harsh criticism. Just, the sort of thing that you should say ‘ah, that makes sense. I will work on that’ for the future.
Just, the sort of thing that you should say ‘ah, that makes sense. I will work on that’ for the future.
It’s actually not clear to me that I should work on that. As a professional hazard of my other career, I’m pretty used to people trying to use “You would be more persuasive if you were nicer” as an attempted silencing tactic; if I just believed everyone who told me that, I would never get anything done.