Thanks! I’m not sure this is a place where steelmanning is quite the appropriate tool. My past self was optimized for being my past self, not being right. He was mostly just not trying to solve this question.
But, in this case, I think the best tool is more properly called “modeling people” and maybe “empathy”.
Things my past self cared about and/or believed included:
All the probability stuff feels too hard to think about, and it doesn’t seem like it’s really going to help me that much even if I put a lot of work into it. So for me personally, I’m just going to try to “remember base rates” and a few other simple heuristics and call it a day. (I was glad other people took it more seriously, though.)
Truth seems like one of many important things. What matters is getting things accomplished. (I’ve never been optimizing against truth, I have just prioritized other things. There have been times when I, say, only put 20 minutes into checking an essay for being right, rather than 2 hours, even when I had reason to suspect I might have had motivated reasoning.)
I thought (and still think, although less strongly and for more nuanced reasons) that the in-person rationality community is unhealthy because it only selects for a few narrow types of person, who are min-maxed in a particular skillset. And I think the in-person community is important (for both epistemic and instrumental reasons). It is important to be a community that doesn’t actively drive away people who bring other skills to the table.
I still roughly believe all that. The main update is that a) there should be dedicated spaces that focus on truthseeking as their [probably] sacred value, and b) LessWrong should be such a space. (But, as noted in Tensions in Truthseeking, there are still different tradeoffs you can make in your truthseeking frame, and I think it’s good to have spaces that have made different min-max tradeoffs, to explore those tradeoffs. For example, there might be math-heavy spaces, there might be “blunt communication” spaces that optimize for directness, and there might be feelings-heavy spaces that optimize for understanding and owning your internal state.)
(I have made a bit of conceptual progress on the probability stuff. I probably will never do real Bayesian Wizardry, but I think I grok it better now – I can follow some conversations I didn’t used to be able to follow, and in some cases I can participate in and uphold norms that help others on their way to learning it better than I have.)
There is an interesting thing in all this space that I recently re-read while perusing the old critiques of Gleb. A paraphrase of the linked comment is:
I think a problem with effective altruists is that they often end up with a conception that marketing is icky, but that without marketing they are ineffective. I think Gleb might have just said “I’d rather be effective and icky than ineffective and pure.” And this is maybe an unhelpful frame that other people are implicitly using. There are ways to market effectively without actually being icky.
And, while I’m not sure, I think I might have held a frame somewhat like that (I don’t have clear memories of biting either particular bullet). But my current position is “effective altruists should hold to a high epistemic standard, even when marketing, and should learn to market well within those constraints.”