But I suspect there’s a lot of typical mind fallacy in the parts that sound more universal and less “here’s what happened to and worked for me”.
In parts of this I’m talking to the kind of person who could benefit from being spoken to about this.
My experience is that folk who need support out of tough spots like this have a harder time hearing the deeper message when it’s delivered in carefully caveated epistemically rigorous language. It shoves them too hard into thinking, and usually in ways that activate the very machinery they’re trying to find a way to escape.
I know that’s a little outside the discourse norms of LW. Caveating things not as “People experience X” but instead as “I experienced X, and I suspect it’s true of some others too”. I totally respect that has a place here.
Just not so much when trying to point out an exit.
For me, I went through my doomsday worries in my teens and twenties, long before AI was anything to take seriously.
I like you sharing your experience overview here. Thank you. I resonate with a fair bit of it, though I came at it from a really different angle.
(I grew up believing I’d live forever, then “became mortal” at 32. Spent a few years in nihilistic materialist hell. A lot of what you’re saying reminds me of what I was grappling with in that hell. Now that’s way, way more integrated — but probably not in a way the LW memeplex would approve of.)
I lived in “nihilistic materialist hell” between the ages of 5 (when it hit me what death meant) and ~10. It—belief in the inevitable doom of myself and everyone I cared for and ultimately the entire universe to heat death—was at times directly apprehended and completely incapacitating, and otherwise a looming unendurable awareness which for years I could only fend off using distraction. There was no gamemaster. I realized it all myself. The few adults I confided in tried to reassure me with religious and non-religious rationalizations of death, and I tried to be convinced but couldn’t. It was not fun and did not feel epic in the least, though maybe if I’d discovered transhumanism in this period it would’ve been a different story.
I ended up getting out of hell mostly just by developing sufficient executive function to choose not to think of these things, and eventually to think of them abstractly without processing them as real on an emotional level.
Years later, I started actually trying to do something about it. (Trying to do something about it was my first instinct as well, but as a 5-year-old I couldn’t think of anything to do that bought any hope.)
But I think the machinery I installed in order to not think and not feel the reality of mortality is still in effect, and actually inhibits my ability to think clearly about AI x-risk, e.g., by making it emotionally tenable for me to do things that aren’t cutting the real problem—when you actually feel like your life is in danger, you won’t let motivated reasoning waste your EV.
This may be taken as a counterpoint to your invitation in this post. But I think it’s just targeted, as you say, at a subtly different audience.
My experience is that folk who need support out of tough spots like this have a harder time hearing the deeper message when it’s delivered in carefully caveated epistemically rigorous language.
I kinda feel like my reaction to this is similar to your reaction to frames:
I refuse to comply with efforts to pave the world in leather. I advocate people learn to wear shoes instead. (Metaphorically speaking.)
To be more explicit, I feel like… sure, I can believe that sometimes epistemic rigor pushes people into thinky-mode and sometimes that’s bad; but epistemic rigor is good anyway. I would much prefer for people to get better at handling things said with epistemic rigor, than for epistemic rigor to get thrown aside.
And maybe that’s not realistic everywhere, but even then I feel like there should be spaces where we go to be epistemically rigorous even if there are people for whom less rigor would sometimes be better. And I feel like LessWrong should be such a space.
I think the thing I’m reacting to here isn’t so much the lack of epistemic rigor—there are lots of things on LW that aren’t rigorous and I don’t think that’s automatically bad. Sometimes you don’t know how to be rigorous. Sometimes it would take a lot of space and it’s not necessary. But strategic lack of epistemic rigor—“I want people to react like _ and they’re more likely to do that if I’m not rigorous”—feels bad.
But strategic lack of epistemic rigor—“I want people to react like _ and they’re more likely to do that if I’m not rigorous”—feels bad.
That’s not what I meant.
I mean this much more like switching to Spanish when speaking with a Mexican store clerk. We can talk about the virtues of English all we want to, and maybe even justify that we’re helping the clerk deepen their skill with interfacing with the modern world… but really, I just want to communicate.
You can frame that as dropping standards in order to have a certain effect on them, but that’s a really damn weird frame.
I think this relies on “Val is not successfully communicating with the reader” being for reasons analogous to “Val is speaking English which the store clerk doesn’t, or only speaks it poorly”. But I suspect that if we unpacked what’s going on, I wouldn’t think that analogy held, and I would still think that what you’re doing seems bad.
(Also, I want to flag that “justify that we’re helping the clerk deepen their skill with interfacing with the modern world” doesn’t pattern match to anything I said. It hints at pattern matching with me saying something like “part of why we should speak with epistemic rigor is to help people hear things with epistemic rigor”, but I didn’t say that. You didn’t say that I did, and maybe the hint wasn’t intentional on your part, but I wanted to flag it anyway.)