You can’t fight fire with fire. Getting out of a tightly wound x-risk trauma spiral involves grounding and building trust in yourself, not being scared into applying the same rigidity in the opposite direction.
The comment is generally illuminating but this particular sentence seems too snappy and fake-wisdomy to be convincing. Would you mind elaborating?
There’s a class of things that could be described as losing trust in yourself and in your ability to reason.
For a mild example, a friend of mine who tutors people in math recounts that many people have low trust in their ability to reason mathematically. He often asks his students to speak out loud while solving a problem, to find out how they are approaching it. And some of them will say something along the lines of, “well, at this point it would make the most sense to me to [apply some simple technique], but I remember that when our teacher was demonstrating how to solve this, he used [some more advanced technique], so maybe I should instead do that”.
The student who does that isn’t trusting that the algorithm of “do what makes the most sense to me” will eventually lead to the correct outcome. Instead, they’re trying to replace it with “do what I recall an authority figure doing, even if I don’t understand why”.
Now it could be that the simple technique is wrong to apply here, and the more advanced one is needed. But if the student had more self-trust and tried the thing that made the most sense to them, then their attempt to solve the problem using the simple approach might help them understand why that approach doesn’t work and why they need another approach. Or maybe it’s actually the case that the simple approach does work just as well—possibly the teacher did something needlessly complex, or maybe the student just misremembers what the teacher did. In which case they would have learned a simpler way of solving the problem.
Whereas if the student always just tries to copy what they remember the teacher doing—guessing the teacher’s password, essentially—even if they do get it right, they won’t develop a proper understanding of why it went right. The algorithm that they’re running isn’t “consider what you know of math and what makes the most sense in light of that”, it’s “try to recall instances of authority figures solving similar problems and do what they did”. Which only works to the extent that you can recall instances of authority figures solving problems highly similar to the one that you are dealing with.
Why doesn’t the student want to try their own approach first? After all, the worst that could happen is that it wouldn’t work and they would have to try something else, right?
But if you have math trauma—if you’ve had difficulties with math and been humiliated for it—then trying an approach and failing at it isn’t something that you could necessarily just shrug at. Instead, it will feel like another painful reminder that You Are Bad At Math and that You Will Never Figure This Out and that You Shouldn’t Even Try. It might make you feel lost and disoriented and make you hope that someone would just tell you what to do. (It doesn’t necessarily need to feel this extreme—it’s enough if the thought of trying and failing just produces a mild flinch away from it.)
In this case, you need to find some reassurance that trying and failing is actually safe. To build trust in the notion that even if you do fail once, or twice, or thrice, or however many times it takes, you’ll still be able to learn from each failure and figure out the right answer eventually. That’s what enables you to do the thing that’s required to actually learn. (Of course, some problems are just too hard and then you’ll need to ask someone for guidance—but only after you’ve exhausted every approach that seemed promising to you.)
Now that’s how it looks in the case of math. It’s also possible to lose trust in yourself in other domains; e.g. Anna mentions here how learning about AI risk sometimes destabilizes self-trust when it comes to your career decisions:
Over the last 12 years, I’ve chatted with small hundreds of people who were somewhere “in process” along the path toward “okay I guess I should take Singularity scenarios seriously.” From watching them, my guess is that the process of coming to take Singularity scenarios seriously is often even more disruptive than is losing a childhood religion. Among many other things, I have seen it sometimes disrupt: [...]
People’s understanding of when to use their own judgment and when to defer to others.
“AI risk is really really important… which probably means I should pick some random person at MIRI or CEA or somewhere and assume they know more than I do about my own career and future, right?”
And besides domain-specific self-trust, there also seems to be some relatively domain-general component of “how much do you trust your own ability to figure stuff out eventually”. In all cases, I suspect that the self-mistrust has to do with feeling that it’s not safe to trust yourself—because you’ve been punished for doing so in the past, or because you feel that AI risk is so important that there isn’t any room to make mistakes.
But you still need to have trust in yourself. Knowing that yes, it’s possible that trusting yourself will mean that you do make the wrong call and nothing catches you and then you die, but that’s just the way it goes. Even if you decided to outsource your decisions to someone else, not only would that be unlikely to work, but you’d still need to trust your own ability in choosing who to outsource them to.
Scott Alexander has also speculated that depression involves a global lack of self-trust—predictive processing suggests that various neural predictions come with associated confidence levels. And “low global self-trust” may mean that the confidence your brain assigns even to predictions like “it’s worth getting up from bed today” falls low enough so as to not be strongly motivating.
To go back to Elizabeth’s original sentence… looked at from a certain angle, Val’s post can be read as saying “those thoughts that you have about AI risk? Don’t believe in them; believe in what I am saying instead”. Read that way, that’s a move that undermines self-trust. Stop thinking in terms of what makes sense to you, and replace that with what you think Val would approve of.
And while Val’s post is not exactly talking about a lack of self-trust, it’s talking about something in a related space. It’s talking about how some experiences have been so painful that the body is in a constant low-grade anxiety/vigilance response, and the person isn’t able to stop and be with those unpleasant sensations—similar to how the math student isn’t able to stop and try out uncertain approaches, as it’s too painful for the student to be with the unpleasant sensations of shame and humiliation of being Bad At Math.
Both “I’m feeling too anxious to trust myself” and “I’m feeling too anxious to stop thinking about AI” are problems that orient one’s attention away from bodily sensations. “You can’t fight fire with fire”—you can’t solve anxiety about AI with a move that creates more unpleasant bodily sensations and makes it harder to orient your attention to your body.
IDK if helpful, but my comment on this post here is maybe related to fighting fire with fire (though Elizabeth might have been more thinking of strictly internal motions, or something else):
https://www.lesswrong.com/posts/kcoqwHscvQTx4xgwa/?commentId=bTe9HbdxNgph7pEL4#comments

And gjm’s comment on this post points at some of the relevant quotes:

https://www.lesswrong.com/posts/kcoqwHscvQTx4xgwa/?commentId=NQdCG27BpLCTuKSZG
That’s a super reasonable request that I wish I was able to fulfill. Engaging with Val on this is extremely costly for me, and it’s not reasonable to ask him to step out of a conversation on his own post, so I can’t do it here. I thought about doing a short form post but feature-creeped it to the point where it was infeasible.
…it’s not reasonable to ask [Val] to step out of a conversation on his own post…
If it’s understood that I’m not replying because otherwise the contribution won’t happen at all rather than because I have nothing to say about it, then I’m fine stepping back and letting you clarify what you mean. If that helps.
Sure, no big deal.