Ideally, reviews would be written by people who read the posts last year, so they could reflect on how their thinking and actions changed. Unfortunately, I only discovered this post today, so I lack that perspective.
Posts relating to the psychology and mental well-being of LessWrongers are welcome, and I feel like I take a nugget of wisdom from each one (though I always fail to import the entirety of the wisdom the author is trying to convey).
The nugget from “Here’s the exit” that I wish I had read a year ago is “If your body’s emergency mobilization systems are running in response to an issue, but your survival doesn’t actually depend on actions on a timescale of minutes, then you are not perceiving reality accurately.” I panicked when I first read Death with Dignity (I didn’t realize it was an April Fools’ joke… or was it?). I went into full fight-or-flight when there was no reason to. That ties into another piece of advice I needed to hear, from Replacing Guilt: “stop asking whether this is the right action to take and instead ask what’s the best action I can identify at the moment.” I don’t know if these sentences have the same punch when removed from their context, but I feel like they would have helped me. This wisdom extends beyond AI safety anxiety and generalizes to all irrational anxiety. I expect that having these sentences available to me will help me calm myself next time something raises my stress level.
I can’t speak to the rest of the wisdom in this post. “Thinking about a problem as a defense mechanism is worse (for your health and for solving the problem) than thinking about a problem not as a defense mechanism” sounds plausible, but I can’t say much about its veracity or its applicability.
I would be interested to see research done to test the claim. Does increased sympathetic nervous system activation cause decreased efficacy? A correlational study could classify people in AI safety by (self-reported?) efficacy and measure their stress levels, but causation is always trickier to establish than correlation.
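To make that analysis step concrete, here is a minimal sketch of the correlational test, assuming we had paired per-participant measurements of self-reported efficacy and some resting stress measure. Everything here is hypothetical; the numbers are invented for illustration:

```python
# Minimal sketch of the proposed correlational analysis. The data are
# invented: a self-reported efficacy score (0-10) and a resting stress
# measure (e.g., mean galvanic skin response) for each participant.
from scipy import stats

efficacy = [6.5, 4.0, 7.2, 3.1, 5.8, 8.0, 2.9, 6.1]  # self-reported, 0-10
stress   = [2.1, 3.8, 1.9, 4.5, 2.7, 1.5, 4.9, 2.4]  # e.g., mean GSR (microsiemens)

r, p = stats.pearsonr(efficacy, stress)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
# A negative r would be consistent with the claim, but as noted above,
# correlation alone can't show that stress causes reduced efficacy.
```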
A flood of comments criticized the post, especially for typical-minding. The author responded with many comments of their own, some of which were heavily upvoted and agreed with, and some of which were heavily downvoted and disagreed with. A follow-up post from Valentine would ideally address the criticism and consolidate the valid points from the comments into the post.
A sequence or book compiled from the wisdom of many LessWrongers discussing their mental health struggles and discoveries would be extremely valuable to the community (and to me, personally) and a modified version of this post would earn a spot in such a book.
I like the tone of this review. That might be because it scans as positive about something I wrote! :D But I think it’s at least in part because it feels clear, even where it’s gesturing at points of improvement or further work. I imagine I’d enjoy more reviews written in this style.
I would be interested to see research done to test the claim. Does increased sympathetic nervous system activation cause decreased efficacy [at AI research]?
If folk can find ways of isolating testable claims from this post and testing them, I’m totally for that project.
The claim you name isn’t quite the right one though. I’m not saying that people being stressed will make them bad at AI research inherently. I’m saying that people being in delusion will make what they do at best irrelevant for solving the actual problem, on net. And that for structural reasons, one of the signs of delusion is having significant recurring sympathetic nervous system (SNS) activation in response to something that has nothing to do with immediate physical action.
The SNS part is easy to measure. Galvanic skin response, heart rate, blood pressure, pupil dilation… basically hooking them up to a lie detector. But you can just buy a GSR meter and mess with it.
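As a minimal sketch of what “messing with a GSR meter” could feed into: one crude way to quantify SNS activation from raw conductance samples is to subtract a slow-moving tonic baseline and count phasic spikes above it. The function below assumes a stream of conductance readings; the window and threshold values are invented for illustration, and a real setup would calibrate against each person’s own baseline:

```python
# Crude arousal index from raw GSR samples: count readings that jump
# above the local slow-moving (tonic) baseline by more than a threshold.
import statistics

def arousal_events(gsr, window=50, threshold=0.05):
    """Count samples where conductance spikes above the local tonic baseline."""
    events = 0
    for i in range(window, len(gsr)):
        baseline = statistics.fmean(gsr[i - window:i])  # slow tonic level
        if gsr[i] - baseline > threshold:               # phasic spike
            events += 1
    return events

# Toy demo with invented samples: a flat baseline, then a brief spike.
samples = [1.00] * 60 + [1.10, 1.12, 1.05] + [1.00] * 10
print(arousal_events(samples))  # -> 2 (the two samples well above baseline)
```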
I’m not at all sure how to address the questions of (a) identifying when something is unrelated to immediate physical action, especially given the daughter’s arm phenomenon; or (b) whether someone’s actions on net have a positive effect on solving the AI problem.
E.g., it now looks plausible that Eliezer’s net effect was to accelerate AI timelines while scaring people. I’m not saying that is his net effect! But I’m noting that AFAIK we don’t know it isn’t.
I think it would be extremely valuable to have some way of measuring the overall direction of some AI effort, even in retrospect. Independent of this post!
But I’ve got nuthin’. Which is what I think everyone else has too.
I’d love for someone to prove me wrong here.
A sequence or book compiled from the wisdom of many LessWrongers discussing their mental health struggles and discoveries would be extremely valuable to the community (and to me, personally)…

This is a beautiful idea. At least to me.
I’m glad you enjoyed my review! Real credit for the style goes to whoever wrote the blurb that pops up when reviewing posts; I structured my review around it.
When it comes to “some way of measuring the overall direction of some [AI] effort,” conditional prediction markets could help. “Given I do X/Y, will Z happen?” Perhaps some people need to run a “Given I take a vacation, will AI kill everyone?” market in order to let themselves take a break.
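As a minimal sketch of the decision rule such markets would support, assuming each conditional market resolves only in its own branch and its price can be read as P(outcome | action). The prices below are invented:

```python
# Compare a pair of conditional prediction markets, reading each price as
# P(bad outcome | action). Which action do the markets favor?
def compare_conditional_markets(p_bad_given_x: float, p_bad_given_y: float) -> str:
    """Return which action the markets favor, where the outcome is bad."""
    if p_bad_given_x < p_bad_given_y:
        return "markets favor X"
    if p_bad_given_y < p_bad_given_x:
        return "markets favor Y"
    return "markets are indifferent"

# "Given I take a vacation, will AI kill everyone?" vs. the no-vacation branch.
# Invented prices: the markets barely move, so take the vacation.
print(compare_conditional_markets(p_bad_given_x=0.100, p_bad_given_y=0.101))
```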
What would be the next step to creating a LessWrong Mental Health book?