Worth noting that the reason SquirrelInHell is dead is that they committed suicide after becoming mentally unstable, likely in part due to experimentation with exotic self-modification techniques. This one in particular seems fine AFAICT, but, ya know, caveat utilitor.
This seems reasonable to note; at the same time, I think that a lot of people who end up badly after experimenting with exotic self-modification techniques do so despite rather than because of the techniques.
This technique seems best if your problem is that your thoughts tend to often go down loopy, unproductive, distressing paths, in a way that you can self-diagnose with confidence. Which is totally a real thing! I used to find my brain making up imaginary offenses people had committed against me, and I would feel angry or vindictive for a moment. Fortunately I developed a thought pattern that immediately just notes “… and that NEVER ACTUALLY HAPPENED,” and then I move on from the moment. That’s a situation where it’s really easy to notice a bad thought pattern and change it, cutting out any real world action. And once I’d done it a couple times, I started noticing this as an overall cognitive strategy.
Another example is from my work as an engineer. During my first year or so doing research, I noticed several bad patterns of thought and behavior: throwing things out prematurely when I’d made a mistake, doing overly complex mental math, and trying to emergency-correct mistakes rather than going to my desk and working out an actual plan for a solution.
But in these cases, while “noticing my thoughts” was key to the solution, because it interrupted a bad pattern of behavior, it was noting the bad outcome, then working backwards to a specific root cause that got me there. Continuously monitoring my stream of thoughts was not part of this process. It seems like a technique of continuous thought-monitoring would be more important if the problem you were having was with your thoughts themselves. If your problem manifests as behavior, then paying attention to the stream of behavior and figuring out the root cause seems best.
Yeah, I considered explicitly leaving that note at the beginning but felt like this was just sufficiently different from the thing that led to their suicide that adding “WARNING! BUT ALSO I’M NOT THAT WORRIED?” didn’t seem overall worth it.
Romeosteven’s comment updates me a bit, though my current guess is this is still a fairly different reference class of problem (and the post comes with its own warnings about the thing romeo is pointing at, assuming I understand it properly).
Man, it does make me sad that whenever I bring up this technique, there’s an obligatory version of this conversation.
That’s understandable. But it does seem like the sort of thing I’d want to hear about before trying such a technique. Hopefully people can take it for what it’s worth. (i.e. I don’t think we should automatically discount such techniques or anything.)
I think that’s somewhat reasonable in this case, but, want to flag that it should be possible at some point to reach an epistemic state where you can say “okay, yeah, it was mostly coincidence, or at least not relevant, that this happened to this person.” Like, if someone invented a car, and then used the car to commit suicide by driving over a cliff, you might go “holy shit, maybe I should be worried about cars and suicide?”, and if you didn’t know much about cars maybe this would be a reasonable thing to worry about at first. But, like, it shouldn’t be the case that forever after, whenever someone sells a car, they warn you that the guy who invented cars used them to commit suicide. It’s privileging a hypothesis.
I think in this case it’s less crazy than in the car case to worry about that, but, I do want to push back against the impulse to always have a disclaimer here.
In cases like this I strongly prefer to be given the facts (or at least pointed toward them) and allowed to make my own judgment as to how relevant they are.
Whether you choose to join the conversation and present the argument for their irrelevance is up to you, but sharing all the facts that your audience might consider important, rather than deciding for them that some apparently-relevant ones are best left unsaid, is IMO more respectful and reduces the risk of doing preventable harm in cases where your judgment is mistaken.
In the car case I think it’s obvious that car usage is not causally upstream of suicidality. If the inventor of the car died in a car accident, I do think that would be a relevant data point about the safety of cars, albeit not one that needs to be brought up every time. And in the real world, we do pretty universally talk about car crashes and how to avoid them when we’re teaching people to drive. From that perspective Romeosteven’s comment is probably better and mine just got more upvotes because of the lurid details. (Although tail risks are important, and I think there’s a way in which the author’s personality can get imprinted in a text, which makes the anecdote slightly more relevant than in the car case.)
Is your worry more about “maybe this technique is more dangerous than it looks?” or “maybe people will follow up on this by generally following SquirrelInHell’s footsteps, and maybe not all those footsteps are safe?”
More the latter. Or more like, doing things like this technique too much/too hard could be dangerous.
I think that might be true, but, at that level, I think it kinda makes more sense to put the warning over, like, the entirety of rationality techniques, and singling out the ones that SquirrelInHell wrote up doesn’t actually seem like the right abstraction.
Like, I do generally think there’s a failure mode to fall into here. I don’t think SquirrelInHell is the only person to have fallen into it.
This post does seem like it warrants some specific warnings (which the original post already included). But I think those warnings are mostly unrelated to what ultimately went wrong.
Source/evidence? I believe you but this seems worth checking.