My message is: it can happen to you, and thinking it can’t is more dangerous than doing nothing at all.
With the balancing message: Some people are a lot less vulnerable to believing bullshit than others. Many on LessWrong have brains biased, relative to the population, towards devoting resources to bullshit prevention at the expense of engaging in optimal signalling. For these people, actively focusing on second-guessing themselves is a dangerous waste of time and effort.
Sometimes you are just more rational, and pretending that you are not is humble but neither rational nor practical.
I can see that I’ve failed to convince you and I need to do better.
In my experience, the sort of thing you’ve written is a longer version of “It can’t happen to me, I’m far too smart for that” and a quite typical reaction to the notion that you, yes you, might have security holes. I don’t expect you to like that, but there it is.
You really aren’t running OpenBSD while those less rational people run Windows.
I do think being able to make such statements of confidence in one’s immunity takes more detailed domain knowledge. Perhaps you are more immune and have knowledge and experience—but that isn’t what you said.
I am curious as to the specific basis you have for considering yourself more immune. Not just “I am more rational”, but something that’s actually put it to a test?
Put it this way: I have knowledge and experience of this stuff, and I still bother second-guessing myself.
(I can see that this bit is going to have to address the standard objection more.)
I can see that I’ve failed to convince you and I need to do better.
This is a failure mode common when other-optimising. You assume that I need to be persuaded, put that down as the bottom line, and then work from there. It leaves no room for the possibility that I know more about my relative areas of weakness than you do. This is a rather bizarre position to take given that you don’t even have significant familiarity with the wedrifid online persona, let alone me.
In my experience, the sort of thing you’ve written is a longer version of “It can’t happen to me, I’m far too smart for that” and a quite typical reaction to the notion that you, yes you, might have security holes. I don’t expect you to like that, but there it is.
It isn’t so much that I dislike what you are saying as that it seems trivial and poorly calibrated to the context. Are you really telling a LessWrong frequenter that they may have security holes, as though you are making some kind of novel suggestion that could trigger insecurity or offence?
I suggest that I understand the entirety of the point you are making and still respond with the grandparent. There is a limit to how much intellectual paranoia is helpful, and under-confidence is a failure of epistemic rationality even if it is encouraged socially. This is a point that you either do not understand or have been careful to avoid acknowledging for the purpose of presenting your position.
I am curious as to the specific basis you have for considering yourself more immune. Not just “I am more rational”, but something that’s actually put it to a test?
I would be more inclined to answer such questions if they didn’t come with explicitly declared rhetorical intent.
I am curious as to the specific basis you have for considering yourself more immune. Not just “I am more rational”, but something that’s actually put it to a test?
I would be more inclined to answer such questions if they didn’t come with explicitly declared rhetorical intent.
No, I’m actually interested in knowing. If “nothing”, say that.