It’s obvious that to interpret my words correctly (as not being obviously wrong), you need to consider only big (cumulative) profit. And again, even if you did win a million dollars, that still doesn’t count, only if you show that you were likely to win a million dollars (even if you didn’t).
The only way I can make sense of your comment is to assume that you’re defining the word “lottery” to mean a gamble with negative expected value. In that case, your claim is tautologically correct but, as far as I can tell, largely irrelevant to a situation such as this, where the point is that we don’t know the expected value of the gamble and are trying to discover it by looking at evidence of its returns.
That the expected value is negative is a fact about our state of knowledge. We need careful studies to show whether a technique/medicine/etc. is effective precisely because, without such a study, our state of knowledge makes the expected value of the technique negative. At the same time, we expect the new state of knowledge after the study to show either that the technique is useful or that it isn’t.
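To put rough numbers on “expected value as a state of knowledge”, here is a minimal sketch in Python; the payoffs, the 5% prior, and the study’s likelihood ratio are all invented for illustration:

```python
# Expected value as a state of knowledge: a toy model with invented numbers.
# Suppose a technique either "works" (benefit +10) or is a dud (cost -1 in
# wasted time). Both payoffs and all probabilities here are assumptions.

def expected_value(p_works, benefit=10.0, cost=-1.0):
    """EV of trying the technique, given our current credence that it works."""
    return p_works * benefit + (1 - p_works) * cost

def update(prior, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.05                      # skeptical prior: 5% chance it works
print(expected_value(prior))      # -0.45 -> negative EV before any study

# A careful study comes back positive; assume a likelihood ratio of 20
# (a positive result is 20x more likely if the technique really works):
p = update(prior, 20)
print(p, expected_value(p))       # ~0.51, ~4.6 -> now positive EV
```

The same machinery run in reverse (a likelihood ratio below 1) would leave the EV even more negative, which is the “either useful or not” outcome described above.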
That’s one of the traps of woo: you often can’t efficiently demonstrate that it’s effective, and through an intuition probably related to conservation of expected evidence, you insist that if there is no better method of showing its effectiveness, the best available method should be enough, because it seems ridiculous to hold the claim to a higher standard of proof on one side than on the other. But you do have to: prior belief plays its part, and the threshold for changing a decision may be too far away to cross by simple arguments. The intuitive thrust of the principle doesn’t carry over to expected utility because of that threshold. It may well be that there is a potential test that could demonstrate the technique is effective, but the test is unavailable, and without performing it the expected value of the technique remains negative.
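For readers unfamiliar with the principle invoked here: conservation of expected evidence says your prior must equal the probability-weighted average of your possible posteriors. A quick check with assumed numbers:

```python
# Conservation of expected evidence: before running a test, the expected
# value of your posterior equals your prior. All numbers are assumptions.

p_works = 0.05            # prior that the technique works
p_pos_if_works = 0.90     # assumed test sensitivity
p_pos_if_not = 0.20       # assumed false-positive rate

p_pos = p_works * p_pos_if_works + (1 - p_works) * p_pos_if_not
p_neg = 1 - p_pos

post_if_pos = p_works * p_pos_if_works / p_pos          # ~0.19
post_if_neg = p_works * (1 - p_pos_if_works) / p_neg    # ~0.007

# Weighted average of the two possible posteriors recovers the prior:
print(p_pos * post_if_pos + p_neg * post_if_neg)        # 0.05
```

The point of the comment is that this identity constrains beliefs, not decisions: the expected posterior equals the prior, yet the decision threshold can sit far above both, so “no test available” leaves the EV negative.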
I’m afraid I’m struggling to connect this to your original objections. Would you mind clarifying?
ETA: By way of attempting to clarify my issue with your objection, I think the lottery example differs from this situation in two important ways. AFAICT, the uselessness of evidence that a single person has won the lottery is a result of:
the fact that we usually know the odds of winning the lottery are very low, so evidence has little ability to shift our priors; and
that in addition to the evidence of the single winner, we also have evidence of incredibly many losers, so the sum of evidence does not favour a conclusion of profitability.

Neither of these seems to be applicable here.
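To illustrate why those two features make the single winner such weak evidence, here is a hedged sketch with invented odds: one winner taken alone looks like 10:1 evidence for a “profitable” lottery, but counting the losers flips it decisively the other way.

```python
# Why "someone won the lottery" is weak evidence once losers are counted.
# The two win probabilities below are invented for illustration.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_profitable = 1e-6   # hypothetical per-ticket odds that would make EV positive
p_ordinary   = 1e-7   # ordinary terrible lottery odds

# Likelihood ratio from one winning ticket considered in isolation:
print(p_profitable / p_ordinary)          # 10.0 in favour of "profitable"

# Likelihood ratio from the full record: 1 winner among 10 million tickets.
n, k = 10_000_000, 1
lr = binom_pmf(k, n, p_profitable) / binom_pmf(k, n, p_ordinary)
print(lr)                                 # ~0.0012, i.e. ~800:1 against
```

Under the profitable hypothesis we should have seen about ten winners among those tickets, so the one-winner record is itself evidence against profitability: the “sum of evidence” point above.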
The analogy is this: using speculative self-help techniques corresponds to playing a lottery. In both cases you expect a negative outcome, and in both cases making one more observation, even an observation of success, even one you experience personally, means very little for the estimate of the expected outcome. There is no analogue in the lottery for studies that support the efficacy of self-help techniques (or of some medicine).

It sounds like you’re saying:
1) the range of conceivably effective self-help techniques is very large relative to the number of actually effective techniques
2) a technique that is negative-expected-value can look positive with small n
3) consequently, using small-n trials on lots of techniques is an inefficient way to look for effective ones, and is itself negative-expected-value, just like looking for the correct lottery number by playing the lottery.
In this analogy, it is the whole self-help space, not the one technique, that is like a lottery.

Am I on the right track?
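If that reading is right, points 2 and 3 are easy to see in a small simulation; every parameter below (the 1% base rate, the success probabilities, the 3-trial test) is an assumption chosen only for illustration:

```python
# "The whole self-help space as a lottery": most candidate techniques are
# duds, and small-n personal trials make many duds look good anyway.
import random

random.seed(0)
N_TECHNIQUES = 1_000
P_EFFECTIVE = 0.01         # assumed: 1% of candidate techniques actually work
P_SUCCESS_IF_WORKS = 0.7   # assumed per-trial success rate when it works
P_SUCCESS_IF_DUD = 0.3     # assumed placebo/noise success rate

looked_good = truly_effective = 0
for _ in range(N_TECHNIQUES):
    works = random.random() < P_EFFECTIVE
    p = P_SUCCESS_IF_WORKS if works else P_SUCCESS_IF_DUD
    successes = sum(random.random() < p for _ in range(3))  # small-n trial
    if successes >= 2:                                      # "it worked for me!"
        looked_good += 1
        truly_effective += works

print(looked_good, truly_effective)  # roughly ~220 "winners", only ~8 real
```

With these made-up numbers, roughly 96% of the techniques that pass the small trial are still duds, so trawling the space this way has the lottery-like structure the list describes.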
I don’t think the principle of charity generally extends so far as to make people reinterpret you when you don’t go to the trouble of phrasing your comments so they don’t sound obviously wrong.
If you see a claim that has one interpretation making it obviously wrong and another one sensible, and you expect a sensible claim, it’s a simple matter of robust communication to assume the sensible one and ignore the obviously wrong one. It’s much more likely that the intended message behind the inapt textual transcription wasn’t the obviously wrong one, and the content of communication is that unvoiced thought, not the text used to communicate it.
it’s a simple matter of robust communication to assume the sensible one and ignore the obviously wrong one.
But if the obvious interpretation of what you said was obviously wrong, then it’s your fault, not the reader’s, if you’re misunderstood.
the content of communication is that unvoiced thought, not the text used to communicate it.
All a reader can go by is the text used to communicate the thought. What we have on this site is text which responds to other text. I could just as well assume you said “Why yes, thoughtfulape, that’s a marvelous idea! You should do that nine times. Purple monkey dishwasher.” if I were expected to respond to things you didn’t say.
My point is that the prior under which you interpret the text is shaped by the expectations about the source of the text. If the text, taken alone, is seen as likely meaning something that you didn’t expect to be said, then the knowledge about what you expect to be said takes precedence over the knowledge of what a given piece of text could mean if taken out of context. Certainly, you can’t read minds without data, but the data is about minds, and that’s a significant factor in its interpretation.
If the text, taken alone, is seen as likely meaning something that you didn’t expect to be said, then the knowledge about what you expect to be said takes precedence
This is why people often can’t follow simple instructions for mental techniques—they do whatever they already believe is the right thing to do, not what the instructions actually say.
That’s overconfidence, a bias, but so is underconfidence.