I got the impression from Damien Broderick’s book that a lot of psi researchers do understand physics and aren’t postulating that psi phenomena use the sort of physical interactions that gravity or radio waves use...
“Not understanding basic physics” doesn’t really seem to cut it in either case
“Not understanding basic physics” sounds like a harsh quasi-social criticism, like “failing at high-school material”. But that’s not exactly what’s meant here. Rather, what’s meant is more like “not being aware of how strong the evidence against psi from 20th-century physics research is”.
The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H [EDIT: technically, this is not necessarily true, but it usually is in practice, and becomes more likely as P(H|M) approaches 0]. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.
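To make that decomposition concrete, here is a minimal numeric sketch with purely illustrative numbers of my own (nothing below is taken from Carroll or from the psi literature). P(H) is a mixture of P(H|M) and P(H|¬M), weighted by how confident we are in M, so in the case the bracketed caveat describes, where P(H|M) is well below P(H|¬M), anything that pushes P(M) toward 1 squeezes P(H) down toward P(H|M).

```python
# Illustrative numbers only; the point is the shape of the dependence, not the values.
p_H_given_M = 1e-6     # probability of psi-like effects if the model M (e.g. QFT) holds
p_H_given_notM = 0.1   # probability of psi-like effects if M is wrong

def p_H(p_M):
    """P(H) by the law of total probability, as a function of confidence in M."""
    return p_H_given_M * p_M + p_H_given_notM * (1 - p_M)

for p_M in (0.5, 0.9, 0.99, 0.999999):
    print(f"P(M) = {p_M:<9} ->  P(H) = {p_H(p_M):.2e}")
# As high-precision experiments push P(M) toward 1, P(H) falls toward P(H|M);
# in that sense, evidence for M functions as evidence against H.
```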
Evidence distinguishes between models; it is not evidence for individual models. There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.
By the Bayesian definition of evidence, “evidence for” a hypothesis (including a “model”, which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.
Carroll claims that current data implies the probability of such models being correct is near zero. So I’d like to invoke Aumann here and ask what your explanation for the disagreement is. Where is Carroll’s (and others’) mistake?
If models are just complex conjunctions of hypotheses then the evidence that confirms models will often confirm some parts of the model more than others. Thus the evidence does little to distinguish the model from a different model which incorporates slightly different hypotheses.
That is all I meant.
Yes, but this depends on what other hypotheses are considered in the “false” case.
The “false” case is the disjunction of all other possible hypotheses besides the one you’re considering.
That’s not computable. (EDIT: or even well defined). One typically works with some limited ensemble of possible hypotheses.
Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional “something not on this list” hypothesis that covers everything else.
You appear to be thinking in terms of ad-hoc statistical techniques (“computable”, “one typically works...”), rather than fundamental laws governing belief. But the latter is what we’re interested in, in this context: we want to know what’s true and how to think, not what we can publish and how to write it up.
Let me put it this way: excluding a hypothesis from the model space is merely the special case of setting its prior to zero. Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses, even if no hypothesis goes from possible to not or vice-versa.
As this is prior dependent, there is no objective measure of whether a hypothesis is supported or rejected by evidence.
This is obviously true when we look at P(H_i|e). It’s a bit less so when we look at P(e|H) vs. P(e|~H), which seems objective. It is objective in the case where H and ~H are atomic hypotheses with a well-defined rule for getting P(e|~H). But if ~H is a disjunction of “all the other theories”, then P(e|~H) depends on the prior probabilities of each of the H_i that are the subcomponents of ~H. It’s also utterly useless by itself for judging H; we want to know P(H|e) for that. (P(e|H) is, of course, why we want P(H): so we can make useful predictions.) A toy numerical illustration of this prior-dependence is given below.
It is true that in the long run enough evidence will eventually dominate any prior. But summarizing this as “log odds”, for instance, is only useful for comparing two specific hypotheses, not “this hypothesis” and “everything else”.
But I still have objections to most of what you say.
You’ve given an essentially operational definition of “evidence for” in terms of operations that can’t be done.
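Here is the toy sketch promised above, with made-up numbers that are not from this thread: when ~H is a disjunction of alternatives H_i, P(e|~H) is a prior-weighted average of the P(e|H_i), so the very same observation can count for or against H depending on how the prior mass is spread over those alternatives.

```python
def p_e_given_not_H(alt_likelihoods, alt_priors):
    """P(e|~H) where ~H is a disjunction of alternatives: a prior-weighted average."""
    total = sum(alt_priors)
    return sum(l * p for l, p in zip(alt_likelihoods, alt_priors)) / total

p_e_given_H = 0.5              # likelihood of the observation under H (made up)
alt_likelihoods = [0.9, 0.1]   # P(e|H_1), P(e|H_2) for two rival hypotheses (made up)

for alt_priors in ([0.8, 0.2], [0.2, 0.8]):
    p_not = p_e_given_not_H(alt_likelihoods, alt_priors)
    verdict = "for" if p_e_given_H > p_not else "against"
    print(f"priors over alternatives {alt_priors}: "
          f"P(e|~H) = {p_not:.2f}, so e is evidence {verdict} H")
# The observation e and the likelihoods are identical in both runs; only the priors
# over the alternatives changed, and with them the verdict on H.
```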
Yes. The standard way to express that is that you can’t actually work with P(Hypothesis), only P(Hypothesis | Model Space).
You can then, of course, expand your model space if you find your model space is inadequate.
“Computable” is hardly ad-hoc. It’s a fundamental restriction on how it is possible to reason.
If you want to know how to think, you had better pick a method that’s actually possible.
This really is just another facet of “all Bayesian probabilities are conditional.”
And you shouldn’t do that.
Yes, of course. The point is that if you’re using probability theory to actually reason, and not merely to set up a toy statistical model such as might appear in a scientific paper, you will in fact already be “considering” all possible hypotheses, not merely a small important-looking subset. Now it’s true that what you won’t be doing is enumerating every possible hypothesis on the most fine-grained level of description, and then computing the information-theoretic complexity of each one to determine its prior—since, as you point out, that’s computationally intractable. Instead, you’ll take your important-looking subset just as you would in the science paper, let’s say H1, H2, and H3, but then add to that another hypothesis H4, which represents the whole rest of hypothesis-space, or in other words “something I didn’t think of”/”my paradigm is wrong”/etc. And you have to assign a nonzero probability to H4.
No, see above. In science papers, “paradigm shifts” happen, and you “change your model space”. Not in abstract Bayesianism. In abstract Bayesianism, low-probability events happen, and you update accordingly. The result will look similar to “changing your model space”, because what happens is that when H4 turns out to be true (i.e. its probability is raised to something high), you then start to carve up the H4 region of hypothesis-space more finely and incorporate these “new” sub-hypotheses into your “important-looking subset”.
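As a toy illustration of that procedure (hypotheses and numbers invented for the example, not anyone’s actual estimates): keep an explicit catch-all H4 with a nonzero prior, update all four hypotheses when a surprising observation lands, and if H4 ends up dominating, take that as the cue to carve it into finer sub-hypotheses.

```python
# H1..H3 are the "important-looking subset"; H4 is "something I didn't think of".
priors = {"H1": 0.50, "H2": 0.30, "H3": 0.19, "H4": 0.01}
# Likelihood of a surprising observation under each hypothesis; the value assigned
# to the catch-all is necessarily a rough guess.
likelihoods = {"H1": 0.001, "H2": 0.002, "H3": 0.001, "H4": 0.3}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
for h, p in posteriors.items():
    print(f"P({h}|e) = {p:.2f}")
# A low-probability event has happened and H4 now carries most of the posterior;
# the next move is to split the H4 region into finer sub-hypotheses, which from
# the outside looks like "changing your model space".
```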
To return to the issue at hand in this thread, here’s what’s going on as I see it: physicists, acting as Bayesians, have assigned very low probability to psi being true given QFT, and they have assigned a very high probability to QFT. In so doing, they’ve already considered the possibility that psi may be consistent with QFT, and judged this possibility to be of near-negligible probability. That was done in the first step, where they said “P(psi|QFT) is small”. It doesn’t do to reply “well, their paradigm may be wrong”; yes, it may, but if you think the probability of that is higher than they do, then you have to confront their analysis. Sean Carroll’s post is a defense of the proposition that “P(psi|QFT) is small”; Jack’s comment is an assertion that “psi & QFT may be true”, which sounds like an assertion that “P(psi|QFT) is higher than Sean Carroll thinks it is”—in which case Jack would need to account somehow for Carroll being mistaken in his analysis.
This is basically my position. ETA: I may assign a high probability to “not all of the hypotheses that make up QFT are true”, a position I believe I can hold while not disputing the experimental evidence supporting QFT (though such evidence does decrease the probability of any part of QFT being wrong).
I don’t think Carroll’s analysis comes close to showing that P(psi|QFT) is 1 in a billion. He took one case, a psychokinesis claim that no one in parapsychology endorses, and showed how it was impossible given one interpretation of what the claim might mean. We can’t look at his analysis and take it as convincing evidence that the claims of parapsychologists aren’t consistent with QFT, since Carroll doesn’t once mention any of the claims made by parapsychologists!
Now there are some studies purporting to show psychokinesis (though they are less convincing than the precognition studies and actually might just be a kind of precognition). Even in these cases no one in parapsychology thinks the perturbations are the result of EM or gravitational fields; Carroll pointing out that they can’t be shouldn’t result in us updating on anything.
I actually think a physicist might be able to write a convincing case for why the claims of parapsychologists can’t be right. I think there is a good chance I don’t grasp just how inconsistent these claims are with known physics—and that is one of the reasons why fraud/methodology problems/publication bias still dominate my probability space regarding parapsychology. But Carroll hasn’t come close to writing such a case. I think the reason you think he has is that you’re not familiar with a) the actual claims of parapsychologists or b) the various but inconclusive attempts to explain parapsychology results without contradicting the experimental evidence confirming QFT.
The worked example he provides is what physics would require to exist (a new force that is somehow of at least comparable strength to electromagnetism but that has somehow never been detected by experiments so sensitive that they would detect any new force more than a billionth the strength of gravity) for telekinesis to exist at all. And there are indeed parapsychologists who claim telekinesis is worth investigating.
It is not unreasonable for Carroll, having given a worked example of applying extremely well-understood physics to the question, to then expect parapsychologists to apply extremely well-understood physics to their other questions. His point (as he states in the article) is that they keep starting from an assumption that science knows nothing relevant to the questions parapsychologists are asking, rather than starting from an assumption that known science could be used to make testable, falsifiable predictions.
He doesn’t have to do the worked example for every phenomenon that parapsychology claims is worth serious investigation to make his point valid. Ignoring the existence of relevant known science is one reason parapsychology is a pseudoscience (a partial imitation) rather than science.
I could be wrong, but I think you added to this comment since I replied. Since all of my comments on the topic are getting downvoted without explanation I’ll be short.
But not spoon bending so much. In any case, being concerned about force fields is only worthwhile if you assume that what is going on is cause and effect, which many, maybe most, of the attempted explanations don’t.
This is really getting away from what Komponisto and I were talking about. I’m not really disputing the claim that parapsychology is a pseudoscience. I’m disputing the claim that Carroll’s analysis shows that the claims of parapsychology are fundamentally ruled out by current physics. I haven’t really thought about demarcation issues regarding parapsychology.
But he gives no evidence that parapsychologists start from this assumption. Plenty of parapsychologists know that no force fields produced by the brain could be responsible for the effects they think they’ve found. That’s sort of their point, actually.
There are lots of silly people in the field who think the results imply dualism, of course—but that’s precisely why it would be nice to have materialists tackle the questions.
There are no significant results from parapsychologists who are aware of physics. Instead, we have results from parapsychologists that claim statistical significance that have obviously defective experimental design and/or (usually and) turn out to be unreplicable.
That is, you describe sophisticated parapsychologists but the prominent results are from unsophisticated ones.
Cite?
ETA: Bem, for example, whose study initiated this discussion, has a BA and did graduate work in physics.