Successfully tested hypotheses are more likely than untested hypotheses, but testable hypotheses are not more likely than untestable hypotheses. A lot of people commit this mistake; your post does not, but it does sort of suggest it.
The rational way to establish the probability of a hypothesis is by testing it.
If a hypothesis is untestable in principle then its probability is zero, or undefined if you prefer. There’s no way to assign any probability to it.
If it’s impractical to test a hypothesis—e.g. if it would cost a trillion dollars to build a suitable particle accelerator—then the hypothesis stays in limbo until its proponents figure out a test to perform. At some point a probability can be assigned to it, but not yet.
Either way, if you’re using “likeliness” to mean “probability” then it seems to me that testable hypotheses are “more likely” than untestable ones—insofar as we assign a probability to one and assign no probability to the other. If Bayes’ Theorem keeps returning “undefined”, you’re doing it wrong.
Not all untestable-in-principle hypotheses are meaningless. And you can’t refuse to assign a probability to a meaningful hypothesis; you can only pretend not to assign one, and then assign probabilities anyway each time a decision or question makes the hypothesis relevant, and those probabilities will differ from context to context for no principled reason.
If I correctly understand the distinction you’re making between “untestable” and “meaningless”, then the hypothesis “God rewards Christians with Heaven and everyone else goes to Hell” is untestable but not meaningless, correct?
I don’t bother to work Bayes’ Theorem on untestable hypotheses, simply because there are an infinite number of untestable hypotheses and I don’t have time to formally do math on them all. This is more or less equivalent to assigning them zero probability.
I stand by my claim that it’s improper to say that an untestable hypothesis is “more likely” or “less likely” than a testable hypothesis, or than another untestable one. That people are known to assign arbitrary probabilities to untestable hypotheses doesn’t make doing so good or useful.
Yes, that’s right. But in the evpsych context almost all hypotheses are at least meaningful, so we’re drifting off the issue.
If you were unsure of evpsych story X, and you found a way to test it, would your probability for X go up? It shouldn’t, and that’s all I’m saying. The possibility of future evidence is not evidence.
Bayes’ Theorem never returns “undefined”. In the absence of any evidence it returns the prior.
Bayes’ Theorem is undefined if p(X) is undefined.
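A minimal sketch of the disagreement (hypothetical code, not anyone’s actual procedure): with uninformative evidence Bayes’ Theorem simply returns the prior, and with no prior at all it has nothing to return.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) from the prior P(H) and likelihoods P(E|H), P(E|~H)."""
    if prior is None:
        # An undefined p(X) leaves Bayes' Theorem with nothing to update.
        raise ValueError("p(X) is undefined, so the posterior is undefined")
    joint = p_e_given_h * prior
    return joint / (joint + p_e_given_not_h * (1 - prior))

# Uninformative evidence (equal likelihoods): the posterior equals the prior.
assert abs(bayes_update(0.3, 0.5, 0.5) - 0.3) < 1e-12
```

So the theorem only “returns undefined” when it is handed an undefined prior, which is exactly the point in dispute.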
Suppose our untestable-in-principle hypothesis is that undetectable dragons in your garage cause cancer. Then X is “undetectable garage dragon.” As far as I can tell, there is no way to assign a probability to an undetectable dragon.
Please correct me if I’m wrong.
Solomonoff induction. Presumably you agree the probability is less than .1, and once you’ve granted that, we’re “just haggling over the price”.
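To make the Solomonoff-style move concrete, here is a toy sketch only: real Solomonoff induction is uncomputable and sums over all programs, and the description lengths below are invented stand-ins, not measured quantities. The idea is to weight each hypothesis by 2 to the power of minus its description length, so the more complex hypothesis starts out with a far smaller prior.

```python
# Toy simplicity prior: weight each hypothesis by 2**(-description length),
# then normalize. The bit counts are hypothetical placeholders.
hypotheses = {
    "no dragon": 9,
    "undetectable cancer-causing garage dragon": 42,
}
weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}

# The longer (more complex) hypothesis gets a prior well under .1.
assert priors["undetectable cancer-causing garage dragon"] < 0.1
```

Under any such prior the dragon hypothesis gets some definite, very small probability rather than no probability at all, which is the “haggling over the price” point.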
What’s wrong with zero? An undetectable something is redundant and can be eliminated without loss; it has no consequences that the negation of its existence doesn’t also imply. You might as well treat it as impossible—if you don’t like giving zero probabilities, assign it whatever value you use for things-that-can’t-occur.