Can someone link to a discussion, or clear up a small misconception for me?
We know P(A & B) < P(A). So if you add details to a story, it becomes less plausible. Even though people are more likely to believe it.
However, if I do an experiment and measure something which is implied by A&B, then I would think “A&B becomes more plausible than A”, because A is more vague than A&B.
But this seems to be a contradiction.
I suppose, to me, adding more details to a story makes the story more plausible if those details imply the evidence. Sin(x) is an analytic function. If I know a complex differentiable function has roots at all multiples of pi, saying the constraint is satisfied by sin is more plausible than saying it’s satisfied by some analytic function.
I think...I’m screwing up the semantics, since sin is an analytic function. But this seems to me to be missing the point.
I read A Technical Explanation of Technical Explanation, so I know specific theories are better than vague theories (provided the evidence is specific). I guess I’m asking for clarification on how this is formally consistent with P(A) > P(A&B).
A&B gains more evidence than A from the experiment. It doesn’t (and can’t) become more probable than A.
Let’s have an example. Someone is flipping a coin repeatedly. The coin is either a fair one or a weighted one that comes up heads 3x as often as tails. (A = “coin is weighted in this way”.) The person doing the flipping might be honest, or might be reporting half the tails she flips (i.e., each one randomly with p=1/2) as heads. (B = “person is cheating in this way”.)
Let’s say that ahead of time you think A and B independently have probability 1⁄10.
Your experiment consists of getting the (alleged) results of a single coin flip, which you’re told was heads.
So. Beforehand the probability of A was 1⁄10 and that of A&B was 1⁄100.
The probability of your observed results is: 1⁄2 under (not-A, not-B); 3⁄4 under (not-A, B); 3⁄4 under (A, not-B); and 7⁄8 under (A, B).
So the posterior probabilities for the four possibilities are proportional to (81:9:9:1) times (4:6:6:7); that is, to (324:54:54:7). Which means the probability of A has gone up from 10% to about 14%, and the probability of A&B from 1% to about 1.6%.
So you’ve got more evidence for A&B than for A, which translates (more or less) to a larger relative gain in probability for A&B than for A. But A&B is still less likely.
If you repeat the experiment and keep getting heads, then A&B will always improve more than A alone. But the way this works is that after a long time almost all the probability of A comes from the case where A&B, so that A&B’s advantage in increase-in-probability gradually goes away.
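For anyone who wants to check the arithmetic, here is a short Python sketch of the same update; the hypothesis labels and dictionary layout are mine, but the priors and likelihoods are exactly the ones above.

```python
# Four hypotheses about (coin, reporter); priors from independent
# P(A) = P(B) = 0.1, likelihoods are P(reported heads | hypothesis).
priors = {
    ("not-A", "not-B"): 0.9 * 0.9,  # fair coin, honest reporting
    ("not-A", "B"):     0.9 * 0.1,  # fair coin, half of tails relabeled
    ("A", "not-B"):     0.1 * 0.9,  # weighted coin, honest reporting
    ("A", "B"):         0.1 * 0.1,  # weighted coin and relabeling
}
likelihoods = {
    ("not-A", "not-B"): 1 / 2,
    ("not-A", "B"):     3 / 4,  # 1/2 + (1/2 tails) * (1/2 relabeled)
    ("A", "not-B"):     3 / 4,
    ("A", "B"):         7 / 8,  # 3/4 + (1/4 tails) * (1/2 relabeled)
}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

p_A  = posterior[("A", "not-B")] + posterior[("A", "B")]
p_AB = posterior[("A", "B")]
print(f"P(A | heads)   = {p_A:.3f}")   # ~0.139, up from 0.100
print(f"P(A&B | heads) = {p_AB:.4f}")  # ~0.0159, up from 0.0100
```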
So plausibility isn’t the only dimension for assessing how “good” a belief is.
A or not A is a certainty. I’m trying to formally understand why that statement tells me nothing about anything.
The motivating practical problem came from this question:
“guess the rule governing the following sequence”
11, 31, 41, 61, 71, 101, 131, …
I cried, “Ah the sequence is increasing!” With pride I looked into the back of the book and found the answer “primes ending in 1”.
I’m trying to home in on what I did wrong.
If I had said instead, “the sequence is a list of numbers”—that would be stupider, but well in line with my previous logic.
My first attempt at explaining my mistake was to argue that “it’s an increasing sequence” was actually less plausible than the real answer, since the real answer was making a much riskier claim. I think one can argue this without contradiction (the rule is either vague or specific, not both).
However, it’s often easy to show whether some infinite product is analytic. Making the jump that the product evaluates to sin, in particular, requires more evidence. But in some qualitative sense, establishing that latter goal is much better. My guess was that establishing the equivalence is a more specific claim, making it more valuable.
In my attempt to formalize this, I tried to show that this extra value was reflected in the probabilities. That is clearly false.
What should I read to understand this problem more formally, or more precisely? Should I look up formal definitions of evidence?
“S is an increasing sequence” is a less specific hypothesis than “S consists of all prime numbers whose decimal representations end in 1, in increasing order”. But “The only constraint governing the generation of S was that it had to be an increasing sequence” is not a less specific hypothesis than “The only constraint governing the generation of S was that it had to consist of primes ending in 1, in increasing order”.
If given a question of the form “guess the rule governing such-and-such a sequence”, I would expect the intended answer to be one that uniquely identifies the sequence. So I’d give “the numbers are increasing” a much lower probability than “the numbers are the primes ending in 1, in increasing order”. (Recall, again, that the propositions whose probabilities we’re evaluating aren’t the things in quotation marks there; they’re “the rule is: the numbers are increasing” and “the rule is: the numbers are the primes (etc.)”.)
Moving back to your question about analytic functions: Yes, more specific hypotheses may be more useful when true, and that might be a good reason to put effort into testing them rather than less specific, less useful hypotheses. But (as I think you appreciate) that doesn’t make any difference to the probabilities.
The subject concerned with the interplay between probabilities, preferences and actions is called decision theory; you might or might not find it worth looking up.
I think there’s some philosophical literature on questions like “what makes a good explanation?” (where a high probability for the alleged explanation is certainly a virtue, but not the only one); that seems directly relevant to your questions, but I’m afraid I’m not the right person to tell you who to read or what the best books or papers are. I’ll hazard a guess that well over 90% of philosophical work on the topic has close to zero (or even negative) value, but I’m making that guess on general principles rather than as a result of surveying the literature in this area. You might start with the Stanford Encyclopedia of Philosophy but I’ve no more than glanced at that article.
The motivating practical problem came from this question:
“guess the rule governing the following sequence” 11, 31, 41, 61, 71, 101, 131, …
I cried, “Ah the sequence is increasing!” With pride I looked into the back of the book and found the answer “primes ending in 1”.
I’m trying to home in on what I did wrong.
If I had said instead, “the sequence is a list of numbers”—that would be stupider, but well in line with my previous logic.
My first attempt at explaining my mistake was to argue that “it’s an increasing sequence” was actually less plausible than the real answer, since the real answer was making a much riskier claim. I think one can argue this without contradiction (the rule is either vague or specific, not both).
I think of it in terms of making a $100 bet.
So you have the sequence S: 11, 31, 41, 61, 71, 101, 131.
A: the “bet” (i.e. hypothesis) that the sequence is an increasing sequence of primes ending in 1. There are very few sequences (below the number 150) you can write that are increasing sequences of primes ending in 1, so your “bet” is to go all in.
B: the “bet” that the sequence is increasing. But a “sequence that’s increasing” spreads more of its money around, so it’s not a very confident bet. Why does it spread more of its money around?
If we introduced a second sequence X: 14, 32, 42, 76, 96, 110, 125
You can still see that B can account for this sequence as well, whereas A cannot. So B has to at least spread its betting money between the two sequences presented, S and X, just in case either of those is the answer in the back of the book. In reality there is an untold number of sequences that B can account for besides the two here, meaning that B has to spread its betting money across all of those sequences if B wants to “win” by “correctly guessing” what the answer was in the back of the book. This is what makes it a bad bet: a hypothesis that is too general.
This is a simple mathematical way you can compare the two “bets” via conditional probabilities (each bet’s money is the probability it assigns to the possible sequences):
Pr(S | B) + Pr(X | B) + Pr(?? | B) = 1.00, and Pr(S | A) + Pr(X | A) + Pr(?? | A) = 1.00
Pr(S | A) is already all in because the A bet only fits something that looks like S. Pr(S | B) is less than all in because Pr(X | B) is also a possibility, as is any other increasing sequence, Pr(??? | B). This is a fancy way of saying that the strength of a hypothesis lies in what it can’t explain, not what it can; ask not what your hypothesis predicts, but what it excludes.
Going by what each bet excludes, you can see that Pr(?? | A) < Pr(?? | B) for the typical leftover sequence, even if we don’t have any hard and fast numbers for them. While there is a limited number of 7-number patterns below 150 that are increasing, this is a much larger set than the number of 7-number patterns below 150 that are increasing primes ending in 1.
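To make that size gap concrete, here is a quick Python count under the same assumptions (7-term sequences, numbers below 150; the helper function is my own):

```python
# How many sequences must each bet spread its money over?
from math import comb

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Any strictly increasing 7-term sequence from 1..149 is just a choice
# of 7 distinct numbers, so bet B covers comb(149, 7) sequences.
increasing = comb(149, 7)

# Bet A only covers increasing 7-term sequences of primes ending in 1.
primes_ending_1 = [p for p in range(2, 150) if is_prime(p) and p % 10 == 1]
specific = comb(len(primes_ending_1), 7)

print(increasing)        # 280384608504 -- roughly 2.8e11 sequences
print(primes_ending_1)   # [11, 31, 41, 61, 71, 101, 131]
print(specific)          # 1 -- bet A really can go all in on S
```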
A&B cannot be more probable than A, but evidence may support A&B more than it supports A.
For example, suppose you have independent prior probabilities of 1⁄2 for A and for B. The prior probability of A&B is 1⁄4. If you are then told “A iff B,” the probability for A does not change but the probability of A&B goes up to 1⁄2.
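Here is a quick Python check of that update; the world-enumeration style is mine, but the numbers are the ones above:

```python
# Four equally likely worlds for (A, B); condition on "A iff B".
worlds = [(a, b) for a in (0, 1) for b in (0, 1)]   # each has prior 1/4
kept = [(a, b) for (a, b) in worlds if a == b]      # worlds where A iff B
p = 1 / len(kept)                                   # renormalize: each 1/2

p_A  = sum(p for (a, b) in kept if a == 1)          # 0.5 -- unchanged
p_AB = sum(p for (a, b) in kept if a and b)         # 0.5 -- up from 0.25
print(p_A, p_AB)
```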
The reason specific theories are better is not that they are more plausible, but that they contain more useful information.
A more specific explanation is better than a general explanation in the scientific sense exactly because it is more easily falsifiable. Your sentence
If I know a complex differentiable function has roots at all multiples of pi, saying the constraint is satisfied by sin is more plausible than saying it’s satisfied by some analytic function.
is completely wrong, as the set containing only the sine function is most certainly contained in the set of all analytic functions, making it more plausible that “some analytic function has roots at all multiples of pi” than to say the same of sine specifically, assuming we do not already know a great deal about sine.
However, if I do an experiment and measure something which is implied by A&B, then I would think “A&B becomes more plausible than A”, because A is more vague than A&B.
Plainly and simply, no. If evidence E implies A and B, formally E → A&B, then separately E → A and E → B are true, increasing the probability of both separately, making your conclusion invalid.
If A, B, C are binary, values of A and B are drawn from independent fair coins, and C = A XOR B, then measuring C = 1 constrains (A, B) to be either { 0, 1 } or { 1, 0 }, but does not constrain A alone at all.
Before we conditioned on C=1, all values of the joint variable (A, B) had probability 0.25, and both values of the single variable A had probability 0.5. After we conditioned on C=1, the values { 0, 1 } and { 1, 0 } of (A, B) assume probability 0.5, the values { 0, 0 } and { 1, 1 } assume probability 0, and the values of the single variable A remain at probability 0.5.
By conditioning on C=1, you learn more about the joint variable (A, B) than about the single variable A (because your posterior for (A, B) changed, but your posterior for A did not), but that is not the same thing as the joint variable (A, B) being more plausible than the single variable A. In fact, it is still the case that p(A & B | C) ≤ p(A | C) for all values of A, B.
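A brute-force check in Python shows the same thing (the dictionary bookkeeping is my own; the setup is exactly the XOR example above):

```python
from itertools import product

# Independent fair coins: every (A, B) pair has prior probability 1/4.
prior = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

# Condition on C = A XOR B = 1: keep consistent worlds, renormalize.
consistent = {ab: p for ab, p in prior.items() if ab[0] ^ ab[1] == 1}
z = sum(consistent.values())
posterior = {ab: p / z for ab, p in consistent.items()}

print(posterior)   # {(0, 1): 0.5, (1, 0): 0.5} -- joint sharpened
p_A1 = sum(p for (a, _), p in posterior.items() if a == 1)
print(p_A1)        # 0.5 -- marginal of A unchanged
```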
edit: others below said the same, and often better.
I think we tend to intuitively “normalize” the likelihood of a complex statement. Our prior is probably Kolmogorov complexity, so if A is a 2-bit statement and B is a 3-bit statement, we would “expect” the probabilities to be P(A)=1/4, P(B)=1/8, P(A&B)=1/32. If our evidence leads us to adjust to say P(A)=1/3, P(A&B)=1/4, then while A&B is still less likely than A, there is some sense in which A&B is “higher above baseline”.
Coming from the other end, predictions, this sort of makes sense. Theories that are more specific are more useful. If we have a theory that this sequence consists of odd numbers, that lets us make some prediction about the next number. If our theory is that the numbers are all primes, we can make a more specific, and therefore more useful, prediction about the next number. So even though the theory that the sequence is odd is more likely than the theory that the sequence is prime, the latter is more useful. I think that’s where the idea that specific theories are better than vague theories comes from.
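As a toy Python illustration of “more specific theory, sharper prediction”, compare the candidate next terms after 131 under the two theories (the cutoff of 150 is my own arbitrary choice):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Candidate next terms after 131, up to an (arbitrary) cutoff of 150.
odd_theory   = [n for n in range(132, 151) if n % 2 == 1]
prime_theory = [n for n in range(132, 151) if is_prime(n)]

print(odd_theory)    # 9 candidates: 133, 135, ..., 149
print(prime_theory)  # 3 candidates: 137, 139, 149 -- a sharper prediction
```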
I’m guessing that the rule P(A & B) < P(A) is for independent variables (though it’s actually more accurate to say P(A & B) ≤ P(A)). If you have dependent variables, then you use Bayes’ theorem to update. P(A & B) is different from P(A | B). P(A & B) ≤ P(A) is always true, but not so for P(A | B) vs. P(A).
This is probably an incomplete or inadequate explanation, though. I think there was a thread about this a long time ago, but I can’t find it. My Google-fu is not that strong.
We know P(A & B) < P(A). So if you add details to a story, it becomes less plausible.
Not so. Stories usually are considerably more complicated than can be represented as the ANDing of propositions.
A simple example: Someone tells me that she read my email to Alice, let’s say I think that’s X% plausible. But then she adds details: she says that the email mentioned a particular cafe. This additional detail makes the plausibility of this story skyrocket (since I do know that the email did mention that cafe).
So maybe it’s worth saying explicitly what’s going on here: You’re comparing probabilities conditional on different information.
A = “Beth read my email to Alice”.
B = “Beth knows that my email to Alice mentioned the Dead Badger Cafe”.
I = “Beth told me she read my email to Alice”.
J = “Beth told me my email to Alice mentioned the Dead Badger Cafe”.
Now P(A&B|I) < P(A|I), and P(A&B|I&J) < P(A|I&J), but P(A&B|I&J) > P(A|I).
So there’s no contradiction; there’s nothing wrong with applying probabilities; but if you aren’t careful you can get confused. (For the avoidance of doubt, I am not claiming that Lumifer is or was confused.)
And, yes, I bet this sort of conditional-probability structure is an important part of why we find stories more plausible when they contain lots of details. Unfortunately, the way our brains apply this heuristic is far from perfect, and in particular it works even when we can’t or won’t check the details and we know that the person telling us the story knows this. So it leads us astray when we are faced with people who are unscrupulous and good at lying.
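To see all three inequalities with concrete numbers, here is a small Python model; every probability in it is invented for illustration, and only the structure (updating on J on top of I) comes from the setup above:

```python
# All numbers below are made up; everything is implicitly
# conditioned on I (Beth claims she read the email).
p_A            = 0.5    # P(A | I): she really did read it
p_B_given_A    = 0.9    # if she read it, she knows the cafe detail
p_B_given_notA = 0.05   # otherwise she might know it some other way
p_J_given_B    = 0.8    # if she knows the detail, she mentions it
p_J_given_notB = 0.02   # otherwise mentioning it is a lucky guess

# Joint distribution over (A, B), given I.
joint = {
    (1, 1): p_A * p_B_given_A,
    (1, 0): p_A * (1 - p_B_given_A),
    (0, 1): (1 - p_A) * p_B_given_notA,
    (0, 0): (1 - p_A) * (1 - p_B_given_notA),
}

# Condition on J (she mentioned the Dead Badger Cafe) and renormalize.
def lik_J(b):
    return p_J_given_B if b else p_J_given_notB

total = sum(p * lik_J(b) for (a, b), p in joint.items())
post = {(a, b): p * lik_J(b) / total for (a, b), p in joint.items()}

print(joint[(1, 1)])                # P(A&B | I)   = 0.45 < P(A | I) = 0.5
print(post[(1, 1)])                 # P(A&B | I&J) ~ 0.92 > P(A | I) = 0.5
print(post[(1, 1)] + post[(1, 0)])  # P(A | I&J)   ~ 0.92, >= P(A&B | I&J)
```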
Um. I was just making a point that “we know P(A & B) ≤ P(A)” is a true statement coming from math logic, while “if you add details to a story, it becomes less plausible” is a false statement coming from human interaction.
Not sure about your unrolling of the probabilities since P(B|A) = 1 which makes A and B essentially the same. If you want to express the whole thing in math logic terms you need notation as to who knows what.
[...] is a true statement coming from math logic, [...] is a false statement coming from human interaction
My reading of polymer’s statement is that he wasn’t using “plausible” as a psychological term, but as a rough synonym for “probable”. (polymer, if you’re reading: Was I right?)
P(B|A) = 1 which makes A and B essentially the same
No, P(B|A) is a little less than 1 because Beth might have read the email carelessly, or forgotten bits of it.
[EDITED to add: If whoever downvoted this would care to explain what they found objectionable about it, I’d have more chance of fixing it. It looks obviously innocuous to me even on rereading. Thanks!]
“if you add details to a story, it becomes less plausible” is a false statement coming from human interaction.
I don’t care whether it’s false as a “human interaction”. I care whether the idea can be modeled by probabilities.
Is my usage of the word plausible in this way really that confusing? I’d like to know why… Probable, likely, credible, plausible, are all (rough) synonyms to me.