Have you read An Intuitive Explanation of Bayes’ Theorem, or any of the other explanations of Bayesian reasoning on this site?
I have read them repeatedly, and explained the concepts to others on multiple occasions.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
It ought to prevent you from making errors like this:
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
Assuming he was able to explain them correctly, which I think we have a lot of reason to doubt.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
If you meant those to be topical you’ve got your givens inverted.
It ought to prevent you from making errors like this:
Appeals to authority are always fallacious.
I’m sorry, but no matter how many times you reiterate it, until such time as you can provide a valid justification for the assertion that appeals to authority are not always fallacious, each iteration will be no more correct than the last time.
Appeals to authority are always fallacious. Some things that look like appeals to authority on casual observation turn out, upon closer examination, not to be. Such as the reference to the works of an authority-figure.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
If you meant those to be topical you’ve got your givens inverted.
No. Actually I don’t. Those are base priors that are important when evaluating just how to update when given the new evidence “Elias asserts X”. Have you not just been telling us that you teach others how this mathematics works?
I’m sorry, but no matter how many times you reiterate it, until such time as you can provide a valid justification for the assertion that appeals to authority are not always fallacious, each iteration will be no more correct than the last time.
Why are you saying that? I didn’t just make the assertion again. I pointed to some of the Bayesian reasoning that it translates to.
If you have values for p(Elias asserts X | X is true) and p(Elias asserts X | X is false) that are not equal to each other and you gain the information “Elias asserts X”, your estimate that X is true must change, or your thinking is just wrong. It is simple mathematics. That being the case, supplying the information “Elias asserts X” to someone who already has information about Elias’s expertise is not fallacious. It is supplying information to them that should change their mind.
The above applies regardless of whether Elias has ever written any works on the subject or in any way supplied arguments. It applies even if Elias himself has no idea why he has the intuition “X”. It applies if Elias is a flipping black box that spits out statements. If both you and the person you are speaking with have reason to believe p(Elias asserts X | X is true) > p(Elias asserts X | X is false), then supplying “Elias asserts X” as an argument in favour of X is not fallacious.
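To make the arithmetic concrete, here is a minimal sketch of the update being described, with illustrative numbers that are assumptions for the example rather than anything taken from the thread: when the two likelihoods differ, conditioning on “Elias asserts X” has to move the probability of X; when they are equal, it cannot.

```python
# Minimal sketch: Bayes' theorem applied to the evidence "Elias asserts X".
def update_on_assertion(prior_x, p_assert_given_true, p_assert_given_false):
    """Return p(X is true | Elias asserts X)."""
    p_assert = (p_assert_given_true * prior_x
                + p_assert_given_false * (1 - prior_x))
    return p_assert_given_true * prior_x / p_assert

# Illustrative numbers only (assumed for this sketch):
prior = 0.5
print(update_on_assertion(prior, 0.8, 0.2))  # 0.8: the estimate must move up
print(update_on_assertion(prior, 0.5, 0.5))  # 0.5: equal likelihoods, no update
```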
No. Actually I don’t. Those are base priors that are important when evaluating just how to update when given the new evidence “Elias asserts X”. Have you not just been telling us that you teach others how this mathematics works?
Alright, then, but you’re not going to like the answer: the probabilities in both cases are equal, barring some other influencing factor, such as Elias-the-Expert having justification for his assertion, or a direct means of observing X.
Unfortunately, this tells us exactly nothing for our current discussion.
It applies even if Elias himself has no idea why he has the intuition “X”.
This is the Blind Oracle argument. You are not going to find it persuasive.
It applies if Elias is a flipping black box that spits out statements.
It applies if Elias is a coin that flips over and over again and has come up heads a hundred times in a row and tails twenty times before that, therefore allowing us to assert that the probability that the coin will come up heads is five times greater than the probability it will come up tails.
This, of course, is an erroneous conclusion, and represents a modal failure of the use of Bayesian belief-networks as assertions of matters of truth as opposed to merely being inductive beliefs.
And before you attempt to assert that fundamentally no truths are “knowable”, and that all we have are “beliefs of varying degrees of confidence”, let me just say that this, too, is a modal failure. But a subtle one: it is an assertion that because all we have is the map, there is no territory.
That is false—and there is a corollary as a result of that fact: the territory has a measurable impact on the map. And always will.
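A quick check of the arithmetic in the coin example above, leaving aside the dispute over what such a frequency estimate licenses: with 100 observed heads and 20 earlier tails out of 120 recorded flips, the stated ratio is

```latex
P(\text{heads}) \approx \frac{100}{120} = \frac{5}{6}, \qquad
P(\text{tails}) \approx \frac{20}{120} = \frac{1}{6}, \qquad
\frac{5/6}{1/6} = 5.
```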
And before you attempt to assert that fundamentally no truths are “knowable”, and that all we have are “beliefs of varying degrees of confidence”, let me just say that this, too, is a modal failure. But a subtle one: it is an assertion that because all we have is the map, there is no territory.
Is there anyone else here who would have predicted that I was about to say anything remotely like this? Why on earth would I be about to do that? It doesn’t seem relevant to the topic at hand, necessary for appeals to experts to be a source of evidence, or even particularly coherent as a philosophy.
I am only slightly more likely to have replied with that argument than a monkey would have been if it pressed random letters on a keyboard—and then primarily because it was, if nothing else, grammatically well formed.
The patient is very, very confused. I think that this post allows us to finally offer the diagnosis: he is qualitatively confused.
Quote:
The only error that’s occurring here is your continued belief that beliefs are relevant to this conversation. They simply aren’t. We’re not discussing “what should you believe”—we are discussing “what should you hold to be true.”
Attempts to surgically remove this malignant belief from the patient have commenced.
Adding to wedrifid’s comment: Logos01, you seem to be committing a fallacy of gray here. Saying that we don’t have absolute certainty is not at all the same as saying that there is no territory; that is assuredly not what Bayesian epistemology is about.
No, wedrifid is not committing one, you are. Here’s why:
He is not arguing that reality is subjective or anything like that. See his comment. On the other hand, your argument seems to be that we need to have some kind of surefire knowledge in epistemology because Bayesian probabilities are insufficient. Why does an epistemology’s lack of absolute certainty mean that said epistemology is not good enough?
But it most assuredly does color how Bayesian beliefs are formed.
What have you read or seen that makes you think this is the case?
I wonder if the word “belief” is causing problems here. Everyone in this thread is using the term to mean “statement or piece of knowledge to which you assign some level of truth value.” Saying “I believe X” is the same as saying “I think X is probably true.” Is this also how you’re using the term?
Without commenting on the specifics here, I object to this formulation because it’s possible both or neither are committing (particular fallacy), and it’s even more likely that of two parties, each is committing a different fallacy.
Without commenting on the specifics here, I object to this formulation because it’s possible both or neither are committing (particular fallacy), and it’s even more likely that of two parties, each is committing a different fallacy.
The formulation is fine. It is just two independent claims. It does not mean “wedrifid is not making an error because you are”.
The subject isn’t “an error”, it’s “the fallacy of gray”.
I agree “No, wedrifid is not committing an error, you are,” hardly implies “wedrifid is not making an error because you are”.
“No, wedrifid is not committing the fallacy of gray, you are,” much more implies “wedrifid is not committing the fallacy of gray because you are” when wedrifid’s and Logos01’s statements are in direct conflict.
The subject isn’t “an error”, it’s “the fallacy of gray”.
Why on earth does it matter that I referred to the general case? We’re discussing the implication of the general formulation, which I hope you don’t consider to be a special case that only applies to “The Fallacy of The Grey”. But if we are going to be sticklers then we should use the actual formulation you object to:
No, wedrifid is not committing one, you are.
“No, wedrifid is not committing the fallacy of gray, you are,” much more implies “wedrifid is not committing the fallacy of gray because you are” when wedrifid’s and Logos01’s statements are in direct conflict.
Not especially. This sort of thing is going to be said most often regarding statements in direct conflict. There is no more relationship implied between the two than can be expected for any two claims being mentioned in the same sentence or paragraph.
The fact that it would be so easy to write “because” but they didn’t is also evidence against any assertion of a link. There is a limit to how much you can blame a writer for the theoretical possibility that a reader could make a blatant error of comprehension.
I didn’t really mean “because” in that sense, for two reasons. The first is that it is the observer’s knowledge that is caused, not the party’s error, and the other is that the causation implied goes the other way. Not “because” as if the first party’s non-error caused the second’s error, but that the observer can tell that the second party is in error because the observer can see that the first isn’t committing the particular error.
a special case...statements in direct conflict
Not special, but there is a sliding scale.
Compare:
“No, wedrifid is not committing the Prosecutor’s fallacy, you are.” Whether or not one party is committing this fallacy has basically nothing to do with whether or not the other is. So I interpret this statement to probably mean: “You are wrong when you claim that wedrifid is committing the Prosecutor’s fallacy. Also, you are committing the Prosecutor’s fallacy.”
“No, wedrifid is not reversing cause and effect, you are.” Knowing that one party is not reversing cause and effect is enough to know someone accusing that party of doing so is likely doing so him or herself! So I interpret this statement to probably mean: “Because I see that wedrifid is not reversing cause and effect, I conclude that you are.”
The fallacy of gray is in between the two above examples.
“Fallacy of The Grey” returns two google hits referring to the fallacy of gray, one from this site (and zero hits for “fallacy of the gray”).
He is not arguing that reality is subjective or anything like that.
I didn’t say he was.
On the other hand, your argument seems to be that we need to have some kind of surefire knowledge in epistemology because Bayesian probabilities are insufficient.
No. I was saying that Bayesian probabilistic-belief methodologies are effective at generating maps but they say almost nothing about how those maps correlate to the territory. And that it is, basically, possible to make those assertions. The practices are not the same, and that is the key difference.
What have you read or seen that makes you think this is the case?
It is fundamental to the nature of Bayesian belief-networks that they always assert statements in the form of probabilities. It is impossible to state a Bayesian belief except in the form of a probability.
From this there is a necessary conclusion.
Everyone in this thread is using the term to mean “statement or piece of knowledge to which you assign some level of truth value.” Saying “I believe X” is the same as saying “I think X is probably true.” Is this also how you’re using the term?
No. There is a difference categorically between the position, “I believe X is probably true” and “I believe X is true.”
I was saying that Bayesian probabilistic-belief methodologies are effective at generating maps but they say almost nothing about how those maps correlate to the territory.
It is fundamental to the nature of Bayesian belief-networks that they always assert statements in the form of probabilities. It is impossible to state a Bayesian belief except in the form of a probability.
Right. So why do you think this is insufficient for making maps that correlate to the territory? What assertions do you want to make about the territory that are not captured by this model?
No. There is a difference categorically between the position, “I believe X is probably true” and “I believe X is true.”
Right, but on LW “I believe X” is generally meant as the former, not the latter. This is probably part of the reason for all of the confusion and disagreement in this thread.
Alright, then, but you’re not going to like the answer: the probabilities in both cases are equal, barring some other influencing factor, such as Elias-the-Expert having justification for his assertion, or a direct means of observing X.
What? Of course Elias has some reason for believing what he believes. “Expert” doesn’t mean “someone who magically just knows stuff”. Somewhere along the line the operation of physics has resulted in the bunch of particles called “Elias” being configured in such a way that utterances by Elias about things like X are more likely to be true than false. This means that p(Elias asserts X | X is true) and p(Elias asserts X | X is false) are most certainly not equal. Claiming that they must be equal is just really peculiar.
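The same point can be put in odds form, as a standard restatement rather than anything quoted from the thread: the assertion multiplies the prior odds on X by the ratio of the two likelihoods above, so the posterior moves exactly when that ratio differs from one.

```latex
\frac{P(X \mid \text{Elias asserts } X)}{P(\lnot X \mid \text{Elias asserts } X)}
  = \frac{P(\text{Elias asserts } X \mid X)}{P(\text{Elias asserts } X \mid \lnot X)}
    \cdot \frac{P(X)}{P(\lnot X)}
```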
This is the Blind Oracle argument.
This isn’t to do with blind oracles. It’s to do with trivial application of probability or rudimentary logic.
You are not going to find it persuasive.
In your interactions with people here I haven’t observed p(Logos01 is persuaded by X | X is sound reasoning) to be especially high. As such I cannot be expected to consider “Logos01 is not persuaded by something” to give much information at all about the soundness of a claim.
What? Of course Elias has some reason for believing what he believes.
Unless those reasons are justified—which we cannot know without knowing them—they cannot be held to be justifiable statements.
This is tautological.
Claiming that they must be equal is just really peculiar.
Not at all. You simply aren’t grasping why it is so. This is because you are thinking in terms of predictions and not in terms of concrete instances. To you, these are one-and-the-same, as you are used to thinking in the Bayesian probabilistic-belief manner.
I am telling you that this is an instance where that manner is flawed.
In your interactions with people here I haven’t observed p(Logos01 is persuaded by X | X is sound reasoning) to be especially high.
What you hold to be sound reasoning and what actually is sound reasoning are not equivalent.
As such I cannot be expected to consider “Logos01 is not persuaded by something” to give much information at all about the soundness of a claim.
If I had meant to imply that conclusion I would have phrased it so.
Adding to wedrifid’s comment: Logos01, you seem to be committing a fallacy of gray here. Saying that we don’t have absolute certainty is not at all the same as saying that there is no territory; that is assuredly not what Bayesian epistemology is about.
Objecting to one, actually.
Not what it should be about, no. But it most assuredly does color how Bayesian beliefs are formed.
Without commenting on the specifics here, I object to this formulation because it’s possible both or neither are committing (particular fallacy), and it’s even more likely that of two parties, each is committing a different fallacy.
I didn’t mean to assert that it was an exclusive or, but I see how my wording implies that. Point taken and I’ll try to be more precise in the future.
“Fallacy of The Grey” returns two google hits referring to the fallacy of gray, one from this site (and zero hits for “fallacy of the gray”).
This is actually the Fallacy of The Grey.
No. There is a difference categorically between the position, “I believe X is probably true” and “I believe X is true.”
What does this mean?