No. What you should do is ask for a justification of the belief. If you do not have the resources available to you to do so, you can fail-over to the trust system and simply accept the physicist’s statement unexamined—but utilization of the trust-system is an admission of failure to have justified beliefs.
If you want to increase the reliability of your probability estimate, you should ask for a justification. But if you do not increase your probability estimate contingent on the physicist’s claim until you receive information on how he established that belief, then you are mistreating evidence. You don’t treat his claim as evidence in addition to the evidence on which it was conditioned; you treat it as evidence of the evidence on which it was conditioned. Once you know the physicist’s belief, you cannot expect to raise your confidence in that belief upon receiving information on how he came to that conclusion. You should assign weight to his statement according to how much evidence you would expect a physicist in his position to have if he were making such a statement, and then when you learn what evidence he has you shift upwards or downwards depending on how the evidence compares to your expectation. If you revised upwards on the basis of the physicist’s say-so, and then revised further upwards based on his having about as much evidence as you would expect, that would be double-counting evidence, but if you do not revise upwards based on the physicist’s claim in the first place, that would be assuming zero correlation of his statement with reality.
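A minimal numeric sketch of that point, in Python, with entirely made-up numbers (the two evidence categories, their probabilities, and the conditional posteriors are illustrative assumptions, not anything from the discussion above):

    # Suppose the physicist asserts X. You think he could be basing the assertion
    # on one of two kinds of evidence. For each kind: how likely you think it is
    # that this is what he has, given that he asserted X, and the posterior P(X)
    # you would hold if you saw that evidence yourself (starting from a 0.5 prior).
    evidence = {
        "weak result":   {"p_given_assertion": 0.6, "posterior_if_seen": 0.75},
        "strong result": {"p_given_assertion": 0.4, "posterior_if_seen": 0.95},
    }

    # Assuming the assertion tells you nothing beyond the evidence behind it,
    # your posterior on hearing the bare assertion is the expectation of the
    # evidence-conditional posteriors:
    p_x_given_assertion = sum(e["p_given_assertion"] * e["posterior_if_seen"]
                              for e in evidence.values())
    print(p_x_given_assertion)  # 0.83

    # Learning which evidence he actually has then moves you up to 0.95 or down
    # to 0.75, but the probability-weighted average of those moves is zero. That
    # is why revising up on the assertion and then up again on "about as much
    # evidence as expected" would double-count.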
Justification of belief cannot be “A person who usually is right in this field claims this is so” but can be “A person who I have reason to believe would have evidence on this matter has related to me his assessment of said evidence.”
You do not need the person to relate their assessment of the evidence to revise your belief upward based on their statement, you only need to believe that it is more likely that they would make the claim if it were true than if it were not.
The difference here is between having a buddy who is a football buff tell you what the Sportington Sports beat the Homeland Highlanders by last night—even though you don’t know whether he had access to a means of obtaining said information—and having the friend you know watched the game tell you the score.
Anything that is more likely if a belief is true than if it is false is evidence which should increase your probability estimate of that belief. Have you read An Intuitive Explanation of Bayes’ Theorem, or any of the other explanations of Bayesian reasoning on this site?
If you have a buddy who is a football buff who tells you that the Sportington Sports beat the Homeland Highlanders last night, then you should treat this as evidence that the Sportington Sports won, weighted according to your estimate of how likely his claim is to correlate with reality. If you know that he watched the game, you’re justified in assuming a very high correlation with reality (although you also have to condition your estimate on information aside from whether he is likely to know, such as how likely he is to lie.) If you do not know that he watched the game last night, you will have a different estimate of the strength of his claim’s correlation with reality.
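A sketch of that weighting in Python, using the odds form of Bayes’ theorem; the reliability numbers are purely illustrative assumptions:

    def update(prior, p_claim_if_true, p_claim_if_false):
        """Posterior P(Sportington won) after hearing the claim that they won."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * (p_claim_if_true / p_claim_if_false)
        return posterior_odds / (1 + posterior_odds)

    prior = 0.5  # no opinion about the game beforehand

    # Friend you know watched the game, and who rarely lies or misremembers:
    print(update(prior, p_claim_if_true=0.95, p_claim_if_false=0.02))  # ~0.98

    # Football buff whose source you don't know -- weaker, but still above 0.5:
    print(update(prior, p_claim_if_true=0.70, p_claim_if_false=0.30))  # 0.70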
Have you read An Intuitive Explanation of Bayes’ Theorem, or any of the other explanations of Bayesian reasoning on this site?
I have read them repeatedly, and explained the concepts to others on multiple occasions.
If you have a buddy who is a football buff who tells you that the Sportington Sports beat the Homeland Highlanders last night, then you should treat this as evidence that the Sportington Sports won,
Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source—it cannot have a correlation to reality if there is no vehicle through which the information he claims to have reached him other than his own imagination, however accurate that imagination might be.
You do not need the person to relate their assessment of the evidence to revise your belief upward based on their statement, you only need to believe that it is more likely that they would make the claim if it were true than if it were not.
Which requires a reason to believe that to be the case. Which in turn requires that you have a means of corroborating their claim in some manner; the least-sufficient of which being that they can relate observations that correlate to their claim, in the case of experts that is.
If you want to increase the reliability of your probability estimate, you should ask for a justification.
A probability estimate without reliability is no estimate. Revising beliefs based on unreliable information is unsound. Experts’ claims which cannot be corroborated are unsound information, and should have no weighting on your estimate of beliefs solely based on their source.
If an expert’s claims are frequently true, then it can become habitual to trust them without examination. However, trusting individuals rather than examining statements is an example of a necessary but broken heuristic. We find the risk of being wrong in such situations acceptable because the expected utility cost of being wrong in any given situation, as an aggregate, is far less than the expected utility cost of having to actually investigate all such claims.
Further, the more such claims fall in line with our own priors—that is, the less ‘extraordinary’ the claims appear to be to us—the more likely we are to not require proper evidence.
The trouble is, this is a failed system. While it might be perfectly rational—instrumentally—it is not a means of properly arriving at true beliefs.
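A toy version of that expected-utility comparison, with every quantity invented purely for illustration:

    # Expected cost of simply trusting an expert claim vs. checking it yourself.
    p_claim_wrong       = 0.02   # assumed chance the trusted claim is false
    cost_if_wrong       = 50.0   # assumed harm from acting on a false claim
    cost_to_investigate = 5.0    # assumed effort to verify the claim directly

    expected_cost_of_trusting = p_claim_wrong * cost_if_wrong   # 1.0
    print(expected_cost_of_trusting < cost_to_investigate)      # True: trusting is cheaper on average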
I want to take this opportunity to once again note that what I’m describing in all of this is proper argumentation, not proper instrumentality. There is a difference between the two; and Eliezer’s many works are, as a whole, targeted at instrumental rationality—as is this site itself, in general. Instrumental rationality does not always concern itself with what is true as opposed to what is practically believable. It finds the above-described risk of variance in belief from truth an acceptable risk, when asserting beliefs.
This is an area where “Bayesian rationality” is insufficient—it fails to reliably distinguish between “what I believe” and “what I can confirm is true”. It does this for a number of reasons, one of which being a foundational variance between what a Bayesian asserts a Bayesian network is measuring when it discusses probabilities and what a frequentist asserts is being measured when frequentists discuss probabilities.
I do not fall totally in line with “Bayesian rationality” in this, and various other, topics, for exactly this reason.
There is a difference between the two; and Eliezer’s many works are, as a whole, targeted at instrumental rationality
What? No they aren’t. They are massively biased towards epistemic rationality. He has written a few posts on instrumental rationality but by and large they tend to be unremarkable. It’s the bulk of epistemic rationality posts that he is known for.
Have you read An Intuitive Explanation of Bayes’ Theorem, or any of the other explanations of Bayesian reasoning on this site?
I have read them repeatedly, and explained the concepts to others on multiple occasions.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
It ought to prevent you from making errors like this:
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
Assuming he was able to explain them correctly, which I think we have a lot of reason to doubt.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
If you meant those to be topical you’ve got your givens inverted.
It ought to prevent you from making errors like this:
Appeals to authority are always fallacious.
I’m sorry, but no matter how many times you reiterate it, until such time as you can provide a valid justification for the assertion that appeals to authority are not always fallacious, each iteration will be no more correct than the last time.
Appeals to authority are always fallacious. Some things that look like appeals to authority to casual observation upon closer examination turn out not to be. Such as the reference to the works of an authority-figure.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
If you meant those to be topical you’ve got your givens inverted.
No. Actually I don’t. Those are the likelihoods that are important when evaluating just how to update when given the new evidence “Elias asserts X”. Have you not just been telling us that you teach others how this mathematics works?
I’m sorry, but no matter how many times you reiterate it, until such time as you can provide a valid justification for the assertion that appeals to authority are not always fallacious, each iteration will be no more correct than the last time.
Why are you saying that? I didn’t just make the assertion again. I pointed to some of the Bayesian reasoning that it translates to.
If you have values for p(Elias asserts X | X is true) and p(Elias asserts X | X is false) that are not equal to each other, and you gain the information “Elias asserts X”, then your estimate that X is true must change or your thinking is just wrong. It is simple mathematics. That being the case, supplying the information “Elias asserts X” to someone who already has information about Elias’s expertise is not fallacious. It is supplying information to them that should change their mind.
The above applies regardless of whether Elias has ever written any works on the subject or in any way supplied arguments. It applies even if Elias himself has no idea why he has the intuition “X”. It applies if Elias is a flipping black box that spits out statements. If both you and the person you are speaking with have reason to believe p(Elias asserts X | X is true) > p(Elias asserts X | X is false) then supplying “Elias asserts X” as an argument in favour of X is not fallacious.
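For reference, the arithmetic being appealed to here is the odds form of Bayes’ theorem applied to the terms above (a restatement, not an addition to the argument):

    \[
    \frac{P(X \mid \text{Elias asserts } X)}{P(\neg X \mid \text{Elias asserts } X)}
      = \frac{P(X)}{P(\neg X)}
        \cdot \frac{P(\text{Elias asserts } X \mid X)}{P(\text{Elias asserts } X \mid \neg X)}
    \]

Whenever the right-hand likelihood ratio is not 1, the posterior odds differ from the prior odds, which is the sense in which the bare assertion counts as evidence even when the mechanism behind it is unknown.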
No. Actually I don’t. Those are the likelihoods that are important when evaluating just how to update when given the new evidence “Elias asserts X”. Have you not just been telling us that you teach others how this mathematics works?
Alright, then, but you’re not going to like the answer: the probabilities in both cases are equal, barring some other influencing factor, such as Elias-the-Expert having justification for his assertion, or a direct means of observing X.
Unfortunately, this tells us exactly nothing for our current discussion.
It applies even if Elias himself has no idea why he has the intuition “X”.
This is the Blind Oracle argument. You are not going to find it persuasive.
It applies if Elias is a flipping black box that spits out statements.
It applies if Elias is a coin that flips over and over again and has come up heads a hundred times in a row and tails twenty times before that, therefore allowing us to assert that the probability that coin will come up heads is five times greater than the probability it will come up tails.
This, of course, is an erroneous conclusion, and represents a modal failure of the use of Bayesian belief-networks as assertions of matters of truth as opposed to merely being inductive beliefs.
And before you attempt to assert that fundamentally no truths are “knowable”, and that all we have are “beliefs of varying degrees of confidence”, let me just say that this, too, is a modal failure. But a subtle one: it is an assertion that because all we have is the map, there is no territory.
That is false—and there is a corollary as a result of that fact: the territory has a measurable impact on the map. And always will.
And before you attempt to assert that fundamentally no truths are “knowable”, and that all we have are “beliefs of varying degrees of confidence”, let me just say that this, too, is a modal failure. But a subtle one: it is an assertion that because all we have is the map, there is no territory.
Is there anyone else here who would have predicted that I was about to say anything remotely like this? Why on earth would I be about to do that? It doesn’t seem relevant to the topic at hand, necessary for appeals to experts to be a source of evidence, or even particularly coherent as a philosophy.
I am only slightly more likely to have replied with that argument than a monkey would have been if it pressed random letters on a keyboard—and then primarily because it was, if nothing else, grammatically well formed.
The patient is very, very confused. I think that this post allows us to finally offer the diagnosis: he is qualitatively confused.
Quote:
The only error that’s occurring here is your continued belief that beliefs are relevant to this conversation. They simply aren’t. We’re not discussing “what should you believe”—we are discussing “what should you hold to be true.”
Attempts to surgically remove this malignant belief from the patient have commenced.
Adding to wedrifid’s comment: Logos01, you seem to be committing a fallacy of gray here. Saying that we don’t have absolute certainty is not at all the same as saying that there is no territory; that is assuredly not what Bayesian epistemology is about.
No, wedrifid is not committing one, you are. Here’s why:
He is not arguing that reality is subjective or anything like that. See his comment. On the other hand, your argument seems to be that we need to have some kind of surefire knowledge in epistemology because Bayesian probabilities are insufficient. Why does an epistemology with a lack of absolute certainty mean that said epistemology is not good enough?
Objecting to one, actually. Not what it should be about, no, but it most assuredly does color how Bayesian beliefs are formed.
What have you read or seen that makes you think this is the case?
I wonder if the word “belief” is causing problems here. Everyone in this thread is using the term to mean “statement or piece of knowledge to which you assign some level of truth value.” Saying “I believe X” is the same as saying “I think X is probably true.” Is this also how you’re using the term?
Without commenting on the specifics here, I object to this formulation because it’s possible both or neither are committing (particular fallacy), and it’s even more likely that of two parties, each is committing a different fallacy.
Without commenting on the specifics here, I object to this formulation because it’s possible both or neither are committing (particular fallacy), and it’s even more likely that of two parties, each is committing a different fallacy.
The formulation is fine. It is just two independent claims. It does not mean “wedrifid is not making an error because you are”.
The subject isn’t “an error”, it’s “the fallacy of gray”.
I agree “No, wedrifid is not committing an error, you are,” hardly implies “wedrifid is not making an error because you are”.
“No, wedrifid is not committing the fallacy of gray, you are,” much more implies “wedrifid is not committing the fallacy of gray because you are” when wedrifid’s and Logos01’s statements are in direct conflict.
The subject isn’t “an error”, it’s “the fallacy of gray”.
Why on earth does it matter that I referred to the general case? We’re discussing the implication of the general formulation which I hope you don’t consider to be a special case that only applies to “The Fallacy of The Grey”. But if we are going to be sticklers then we should use the actual formulation you object to:
No, wedrifid is not committing one, you are.
“No, wedrifid is not committing the fallacy of gray, you are,” much more implies “wedrifid is not committing the fallacy of gray because you are” when wedrifid’s and Logos01’s statements are in direct conflict.
Not especially. This sort of thing is going to be said most often regarding statements in direct conflict. There is no more relationship implied between the two than can be expected for any two claims being mentioned in the same sentence or paragraph.
The fact that it would be so easy to write “because” but they didn’t is also evidence against any assertion of a link. There is a limit to how much you can blame a writer for the theoretical possibility that a reader could make a blatant error of comprehension.
I didn’t really mean “because” in that sense, for two reasons. The first is that it is the observer’s knowledge that is caused, not the party’s error, and the other is that the causation implied goes the other way. Not “because” as if the first party’s non-error caused the second’s error, but that the observer can tell that the second party is in error because the observer can see that the first isn’t committing the particular error.
a special case...statements in direct conflict
Not special, but there is a sliding scale.
Compare:
“No, wedrifid is not committing the Prosecutor’s fallacy, you are.” Whether or not one party is committing this fallacy has basically nothing to do with whether or not the other is. So I interpret this statement to probably mean: “You are wrong when you claim that wedrifid is committing the Prosecutor’s fallacy. Also, you are committing the Prosecutor’s fallacy.”
“No, wedrifid is not reversing cause and effect, you are.” Knowing that one party is not reversing cause and effect is enough to know someone accusing that party of doing so is likely doing so him or herself! So I interpret this statement to probably mean: “Because I see that wedrifid is not reversing cause and effect, I conclude that you are.”
The fallacy of gray is in between the two above examples.
“Fallacy of The Grey” returns two google hits referring to the fallacy of gray, one from this site (and zero hits for “fallacy of the gray”).
He is not arguing that reality is subjective or anything like that.
I didn’t say he was.
On the other hand, your argument seems to be that we need to have some kind of surefire knowledge in epistemology because Bayesian probabilities are insufficient.
No. I was saying that Bayesian probabilistic-belief methodologies are effective at generating maps but they say almost nothing about how those maps correlate to the territory. And that it is, basically, possible to make those assertions. The practices are not the same, and that is the key difference.
What have you read or seen that makes you think this is the case?
It is fundamental to the nature of Bayesian belief-networks that they always assert statements in the form of probabilities. It is impossible to state a Bayesian belief except in the form of a probability.
From this there is a necessary conclusion.
Everyone in this thread is using the term to mean “statement or piece of knowledge to which you assign some level of truth value.” Saying “I believe X” is the same as saying “I think X is probably true.” Is this also how you’re using the term?
No. There is a difference categorically between the position, “I believe X is probably true” and “I believe X is true.”
I was saying that Bayesian probabilistic-belief methodologies are effective at generating maps but they say almost nothing about how those maps correlate to the territory.
It is fundamental to the nature of Bayesian belief-networks that they always assert statements in the form of probabilities. It is impossible to state a Bayesian belief except in the form of a probability.
Right. So why do you think this is insufficient for making maps that correlate to the territory? What assertions do you want to make about the territory that are not captured by this model?
No. There is a difference categorically between the position, “I believe X is probably true” and “I believe X is true.”
Right, but on LW “I believe X” is generally meant as the former, not the latter. This is probably part of the reason for all of the confusion and disagreement in this thread.
Alright, then, but you’re not going to like the answer: the probabilities in both cases are equal, barring some other influencing factor, such as Elias-the-Expert having justification for his assertion, or a direct means of observing X.
What? Of course Elias has some reason for believing what he believes. “Expert” doesn’t mean “someone who magically just knows stuff”. Somewhere along the line the operation of physics has resulted in the bunch of particles called “Elias” being configured in such a way that utterances by Elias about things like X are more likely to be true than false. This means that p(Elias asserts X | X is true) and p(Elias asserts X | X is false) are most certainly not equal. Claiming that they must be equal is just really peculiar.
This is the Blind Oracle argument.
This isn’t to do with blind oracles. It’s to do with trivial application of probability or rudimentary logic.
You are not going to find it persuasive.
In your interactions with people here I haven’t observed p(Logos01 is persuaded by X | X is sound reasoning) to be especially high. As such I cannot be expected to consider “Logos01 is not persuaded by something” to give much information at all about the soundness of a claim.
What? Of course Elias has some reason for believing what he believes.
Unless those reasons are justified—which we cannot know without knowing them—they cannot be held to be justifiable statements.
This is tautological.
Claiming that they must be equal is just really peculiar.
Not at all. You simply aren’t grasping why it is so. This is because you are thinking in terms of predictions and not in terms of concrete instances. To you, these are one-and-the-same, as you are used to thinking in the Bayesian probabilistic-belief manner.
I am telling you that this is an instance where that manner is flawed.
In your interactions with people here I haven’t observed p(Logos01 is persuaded by X | X is sound reasoning) to be especially high.
What you hold to be sound reasoning and what actually is sound reasoning are not equivalent.
As such I cannot be expected to consider “Logos01 is not persuaded by something” to give much information at all about the soundness of a claim.
If I had meant to imply that conclusion I would have phrased it so.
Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source—it cannot have a correlation to reality if there is no vehicle through which the information he claims to have reached him other than his own imagination, however accurate that imagination might be.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence, so you should adjust your confidence up. If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have only known him to make declarations about which team won a game when he knows which team won, then you have reason to believe that his statement is strongly correlated with reality, even if you don’t know the mechanism by which he came to decide to say that the Sportington Sports won.
If you happen to know that your friend has just gotten out of a locked room with no television, phone reception or internet access where he spent the last couple of days, then you should assume an extremely low correlation of his statement with reality. But if you do not know the mechanism, you must weight his statement according to the strength that you expect his mechanism for establishing correlation with the truth has.
There is a permanent object outside my window. You do not know what it is, and if you try to assign probabilities to all the things it could be, you will assign a very low probability to the correct object. You should assign pretty high confidence that I know what the object outside my window is, so if I tell you, then you can assign much higher probability to that object than before I told you, without my having to tell you why I know. You have reason to have a pretty high confidence in the belief that I am an authority on what is outside my window, and that I have reliable mechanisms for establishing it.
If I tell you what is outside my window, you will probably guess that the most likely mechanism by which I found out was by looking at it, so that will dominate your assessment of my statement’s correlation with the truth (along with an adjustment for the possibility that I would lie.) If I tell you that I am blind, type with a braille keyboard, and have a voice synthesizer for reading text to me online, and I know what is outside my window because someone told me, then you should adjust your probability that my claim of what is outside my window is correct downwards, both on increased probability that I am being dishonest, and on the decreased reliability of my mechanism (I could have been lied to.) If I tell you that I am blind and psychic fairies told me what is outside my window, you should adjust your probability that my claim is correlated with reality down much further.
The “trust mechanism,” as you call it, is not a device that exists separate from issues of evidence and probability. It is one of the most common ways that we reason about probabilities, basing our confidence in others’ statements on what we know about their likely mechanisms and motives.
This is an area where “Bayesian rationality” is insufficient—it fails to reliably distinguish between “what I believe” and “what I can confirm is true”.
You can’t confirm that anything is true with absolute certainty, you can only be more or less confident. If your belief is not conditioned on evidence, you’re doing something wrong, but there is no point where a “mere belief” transitions into confirmed knowledge. Your probability estimates go up and down based on how much evidence you have, and some evidence is much stronger than others, but there is no set of evidence that “counts for actually knowing things” separate from that which doesn’t.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time. Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have only known him to make declarations about which team won a game when he knows which team won,
Yup. I said as much.
The “trust mechanism,” as you call it, is not a device that exists separate from issues of evidence and probability.
Yes, actually, it is a separate mechanism.
You can’t confirm that anything is true with absolute certainty, you can only be more or less confident.
Yes, yes. That is the Bayesian standard statement. I’m not persuaded by it. It is, by the way, a foundational error to assert that absolute knowledge is the only form of knowledge. This is one of my major objections to standard Bayesian doctrine in general; the notion that there is no such thing as knowledge but only beliefs of varying confidence.
Bayesian probability assessments work very well for making predictions and modeling unknowns, but that’s just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.
And with that, I’m done here. This conversation’s gotten boring, to be quite frank, and I’m tired of having people essentially reiterate the same claims over and over at me from multiple angles. I’ve heard it before, and it’s no more convincing now than it was previously.
This is frustrating for me as well, and you can quit if you want, but I’m going to make one more point which I don’t think will be a reiteration of something you’ve heard previously.
Suppose that you have a circle of friends who you talk to regularly, and a person uses some sort of threat to force you to write down every declarative statement they make in a journal, whether they provided justifications or not, until you collect ten thousand of them.
Now suppose that they have a way of testing the truth of these statements with very high confidence. They make a credible threat that you must correctly estimate the number of the statements in the journal that are true, with a small margin of error, or they will blow up New York. If you simply file a large number of his statements under “trust mechanism,” and fail to assign a probability which will allow you to guess what proportion are right or wrong, millions of people will die. There is an actual right answer which will save those people’s lives, and you want to maximize your chances of getting it. What do you do?
Let’s replace the journal with a log of a trillion statements. You have a computer that can add the figures up quickly, and you still have to get very close to the right number to save millions of lives. Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better determine the correct number of correct statements, or would you rather each statement be tagged with an appropriate probability, so that it can add them up to determine what number of statements it expects to be true?
… appeal to consequences. Well, that is new in this conversation. It’s not very constructive though.
Also, you’re conflating predictions with instantiations.
That being said:
They make a credible threat that you must correctly estimate the number of the statements in the journal that are true, with a small margin of error, or they will blow up New York. [...] What do you do?
I would, without access to said test myself, be forced to resign myself to the destruction of New York.
If you simply file a large number of his statements under “trust mechanism,”
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating a given claim. This practice is a foible; a failing—one that is engaged in out of necessity because humans have a limit to their available cognitive resources.
Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better determine the correct number of correct statements, or would you rather each statement be tagged with an appropriate probability,
What one wants is irrelevant. What has occurred is relevant. If you haven’t investigated a given claim directly, then you’ve got nothing but whatever available trust-systems are at hand to operate on.
That doesn’t make them valid claims.
Finally: you’re introducing another unlike-variable by abstracting from individual instances to averaged aggregate.
TL;DR—your post is not-even-wrong. On many points.
… appeal to consequences. Well, that is new in this conversation. It’s not very constructive though.
If your conception of rationality leads to worse consequences than doing something differently, you should do something differently. Do you think it’s impossible to do better than resigning yourself to the destruction of New York?
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating a given claim. This practice is a foible; a failing—one that is engaged in out of necessity because humans have a limit to their available cognitive resources.
The utility cost of being wrong can fluctuate. Your life may hinge tomorrow on a piece of information you did not consider investigating today. If you find yourself in a situation where you must make an important decision hinging on little information, you can do no better than your best estimate, but if you decide that you are not justified in holding forth an estimate at all, you will have rationalized yourself into helplessness.
Humans have bounded rationality. Computationally optimized Jupiter Brains have bounded rationality. Nothing can have unlimited cognitive resources in this universe, but with high levels of computational power and effective weighting of evidence it is possible to know how much confidence you should have based on any given amount of information.
Finally: you’re introducing another unlike-variable by abstracting from individual instances to averaged aggregate.
You can get the expected number of true statements just by adding the probabilities of truth of each statement. It’s like judging how many heads you should expect to get in a series of coin flips: .5 + .5 + .5 + … The same formula works even if the probabilities are not all the same.
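A small sketch of that addition in Python (linearity of expectation), with a toy list of per-statement probabilities standing in for the journal:

    import random

    # Probability assigned to each statement in the journal being true.
    probs = [0.99, 0.9, 0.9, 0.7, 0.5, 0.5, 0.3, 0.1]

    expected_true = sum(probs)
    print(expected_true)  # 4.89 statements expected to be true

    # Monte Carlo check of the same quantity:
    random.seed(0)
    trials = 100_000
    avg = sum(sum(random.random() < p for p in probs)
              for _ in range(trials)) / trials
    print(avg)  # close to 4.89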
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time.
If you don’t assume that the coin is fair, then certainly a coin coming up heads twenty times and tails ten times is evidence in favor of it being more likely to come up heads next time, because it’s evidence that it’s weighted so that it favours heads.
Similarly if a person is weighted so that they favor truth, their claims are evidence in favour of that truth.
Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
Beliefs like trusting the trustworthy and not trusting the untrustworthy, whether you consider them “valid” beliefs or not, are likely to lead one to make correct predictions about the state of the world. So such beliefs are valid in the only way that matters for epistemic and instrumental rationality both.
If you don’t assume that the coin is fair, then certainly a coin coming up heads twenty times and tails ten times is evidence in favor of it being more likely to come up heads next time, because it’s evidence that it’s weighted so that it favours heads.
Or you could make a direct observation, (such as by weighing it with a fine tool, or placing it on a balancing tool) and know.
Similarly if a person is weighted so that they favor truth, their claims are evidence in favour of that truth.
Not unless they have an ability to provide their justification for a given instantiation. It would be sufficient for trusting them if you are not concerned with what is true as opposed to what is “likely true”. There’s a difference between these, categorically: one is an affirmation—the other is a belief.
So such beliefs are valid in the only way that matters for epistemic and instrumental rationality both.
Incorrect. And we are now as far as this conversation is going to go. You hold to Bayesian rationality as axiomatically true of rationality. I do not.
Or you could make a direct observation, (such as by weighing it with a fine tool, or placing it on a balancing tool) and know.
And in the absence of the ability to make direct observations? If there are two eye-witness testimonies to a crime, and one of the eye-witnesses is a notorious liar with every incentive to lie, and one of them is famous for his honesty and has no incentive to lie—which way would you have your judgment lean?
SPOCK: “If I let go of a hammer on a planet that has a positive gravity, I need not see it fall to know that it has in fact fallen. [...] Gentlemen, human beings have characteristics just as inanimate objects do. It is impossible for Captain Kirk to act out of panic or malice. It is not his nature.”
I very much like this quote, because it was one of the first times when I saw determinism, in the sense of predictability, being ennobling.
If there are two eye-witness testimonies to a crime
I have already stated that witness testimonials are valid for weighting beliefs. In the somewhere-parent topic of authorities, this is the equivalent of referencing the work of an authority on a topic.
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time. Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
If 30 coin flips have occurred with results that far off an even split, I should move my probability estimate slightly towards the coin being weighted to one side. If, for example, the coin instead had all 30 flips come up heads, I presume you would update in the direction of the coin being weighted to be more likely to come down on one side. It won’t be 2x as likely because the hypothesis that the coin is actually fair started with a very large prior. Moreover, the easy ways to make a coin weighted make it always come out on one side. But the essential Bayesian update in this context is to put a higher probability on the coin being weighted to be more likely to come up heads than tails.
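A sketch of that update in Python over a deliberately small hypothesis set (a fair coin plus two mildly weighted ones), with priors chosen to favour fairness; all numbers are illustrative only:

    # Candidate hypotheses about the coin, with made-up priors.
    hypotheses = {
        "fair, p(heads)=0.5":           {"p_heads": 0.5, "prior": 0.90},
        "weighted heads, p(heads)=0.6": {"p_heads": 0.6, "prior": 0.05},
        "weighted tails, p(heads)=0.4": {"p_heads": 0.4, "prior": 0.05},
    }

    heads, tails = 20, 10

    # Unnormalised posteriors; the binomial coefficient is the same for every
    # hypothesis, so it cancels when we normalise.
    unnorm = {name: h["prior"] * h["p_heads"] ** heads * (1 - h["p_heads"]) ** tails
              for name, h in hypotheses.items()}
    total = sum(unnorm.values())
    posterior = {name: u / total for name, u in unnorm.items()}
    print(posterior)  # roughly 0.81 fair, 0.19 weighted heads, 0.003 weighted tails

    # The predicted probability of heads on the next flip shifts only slightly:
    print(sum(posterior[name] * h["p_heads"] for name, h in hypotheses.items()))  # ~0.52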
“Bayesian probability assessments work very well for making predictions and modeling unknowns, but that’s just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.”
Bayesian probability assessments are an extremely poor tool for assertions of truth.
If you want to increase the reliability of your probability estimate, you should ask for a justification. But if you do not increase your probability estimate contingent on the physicist’s claim until you receive information on how he established that belief, then you are mistreating evidence. You don’t treat his claim as evidence in addition to to evidence on which it was conditioned, you treat it as evidence of the evidence on which it was conditioned. Once you know the physicist’s belief, you cannot expect to raise your confidence in that belief upon receiving information on how he came to that conclusion. You should assign weight to his statement according to how much evidence you would expect a physicist in his position to have if he were making such a statement, and then when you learn what evidence he has you shift upwards or downwards depending on how the evidence compares to your expectation. If you revised upwards on the basis of the physicist’s say-so, and then revised further upwards based on his having about as much evidence as you would expect, that would be double-counting evidence, but if you do not revise upwards based on the physicist’s claim in the first place, that would be assuming zero correlation of his statement with reality.
You do not need the person to relate their assessment of the evidence to revise your belief upward based on their statement, you only need to believe that it is more likely that they would make the claim if it were true than if it were not.
Anything that is more likely if a belief is true than if it is false is evidence which should increase your probability estimate of that belief. Have you read An Intuitive Explanation of Bayes’ Theorem, or any of the other explanations of Bayesian reasoning on this site?
If you have a buddy who is a football buff who tells you that the Sportington Sports beat the Homeland Highlanders last night, then you should treat this as evidence that the Sportington Sports won, weighted according to your estimate of how likely his claim is to correlate with reality. If you know that he watched the game, you’re justified in assuming a very high correlation with reality (although you also have to condition your estimate on information aside from whether he is likely to know, such as how likely he is to lie.) If you do not know that he watched the game last night, you will have a different estimate of the strength of his claim’s correlation with reality.
I have read them repeatedly, and explained the concepts to others on multiple occassions.
Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source—it cannot have a correlation to reality if there is no vehicle through which the information he claims to have reached him other than his own imagination, however accurate that imagination might be.
Which requires a reason to believe that to be the case. Which in turn requires that you have a means of corroborating their claim in some manner; the least-sufficient of which being that they can relate observations that correlate to their claim, in the case of experts that is.
A probability estimate without reliability is no estimate. Revising beliefs based on unreliable information is unsound. Experts’ claims which cannot be corroborated are unsound information, and should have no weighting on your estimate of beliefs solely based on their source.
If an expert’s claims are frequently true, then it can become habitual to trust them without examination. However, trusting individuals rather than examining statements is an example of a necessary but broken heuristic. We find the risk of being wrong in such situations acceptable because the expected utility cost of being wrong in any given situation, as an aggregate, is far less than the expected utility cost of having to actually investigate all such claims.
The more such claims, further, fall in line with our own priors—that is, the less ‘extraordinary’ the claims appear to be to us—the more likely we are to not require proper evidence.
The trouble is, this is a failed system. While it might be perfectly rational—instrumentally—it is not a means of properly arriving at true beliefs.
I want to take this opportunity to once again note that what I’m describing in all of this is proper argumentation, not proper instrumentality. There is a difference between the two; and Eliezer’s many works are, as a whole, targetted at instrumental rationality—as is this site itself, in general. Instrumental rationality does not always concern itself with what is true as opposed to what is practically believable. It finds the above-described risk of variance in belief from truth an acceptable risk, when asserting beliefs.
This is an area where “Bayesian rationality” is insufficient—it fails to reliably distinguish between “what I believe” and “what I can confirm is true”. It does this for a number of reasons, one of which being a foundational variance between Bayesian assertions about what kind of thing a Bayesian network is measuring when it discussed probabilities as opposed to what a frequentist is asserting is being measured when frequentists discuss probabilities.
I do not fall totally in line with “Bayesian rationality” in this, and various other, topics, for exactly this reason.
What? No they aren’t. They are massively biased towards epistemic rationality. He has written a few posts on instrumental rationality but by and large they tend to be unremarkable. It’s the bulk of epistemic rationality posts that he is known for.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
It ought to prevent you from making errors like this:
Assuming he was able to explain them correctly, which I think we have a lot of reason to doubt.
If you meant those to be topical you’ve got your givens inverted.
I’m sorry, but no matter how many times you reiterate it, until such time as you can provide a valid justification for the assertion that appeals to authority are not always fallacious, each iteration will be no more correct than the last time.
Appeals to authority are always fallacious. Some things that look like appeals to authority to casual observation upon closer examination turn out not to be. Such as the reference to the works of an authority-figure.
No. Actually I don’t. Those are base priors that are important when evaluating just how to update when given the new evidence “Elias asserts X”. Have you not just been telling us that you teach others how this mathematics works?
Why are you saying that? I didn’t just make the assertion again. I pointed to some of the Bayesian reasoning that it translates to.
If you have values for p(Elias asserts X | X is true) and p(Elias asserts X | X is false) that are not equal to each other and you gain information “Elias asserts X” your estimation for X is true must change or your thinking is just wrong. It is simple mathematics. That being the case supplying the information “Elias asserts X” to someone who already has information about Elias’s expertise is not fallacious. It is supplying information to them that should change their mind.
The above applies regardless of whether Elias has ever written any works on the subject or in any way supplied arguments. It applies even if Elias himself has no idea why he has the intuition “X”. It applies if Elias is a flipping black box that spits out statements. If you both you and the person you are speaking with have reason to believe p(Elias asserts X | X is true) > p(Elias asserts X | X is false) then supplying “Elias asserts X” as an argument in favour of X is not fallacious.
Alright, then, but you’re not going to like the answer: the probabilities in both cases are equal, barring some other influencing factor, such as Elias-the-Expert having justification for his assertion, or a direct means of observing X.
Unfortunately, this tells us exactly nothing for our current discussion.
This is the Blind Oracle argument. You are not going to find it persuasive.
It applies if Elias is a coin that flips over and over again and has come up heads a hundred times in a row and tails twenty times before that, therefore allowing us to assert that the probability that coin will come up heads is five times greater than the probability it will come up tails.
This, of course, is an eroneous conclusion, and represents a modal failure of the use of Bayesian belief-networks as assertions of matters of truth as opposed to merely being inductive beliefs.
And before you attempt to assert that fundamentally no truths are “knowable”, and that all we have are “beliefs of varying degrees of confidence”, let me just say that this, too, is a modal failure. But a subtle one: it is an assertion that because all we have is the map, there is no territory.
That is false—and there is a corollary as a result of that fact: the territory has a measurable impact on the map. And always will.
Is there anyone else here who would have predicted that I was about say anything remotely like this? Why on earth would I be about to do that? It doesn’t seem relevant to the topic at hand, necessary for appeals to experts to be a source of evidence or even particularly coherent as a philosophy.
I am only slightly more likely to have replied with that argument than a monkey would have been if it pressed random letters on a keyboard—and then primarily because it was if nothing else grammatically well formed.
The patient is very, very confused. I think that this post allows us to finally offer the diagnosis: he is qualitatively confused.
Quote:
Attempts to surgically remove this malignant belief from the patient have commenced.
Adding to wedrifid’s comment: Logos01, you seem to be committing a fallacy of gray here. Saying that we don’t have absolute certainty is not at all the same as saying that there is no territory; that is assuredly not what Bayesian epistemology is about.
Objecting to one, actually.
Not what it should be about, no. But it most assuredly does color how Bayesian beliefs are formed.
No, wedrifid is not committing one, you are. Here’s why:
He is not arguing that reality is subjective or anything like that. See his comment. On the other hand, your argument seems to be that we need to have some kind of surefire knowledge in epistemology because Bayesian probabilities are insufficient. Why does an epistemology with lack a of absolute certainty mean that said epistemology is not good enough?
What have you read or seen that makes you think this is the case?
I wonder if the word “belief” is causing problems here. Everyone in this thread is using the term to mean “statement or piece of knowledge to which you assign some level of truth value.” Saying “I believe X” is the same as saying “I think X is probably true.” Is this also how you’re using the term?
Without commenting on the specifics here, I object to this formulation because it’s possible both or neither are committing (particular fallacy), and it’s even more likely that of two parties, each is committing a different fallacy.
I didn’t mean to assert that it was an exclusive or, but I see how my wording implies that. Point taken and I’ll try to be more precise in the future.
The formulation is fine. It is just two independent claims. It does not mean “wedrifid is not making an error because you are”.
The subject isn’t “an error”, it’s “the fallacy of gray”.
I agree “No, wedrifid is not committing an error, you are,” hardly implies “wedrifid is not making an error because you are”.
“No, wedrifid is not committing the fallacy of gray, you are,” much more implies “wedrifid is not committing the fallacy of gray because you are” when wedrifid’s and Logos01′s statements are in direct conflict.
Why on earth does it matter that I referred to the general case? We’re discussing the implication of the general formulation which I hope you don’t consider to be a special case that only applies to “The Fallacy of The Grey”. But if we are going to be stickler’s then we should use the actual formulation you object to:
Not especially. This sort of thing is going to said most often regarding statements in direct conflict. There is no more relationship implied between the two than can be expected for any two claims being mentioned in the same sentence or paragraph.
The fact that it would be so easy to write “because” but they didn’t is also evidence against any assertion of a link. There is a limit to how much you can blame a writer for the theoretical possibility that a reader could make a blatant error of comprehension.
I didn’t really mean “because” in that sense, for two reasons. The first is that it is the observer’s knowledge that is caused, not the party’s error, and the other is that the causation implied goes the other way. Not “because” as if the first party’s non-error caused the second’s error, but that the observer can tell that the second party is in error because the observer can see that the first isn’t committing the particular error.
Not special, but there is a sliding scale.
Compare:
“No, wedrifid is not committing the Prosecutor’s fallacy, you are.” Whether or not one party is committing this fallacy has basically nothing to do with whether or not the other is. So I interpret this statement to probably mean: You are wrong when you claim that wedrifid is committing the Prosecutor’s fallacy. Also, you are committing the Prosecutor’s fallacy.”
“No, wedrifid is not reversing cause and effect, you are.” Knowing that one party is not reversing cause and effect is enough to know someone accusing that party of doing so is likely doing so him or herself! So I interpret this statement to probably mean: “Because I see that wedrifid is not reversing cause and effect, I conclude that you are.”
The fallacy of gray is in between the two above examples.
“Fallacy of The Grey” returns two google hits referring to the fallacy of gray, one from this site (and zero hits for “fallacy of the gray”).
This is actually the Fallacy of The Grey.
I didn’t say he was.
No. I was saying that Bayesian probabilistic-belief methodologies are effective at generating maps but they say almost nothing about how those maps correlate to the territory. And that it is, basically, possible to make those assertions. The practices are not the same, and that is the key difference.
It is fundamental to the nature of Bayesian belief-networks that they always assert statements in the form of probabilities. It is impossible to state a Bayesian belief except in the form of a probability.
From this there is a necessary conclusion.
No. There is a difference categorically between the position, “I believe X is probably true” and “I believe X is true.”
What does this mean?
Right. So why do you think this is insufficient for making maps that correlate to the territory? What assertions do you want to make about the territory that are not captured by this model?
Right, but on LW “I believe X” is generally meant as the former, not the latter. This is probably part of the reason for all of the confusion and disagreement in this thread.
What? Of course Elias has some reason for believing what he believes. “Expert” doesn’t mean “someone who magically just knows stuff”. Somewhere along the lines the operation of physics has resulted in the bunch of particles called “Elias” to be configured in such a way as utterances by Elias about things like X are more likely to be true than false. This means that p(Elias asserts X | X is true) and p(Elias asserts X | X is false) are most certainly not equal. Claiming that they must be equal is just really peculiar.
This isn’t to do with blind oracles. It’s to do with trivial application of probability or rudimentary logic.
In your interactions with people here I haven’t observed p(Logos01 is persuaded by X | X is sound reasoning) to be especially high. As such I cannot be expected to consider “Logos01 is not persuaded by something” to give much information at all about the soundness of a claim.
Unless those reasons are justified—which we cannot know without knowing them—they cannot be held to be justifiable statements.
This is tautological.
Not at all. You simply aren’t grasping why it is so. This is because you are thinking in terms of predictions and not in terms of concrete instances. To you, these are one-and-the-same, as you are used to thinking in the Bayesian probabilistic-belief manner.
I am telling you that this is an instance where that manner is flawed.
What you hold to be sound reasoning and what actually is sound reasoning are not equivalent.
If I had meant to imply that conclusion I would have phrased it so.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence, so you should adjust your confidence up. If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have only known him to make declarations about which team won a game when he knows which team won, then you have reason to believe that his statement is strongly correlated with reality, even if you don’t know the mechanism by which he came to decide to say that the Sportington Sports won.
If you happen to know that your friend has just gotten out of a locked room with no television, phone reception or internet access where he spent the last couple of days, then you should assume an extremely low correlation of his statement with reality. But if you do not know the mechanism, you must weight his statement according to the strength that you expect his mechanism for establishing correlation with the truth has.
There is a permanent object outside my window. You do not know what it is, and if you try to assign probabilities to all the things it could be, you will assign a very low probability to the correct object. You should assign pretty high confidence that I know what the object outside my window is, so if I tell you, then you can assign much higher probability to that object than before I told you, without my having to tell you why I know. You have reason to have a pretty high confidence in the belief that I am an authority on what is outside my window, and that I have reliable mechanisms for establishing it.
If I tell you what is outside my window, you will probably guess that the most likely mechanism by which I found out was by looking at it, so that will dominate your assessment of my statement’s correlation with the truth (along with an adjustment for the possibility that I would lie.) If I tell you that I am blind, type with a braille keyboard, and have a voice synthesizer for reading text to me online, and I know what is outside my window because someone told me, then you should adjust your probability that my claim of what is outside my window is correct downwards, both on increased probability that I am being dishonest, and on the decreased reliability of my mechanism (I could have been lied to.) If I tell you that I am blind and psychic fairies told me what is outside my window, you should adjust your probability that my claim is correlated with reality down much further.
The “trust mechanism,” as you call it, is not a device that exists separate from issues of evidence and probability. It is one of the most common ways that we reason about probabilities, basing our confidence in others’ statements on what we know about their likely mechanisms and motives.
You can’t confirm that anything is true with absolute certainty, you can only be more or less confident. If your belief is not conditioned on evidence, you’re doing something wrong, but there is no point where a “mere belief” transitions into confirmed knowledge. Your probability estimates go up and down based on how much evidence you have, and some evidence is much stronger than others, but there is no set of evidence that “counts for actually knowing things” separate from that which doesn’t.
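For a sense of how that scales, here is a minimal sketch (the 4:1 likelihood ratio per piece of evidence is an arbitrary illustrative assumption). Each independent piece of evidence adds a fixed amount of log-odds, so confidence climbs toward 1 without ever reaching it; there is no point at which a threshold into “confirmed knowledge” gets crossed.

    import math

    # Posterior after n independent pieces of evidence, each with likelihood ratio 4:1.
    def posterior_after(n_pieces, prior=0.5, likelihood_ratio=4.0):
        log_odds = math.log(prior / (1 - prior)) + n_pieces * math.log(likelihood_ratio)
        return 1 / (1 + math.exp(-log_odds))

    for n in (0, 1, 3, 5, 10):
        print(n, posterior_after(n))
    # prints 0.5, 0.8, then values ever closer to 1 (about 0.985, 0.999, 0.999999),
    # but never equal to 1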
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time. Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
Yup. I said as much.
Yes, actually, it is a separate mechanism.
Yes, yes. That is the Bayesian standard statement. I’m not persuaded by it. It is, by the way, a foundational error to assert that absolute knowledge is the only form of knowledge. This is one of my major objections to standard Bayesian doctrine in general; the notion that there is no such thing as knowledge but only beliefs of varying confidence.
Bayesian probability assessments work very well for making predictions and modeling unknowns, but that’s just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.
And with that, I’m done here. This conversation’s gotten boring, to be quite frank, and I’m tired of having people essentially reiterate the same claims over and over at me from multiple angles. I’ve heard it before, and it’s no more convincing now than it was previously.
This is frustrating for me as well, and you can quit if you want, but I’m going to make one more point which I don’t think will be a reiteration of something you’ve heard previously.
Suppose that you have a circle of friends who you talk to regularly, and a person uses some sort of threat to force you to write down every declarative statement they make in a journal, whether they provided justifications or not, until you collect ten thousand of them.
Now suppose that they have a way of testing the truth of these statements with very high confidence. They make a credible threat that you must correctly estimate the number of statements in the journal that are true, with a small margin of error, or they will blow up New York. If you simply file a large number of these statements under “trust mechanism,” and fail to assign a probability which will allow you to guess what proportion are right or wrong, millions of people will die. There is an actual right answer which will save those people’s lives, and you want to maximize your chances of getting it. What do you do?
Let’s replace the journal with a log of a trillion statements. You have a computer that can add the figures up quickly, and you still have to get very close to the right number to save millions of lives. Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better determine the correct number of correct statements, or would you rather each statement be tagged with an appropriate probability, so that it can add them up to determine what number of statements it expects to be true?
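Here is a minimal simulation of that comparison (scaled down to a million statements, with the probability assignments drawn arbitrarily for illustration). Summing the assigned probabilities lands close to the actual count; counting only the statements filed as near-certain does not.

    import random

    random.seed(0)
    N = 1_000_000

    # Assign each statement a probability of being true, then draw its actual truth value.
    probabilities = [random.uniform(0.3, 0.95) for _ in range(N)]
    truths = [random.random() < p for p in probabilities]

    actual = sum(truths)
    sum_of_probabilities = sum(probabilities)                  # expected-value estimate
    confirmed_only = sum(1 for p in probabilities if p > 0.9)  # count only near-certain items

    print("actual number true:        ", actual)
    print("sum-of-probabilities guess:", round(sum_of_probabilities))
    print("near-certain-only guess:   ", confirmed_only)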
… appeal to consequences. Well, that is new in this conversation. It’s not very constructive though.
Also, you’re conflating predictions with instantiations.
That being said:
I would, without access to said test myself, be forced to resign myself to the destruction of New York.
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating a given claim. This practice is a foible; a failing—one that is engaged in out of necessity because humans have a limit to their available cognitive resources.
What one wants is irrelevant. What has occurred is relevant. If you haven’t investigated a given claim directly, then you’ve got nothing but whatever available trust-systems are at hand to operate on.
That doesn’t make them valid claims.
Finally: you’re introducing another unlike-variable by abstracting from individual instances to averaged aggregate.
TL;DR—your post is not-even-wrong. On many points.
If your conception of rationality leads to worse consequences than doing something differently, you should do something differently. Do you think it’s impossible to do better than resigning yourself to the destruction of New York?
The utility cost of being wrong can fluctuate. Your life may hinge tomorrow on a piece of information you did not consider investigating today. If you find yourself in a situation where you must make an important decision hinging on little information, you can do no better than your best estimate, but if you decide that you are not justified in holding forth an estimate at all, you will have rationalized yourself into helplessness.
Humans have bounded rationality. Computationally optimized Jupiter Brains have bounded rationality. Nothing can have unlimited cognitive resources in this universe, but with high levels of computational power and effective weighting of evidence it is possible to know how much confidence you should have based on any given amount of information.
You can get the expected number of true statements just by adding the probabilities of truth of each statement. It’s like judging how many heads you should expect to get in a series of coin flips: 0.5 + 0.5 + 0.5 + … The same formula works even if the probabilities are not all the same.
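In code the whole formula is one line per list (the statement confidences below are made-up examples):

    coin_flips = [0.5] * 30
    print(sum(coin_flips))                         # 15.0 heads expected from 30 fair flips

    statement_probs = [0.99, 0.7, 0.5, 0.9, 0.2]   # differing confidences
    print(sum(statement_probs))                    # ≈ 3.29 statements expected to be true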
Apparently not.
If you don’t assume that the coin is fair, then certainly a coin coming up heads twenty times and tails ten times is evidence in favor of it being more likely to come up heads next time, because it’s evidence that it’s weighted so that it favours heads.
Similarly if a person is weighted so that they favor truth, their claims are evidence in favour of that truth.
Beliefs like trusting the trustworthy and not trusting the untrustworthy, whether you consider them “valid” beliefs or not, are likely to lead one to make correct predictions about the state of the world. So such beliefs are valid in the only way that matters for epistemic and instrumental rationality both.
Or you could make a direct observation, (such as by weighing it with a fine tool, or placing it on a balancing tool) and know.
Not unless they have an ability to provide their justification for a given instantiation. It would be sufficient for trusting them if you are not concerned with what is true as opposed to what is “likely true”. There’s a difference between these, categorically: one is an affirmation—the other is a belief.
Incorrect. And we are now as far as this conversation is going to go. You hold to Bayesian rationality as axiomatically true of rationality. I do not.
And in the absence of the ability to make direct observations? If there are two eye-witness testimonies to a crime, and one of the eye-witnesses is a notorious liar with every incentive to lie, and one of them is famous for his honesty and has no incentive to lie—which way would you have your judgment lean?
SPOCK: “If I let go of a hammer on a planet that has a positive gravity, I need not see it fall to know that it has in fact fallen. [...] Gentlemen, human beings have characteristics just as inanimate objects do. It is impossible for Captain Kirk to act out of panic or malice. It is not his nature.”
I very much like this quote, because it was one of the first times when I saw determinism, in the sense of predictability, being ennobling.
I have already stated that witness testimonials are valid for weighting beliefs. In the somewhere-parent topic of authorities, this is the equivalent of referencing the work of an authority on a topic.
If 30 coin flips have come out that lopsided, I should move my probability estimate slightly towards the coin being weighted to one side. If, for example, the coin had instead come up heads on all 30 flips, I presume you would update in the direction of the coin being weighted to be more likely to come down on one side. It won’t be twice as likely, because the hypothesis that the coin is actually fair started with a very large prior. Moreover, the easy ways to weight a coin make it always come out on one side. But the essential Bayesian update in this context is to put a higher probability on the coin being weighted to come up heads more often than tails.
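Here is a minimal sketch of that update (the 99% prior on a fair coin and the single 70%-heads alternative are my own illustrative assumptions). Twenty heads and ten tails shift a little probability toward the weighted hypothesis, but nowhere near a factor of two toward heads.

    from math import comb

    heads, tails = 20, 10
    n = heads + tails

    prior_fair, prior_weighted = 0.99, 0.01  # large prior that the coin is fair

    # Binomial likelihood of the observed 20 heads / 10 tails under each hypothesis.
    likelihood_fair = comb(n, heads) * 0.5**heads * 0.5**tails
    likelihood_weighted = comb(n, heads) * 0.7**heads * 0.3**tails

    total = prior_fair * likelihood_fair + prior_weighted * likelihood_weighted
    print("P(fair | data)     =", prior_fair * likelihood_fair / total)          # ~0.95
    print("P(weighted | data) =", prior_weighted * likelihood_weighted / total)  # ~0.05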
“Bayesian probability assessments work very well for making predictions and modeling unknowns, but that’s just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.”
Bayesian probability assessments are an extremely poor tool for assertions of truth.