Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source—it cannot be correlated with reality if there is no vehicle, other than his own imagination, through which the information he claims to have could have reached him, however accurate that imagination might be.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence, so you should adjust your confidence up. If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have only known him to make declarations about which team won a game when he knows which team won, then you have reason to believe that his statement is strongly correlated with reality, even if you don’t know the mechanism by which he came to decide to say that the Sportington Sports won.
If you happen to know that your friend has just gotten out of a locked room with no television, phone reception or internet access where he spent the last couple of days, then you should assume an extremely low correlation of his statement with reality. But if you do not know the mechanism, you must weight his statement according to how strong you expect his mechanism for establishing correlation with the truth to be.
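To make that weighting concrete, here is a minimal sketch in Python. The specific numbers (a 50% prior and the friend's track record) are illustrative assumptions, not anything established above; the point is only the shape of the update.

```python
# Sketch of updating on a friend's claim, using made-up numbers.
# Prior: before he says anything, you give Sportington Sports a 50% chance.
prior = 0.50

# Hypothetical track record: he asserts a result like this 90% of the time
# when it is true, and only 20% of the time when it is false.
p_claim_if_true = 0.90
p_claim_if_false = 0.20

# Bayes' rule: P(true | claim) = P(claim | true) * P(true) / P(claim)
posterior = (p_claim_if_true * prior) / (
    p_claim_if_true * prior + p_claim_if_false * (1 - prior)
)
print(round(posterior, 3))  # 0.818 -- his statement raised your confidence
```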
There is a permanent object outside my window. You do not know what it is, and if you try to assign probabilities to all the things it could be, you will assign a very low probability to the correct object. You should assign pretty high confidence that I know what the object outside my window is, so if I tell you, then you can assign much higher probability to that object than before I told you, without my having to tell you why I know. You have reason to have a pretty high confidence in the belief that I am an authority on what is outside my window, and that I have reliable mechanisms for establishing it.
If I tell you what is outside my window, you will probably guess that the most likely mechanism by which I found out was by looking at it, so that will dominate your assessment of my statement’s correlation with the truth (along with an adjustment for the possibility that I would lie.) If I tell you that I am blind, type with a braille keyboard, and have a voice synthesizer for reading text to me online, and I know what is outside my window because someone told me, then you should adjust your probability that my claim of what is outside my window is correct downwards, both on increased probability that I am being dishonest, and on the decreased reliability of my mechanism (I could have been lied to.) If I tell you that I am blind and psychic fairies told me what is outside my window, you should adjust your probability that my claim is correlated with reality down much further.
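A rough way to see how honesty and mechanism reliability combine, in a deliberately simplified model that ignores the chance of being accidentally right (all numbers here are invented for illustration):

```python
# Simplified model: the claim about the window is treated as correct only
# if the speaker is honest AND their mechanism for finding out worked.
def p_claim_correct(p_honest, p_mechanism_reliable):
    return p_honest * p_mechanism_reliable

print(p_claim_correct(0.98, 0.99))  # looked at it directly    -> ~0.97
print(p_claim_correct(0.95, 0.90))  # was told by someone      -> ~0.86
print(p_claim_correct(0.80, 0.01))  # "psychic fairies told me" -> ~0.01
```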
The “trust mechanism,” as you call it, is not a device that exists separate from issues of evidence and probability. It is one of the most common ways that we reason about probabilities, basing our confidence in others’ statements on what we know about their likely mechanisms and motives.
This is an area where “Bayesian rationality” is insufficient—it fails to reliably distinguish between “what I believe” and “what I can confirm is true”.
You can’t confirm that anything is true with absolute certainty, you can only be more or less confident. If your belief is not conditioned on evidence, you’re doing something wrong, but there is no point where a “mere belief” transitions into confirmed knowledge. Your probability estimates go up and down based on how much evidence you have, and some evidence is much stronger than others, but there is no set of evidence that “counts for actually knowing things” separate from that which doesn’t.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time. Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have only known him to make declarations about which team won a game when he knows which team won,
Yup. I said as much.
The “trust mechanism,” as you call it, is not a device that exists separate from issues of evidence and probability.
Yes, actually, it is a separate mechanism.
You can’t confirm that anything is true with absolute certainty, you can only be more or less confident.
Yes, yes. That is the Bayesian standard statement. I’m not persuaded by it. It is, by the way, a foundational error to assert that absolute knowledge is the only form of knowledge. This is one of my major objections to standard Bayesian doctrine in general; the notion that there is no such thing as knowledge but only beliefs of varying confidence.
Bayesian probability assessments work very well for making predictions and modeling unknowns, but that’s just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.
And with that, I’m done here. This conversation’s gotten boring, to be quite frank, and I’m tired of having people essentially reiterate the same claims over and over at me from multiple angles. I’ve heard it before, and it’s no more convincing now than it was previously.
This is frustrating for me as well, and you can quit if you want, but I’m going to make one more point which I don’t think will be a reiteration of something you’ve heard previously.
Suppose that you have a circle of friends whom you talk to regularly, and a person uses some sort of threat to force you to write down in a journal every declarative statement your friends make, whether or not they provide justifications, until you collect ten thousand of them.
Now suppose that this person has a way of testing the truth of these statements with very high confidence. They make a credible threat that you must correctly estimate the number of the statements in the journal that are true, with a small margin of error, or they will blow up New York. If you simply file a large number of their statements under “trust mechanism,” and fail to assign probabilities which will allow you to estimate what proportion are right or wrong, millions of people will die. There is an actual right answer which will save those people’s lives, and you want to maximize your chances of getting it. What do you do?
Let’s replace the journal with a log of a trillion statements. You have a computer that can add the figures up quickly, and you still have to get very close to the right number to save millions of lives. Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better determine the correct number of correct statements, or would you rather each statement be tagged with an appropriate probability, so that it can add them up to determine what number of statements it expects to be true?
… appeal to consequences. Well, that is new in this conversation. It’s not very constructive though.
Also, you’re conflating predictions with instantiations.
That being said:
They make a credible threat that you must correctly estimate the number of the statements in the journal that are true, with a small margin of error, or they will blow up New York. [...] What do you do?
I would, without access to said test myself, be forced to resign myself to the destruction of New York.
If you simply file a large number of their statements under “trust mechanism,”
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating a given claim. This practice is a foible; a failing—one that is engaged in out of necessity because humans have a limit to their available cognitive resources.
Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better determine the correct number of correct statements, or would you rather each statement be tagged with an appropriate probability,
What one wants is irrelevant. What has occurred is relevant. If you haven’t investigated a given claim directly, then you’ve got nothing but whatever available trust-systems are at hand to operate on.
That doesn’t make them valid claims.
Finally: you’re introducing another unlike-variable by abstracting from individual instances to averaged aggregate.
TL;DR—your post is not-even-wrong. On many points.
… appeal to consequences. Well, that is new in this conversation. It’s not very constructive though.
If your conception of rationality leads to worse consequences than doing something differently, you should do something differently. Do you think it’s impossible to do better than resigning yourself to the destruction of New York?
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating a given claim. This practice is a foible; a failing—one that is engaged in out of necessity because humans have a limit to their available cognitive resources.
The utility cost of being wrong can fluctuate. Your life may hinge tomorrow on a piece of information you did not consider investigating today. If you find yourself in a situation where you must make an important decision hinging on little information, you can do no better than your best estimate, but if you decide that you are not justified in holding forth an estimate at all, you will have rationalized yourself into helplessness.
Humans have bounded rationality. Computationally optimized Jupiter Brains have bounded rationality. Nothing can have unlimited cognitive resources in this universe, but with high levels of computational power and effective weighting of evidence it is possible to know how much confidence you should have based on any given amount of information.
Finally: you’re introducing another unlike-variable by abstracting from individual instances to averaged aggregate.
You can get the expected number of true statements just by adding up each statement’s probability of being true. It’s like judging how many heads you should expect to get in a series of coin flips: .5 + .5 + .5 + … The same formula works even if the probabilities are not all the same.
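As a toy illustration of that addition (the probabilities here are invented):

```python
# Expected number of true statements = sum of each statement's probability
# of being true, just as with expected heads in a series of coin flips.
probabilities = [0.99, 0.9, 0.5, 0.7, 0.5, 0.95, 0.3]

expected_true = sum(probabilities)
print(round(expected_true, 2))  # 4.84 expected true statements out of 7
```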
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time.
If you don’t assume that the coin is fair, then certainly a coin coming up heads twenty times and tails ten times is evidence in favor of it being more likely to come up heads next time, because it’s evidence that it’s weighted so that it favours heads.
Similarly if a person is weighted so that they favor truth, their claims are evidence in favour of that truth.
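For instance, here is a minimal sketch of the coin version, under the assumption (introduced here, not stated above) of a uniform prior over the coin's bias:

```python
# With a uniform Beta(1,1) prior over the coin's bias, 20 heads and 10 tails
# give a Beta(21, 11) posterior; its mean is the probability of heads on the
# next flip (Laplace's rule of succession).
heads, tails = 20, 10
p_next_heads = (heads + 1) / (heads + tails + 2)
print(round(p_next_heads, 3))  # 0.656 -- the record shifts the estimate toward heads
```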
Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
Beliefs like trusting the trustworthy and not trusting the untrustworthy, whether you consider them “valid” beliefs or not, are likely to lead one to make correct predictions about the state of the world. So such beliefs are valid in the only way that matters for epistemic and instrumental rationality both.
If you don’t assume that the coin is fair, then certainly a coin coming up heads twenty times and tails ten times is evidence in favor of it being more likely to come up heads next time, because it’s evidence that it’s weighted so that it favours heads.
Or you could make a direct observation (such as by weighing it with a fine tool, or placing it on a balancing tool) and know.
Similarly if a person is weighted so that they favor truth, their claims are evidence in favour of that truth.
Not unless they have an ability to provide their justification for a given instantiation. It would be sufficient for trusting them if you are not concerned with what is true as opposed to what is “likely true”. There’s a difference between these, categorically: one is an affirmation—the other is a belief.
So such beliefs are valid in the only way that matters for epistemic and instrumental rationality both.
Incorrect. And we are now as far as this conversation is going to go. You hold to Bayesian rationality as axiomatically true of rationality. I do not.
Or you could make a direct observation (such as by weighing it with a fine tool, or placing it on a balancing tool) and know.
And in the absence of the ability to make direct observations? If there are two eye-witness testimonies to a crime, and one of the eye-witnesses is a notorious liar with every incentive to lie, and one of them is famous for his honesty and has no incentive to lie—which way would you have your judgment lean?
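To sketch how such leaning could be weighed explicitly (the reliability numbers are made up for illustration), each testimony contributes a likelihood ratio on the odds of the claim:

```python
# Conflicting testimony, weighted by witness reliability (invented numbers).
prior_odds = 1.0  # even odds before hearing either witness

# Honest witness says "guilty": 19x more likely if it is true than if not.
lr_honest = 0.95 / 0.05
# Notorious liar says "not guilty": his word barely discriminates, so it
# contributes a likelihood ratio only slightly below 1.
lr_liar = 0.45 / 0.55

posterior_odds = prior_odds * lr_honest * lr_liar
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))  # 0.94 -- judgment leans with the honest witness
```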
SPOCK: “If I let go of a hammer on a planet that has a positive gravity, I need not see it fall to know that it has in fact fallen. [...] Gentlemen, human beings have characteristics just as inanimate objects do. It is impossible for Captain Kirk to act out of panic or malice. It is not his nature.”
I very much like this quote, because it was one of the first times when I saw determinism, in the sense of predictability, being ennobling.
If there are two eye-witness testimonies to a crime
I have already stated that witness testimonials are valid for weighting beliefs. In the somewhere-parent topic of authorities, this is the equivalent of referencing the work of an authority on a topic.
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time. Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
If 30 coin flips have occurred with results that far off, I should move my probability estimate slightly towards the coin being weighted to one side. If, for example, the coin had instead come up heads on all 30 flips, I presume you would update in the direction of the coin being weighted to come down more often on one side. It won’t be 2x as likely, because the hypothesis that the coin is actually fair started with a very large prior. Moreover, the easy ways to make a coin weighted make it always come out on one side. But the essential Bayesian update in this context is to put a higher probability on the coin being weighted to come up heads more often than tails.
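A minimal sketch of that update, with an illustrative strong prior on fairness and simple weighted-coin hypotheses (all of the specific numbers are assumptions):

```python
from math import comb

def likelihood(p_heads, heads, tails):
    # Binomial probability of the observed record given a coin bias.
    return comb(heads + tails, heads) * p_heads**heads * (1 - p_heads)**tails

# Three hypotheses about the coin, with most prior mass on "fair".
bias =  {"fair": 0.5, "heads-weighted": 0.7, "tails-weighted": 0.3}
prior = {"fair": 0.98, "heads-weighted": 0.01, "tails-weighted": 0.01}

heads, tails = 20, 10
unnormalized = {h: prior[h] * likelihood(p, heads, tails) for h, p in bias.items()}
total = sum(unnormalized.values())
posterior = {h: round(v / total, 3) for h, v in unnormalized.items()}
print(posterior)  # "fair" still dominates (~0.95), "heads-weighted" rises to ~0.05
```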
“Bayesian probability assessments work very well for making predictions and modeling unknowns, but that’s just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.”
Bayesian probability assessments are an extremely poor tool for assertions of truth.