This is frustrating for me as well, and you can quit if you want, but I’m going to make one more point which I don’t think will be a reiteration of something you’ve heard previously.
Suppose that you have a circle of friends whom you talk to regularly, and a person uses some sort of threat to force you to write down in a journal every declarative statement your friends make, whether they provide justifications or not, until you have collected ten thousand of them.
Now suppose that this person has a way of testing the truth of these statements with very high confidence. They make a credible threat: you must correctly estimate the number of statements in the journal that are true, within a small margin of error, or they will blow up New York. If you simply file a large number of these statements under “trust mechanism,” and fail to assign a probability that would let you guess what proportion are right or wrong, millions of people will die. There is an actual right answer that will save those people’s lives, and you want to maximize your chances of getting it. What do you do?
Now let’s replace the journal with a log of a trillion statements. You have a computer that can add the figures up quickly, and you still have to get very close to the right number to save millions of lives. Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better estimate the number of true statements, or would you rather each statement be tagged with an appropriate probability, so that the computer can add those probabilities up to get the number of statements it expects to be true?
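(A minimal sketch of that second tallying scheme, in Python; the probability tags and values below are made-up placeholders, not anything specified in the scenario:)

```python
# Hypothetical sketch: estimating how many statements in the log are true.
# Assumes each entry has been tagged with a probability of being true.

def expected_true_count(probabilities):
    """Expected number of true statements: the sum of per-entry probabilities."""
    return sum(probabilities)

# Toy stand-in for the trillion-entry log.
log = [0.99, 0.95, 0.80, 0.50, 0.30]

print(expected_true_count(log))  # 3.54

# Filing entries under coarse labels like "trust mechanism" or
# "confirmed knowledge" throws away exactly the numbers this sum needs.
```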
… appeal to consequences. Well, that is new in this conversation. It’s not very constructive, though.
Also, you’re conflating predictions with instantiations.
That being said:
> They make a credible threat: you must correctly estimate the number of statements in the journal that are true, within a small margin of error, or they will blow up New York. [...] What do you do?
I would, without access to said test myself, be forced to resign myself to the destruction of New York.
> If you simply file a large number of these statements under “trust mechanism,”
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating the claim. This practice is a foible, a failing: one engaged in out of necessity because humans have limited cognitive resources.
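(Rendered as code, the decision rule just described looks roughly like this; the probabilities and costs are purely illustrative:)

```python
# Hypothetical rendering of the trust-system described above: accept a
# claim unchecked when being wrong is expected to cost less than checking.

def should_trust(p_wrong, cost_of_being_wrong, cost_of_investigating):
    """Trust the claim iff the expected cost of error is below the cost of checking."""
    return p_wrong * cost_of_being_wrong < cost_of_investigating

# Low stakes: trust a friend's offhand claim rather than verify it.
print(should_trust(p_wrong=0.3, cost_of_being_wrong=1, cost_of_investigating=5))    # True

# High stakes: the same claim is worth investigating after all.
print(should_trust(p_wrong=0.3, cost_of_being_wrong=1000, cost_of_investigating=5)) # False
```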
> Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better estimate the number of true statements, or would you rather each statement be tagged with an appropriate probability,
What one wants is irrelevant. What has occurred is relevant. If you haven’t investigated a given claim directly, then you’ve got nothing to operate on but whatever trust-systems are at hand.
That doesn’t make them valid claims.
Finally: you’re introducing another unlike-variable by abstracting from individual instances to an averaged aggregate.
TL;DR: your post is not even wrong. On many points.
> … appeal to consequences. Well, that is new in this conversation. It’s not very constructive, though.
If your conception of rationality leads to worse consequences than doing something differently, you should do something differently. Do you think it’s impossible to do better than resigning yourself to the destruction of New York?
> That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating the claim. This practice is a foible, a failing: one engaged in out of necessity because humans have limited cognitive resources.
The utility cost of being wrong can fluctuate. Your life may hinge tomorrow on a piece of information you did not consider investigating today. If you find yourself having to make an important decision on little information, you can do no better than your best estimate; but if you decide that you are not justified in offering an estimate at all, you will have rationalized yourself into helplessness.
Humans have bounded rationality. Computationally optimized Jupiter Brains have bounded rationality. Nothing in this universe can have unlimited cognitive resources, but with enough computational power and effective weighting of evidence, it is possible to determine how much confidence any given amount of information warrants.
> Finally: you’re introducing another unlike-variable by abstracting from individual instances to an averaged aggregate.
You can get the expected number of true statements just by adding up each statement’s probability of being true. It’s like judging how many heads you should expect in a series of coin flips: 0.5 + 0.5 + 0.5 + … The same formula works even when the probabilities are not all the same.
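(In symbols, this is just the linearity of expectation; the notation below is mine, not from the thread:)

```latex
% If X_i is the indicator variable for "statement i is true",
% with P(X_i = 1) = p_i, then
\[
\mathbb{E}\left[\sum_{i=1}^{n} X_i\right]
  = \sum_{i=1}^{n} \mathbb{E}[X_i]
  = \sum_{i=1}^{n} p_i
\]
% This holds even when the p_i differ, and even when the statements
% are not independent. For ten fair coin flips, every p_i = 0.5,
% so the expected number of heads is 10 * 0.5 = 5.
```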
Apparently not.