… appeal to consequences. Well, that is new in this conversation. It’s not very constructive though.
Also, you’re conflating predictions with instantiations.
That being said:
They make a credible threat that you must correctly estimate the number of statements in the journal that are true, with a small margin of error, or they will blow up New York. [...] What do you do?
I would, without access to said test myself, be forced to resign myself to the destruction of New York.
If you simply file a large number of his statements under “trust mechanism,”
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating the claim. This practice is a foible, a failing, one engaged in out of necessity because humans have limited cognitive resources.
Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better determine the correct number of correct statements, or would you rather each statement be tagged with an appropriate probability,
What one wants is irrelevant. What has occurred is relevant. If you haven’t investigated a given claim directly, then you’ve got nothing but whatever available trust-systems are at hand to operate on.
That doesn’t make them valid claims.
Finally: you’re introducing another unlike-variable by abstracting from individual instances to an averaged aggregate.
TL;DR: your post is not-even-wrong, on many points.
… appeal to consequences. Well, that is new in this conversation. It’s not very constructive though.
If your conception of rationality leads to worse consequences than doing something differently, you should do something differently. Do you think it’s impossible to do better than resigning yourself to the destruction of New York?
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating the claim. This practice is a foible, a failing, one engaged in out of necessity because humans have limited cognitive resources.
The utility cost of being wrong can fluctuate. Your life may hinge tomorrow on a piece of information you did not consider investigating today. If you find yourself in a situation where you must make an important decision hinging on little information, you can do no better than your best estimate, but if you decide that you are not justified in holding forth an estimate at all, you will have rationalized yourself into helplessness.
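The trade-off behind a trust-system can be sketched in a few lines. This is a minimal illustration with hypothetical numbers and a hypothetical function name, not anyone’s actual decision procedure:

```python
def should_investigate(p_wrong: float, cost_if_wrong: float,
                       cost_to_investigate: float) -> bool:
    """Investigate a claim only when the expected cost of acting on a
    wrong belief exceeds the cost of checking it."""
    return p_wrong * cost_if_wrong > cost_to_investigate

# Same claim, same uncertainty; only the stakes change.
print(should_investigate(0.01, 10.0, 1.0))    # False: cheap to be wrong, so trust it
print(should_investigate(0.01, 1000.0, 1.0))  # True: the stakes rose, so investigate
```

The point about fluctuating utility costs falls out directly: the same belief can flip from “safe to trust” to “must investigate” when the stakes change, with no change in the evidence.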
Humans have bounded rationality. Computationally optimized Jupiter Brains have bounded rationality. Nothing can have unlimited cognitive resources in this universe, but with high levels of computational power and effective weighting of evidence it is possible to know how much confidence you should have based on any given amount of information.
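One standard way to do the evidence-weighting described here is Bayes’ rule. A minimal sketch, with hypothetical numbers:

```python
def posterior(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
    """Bayes' rule: confidence in a claim after seeing one piece of evidence."""
    joint_true = prior * p_e_if_true
    return joint_true / (joint_true + (1 - prior) * p_e_if_false)

# Start at 50% confidence; the evidence is four times as likely to appear
# if the claim is true as if it is false.
print(posterior(0.5, 0.8, 0.2))  # 0.8
```

The formula says exactly how much confidence a given amount of information licenses; bounded resources limit how much evidence you can gather, not what any given piece of it is worth.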
Finally: you’re introducing another unlike-variable by abstracting from individual instances to an averaged aggregate.
You can get the expected number of true statements just by adding the probabilities of truth of each statement. It’s like judging how many heads you should expect to get in a series of coin flips: .5 + .5 + .5 + … The same formula works even if the probabilities are not all the same.
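The arithmetic above can be checked directly. The probabilities below are hypothetical; note that linearity of expectation needs no independence assumption, though the sanity-check simulation samples each statement independently for simplicity:

```python
import random

# Per-statement probabilities of truth (hypothetical values).
probs = [0.5, 0.5, 0.5, 0.9, 0.1, 0.7]

# Linearity of expectation: the expected count of true statements is
# just the sum of the individual probabilities.
expected_true = sum(probs)
print(round(expected_true, 2))  # 3.2

# Sanity check by Monte Carlo simulation.
random.seed(0)
trials = 100_000
total = sum(sum(random.random() < p for p in probs) for _ in range(trials))
print(round(total / trials, 2))
```

The simulated average converges on the same 3.2, which is why no “unlike-variable” is introduced: the aggregate is fully determined by the per-statement probabilities.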
If your conception of rationality leads to worse consequences than doing something differently, you should do something differently. Do you think it’s impossible to do better than resigning yourself to the destruction of New York?
Apparently not.