What purpose are you after with this query? It sounds dangerously like a semantic discussion, though I may be failing to see something obvious.
“Wrong”, you say, “no new information has been injected here, I have simply pointed out how to reason rationally.”
I’m not sure if this line makes sense. If somebody points out the correct way to interpret some piece of evidence, then that correct way of interpreting it is information. Procedural knowledge is knowledge, just as much as declarative.
To put it another way: if you were writing a computer program to do something, you might hard-code some way of doing things into it, or you might build some sort of search algorithm that lets it find the appropriate way of doing things on its own. Here, hard-coding corresponds to a friend telling you how something should be interpreted, and the program discovering it by search corresponds to a person figuring it out herself. If you hard-code it, you are still adding extra lines of code to the program; that is, you are adding information.
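A minimal sketch of that contrast, in the spirit of the analogy (the rule, function names, and cases below are illustrative assumptions, not anything from the discussion):

```python
# Option 1: "a friend tells you how to interpret it" -- the interpretation
# rule is written directly into the program as extra lines of code, i.e.
# as extra information.
def interpret_hardcoded(evidence):
    return "hypothesis likely" if evidence == "positive test" else "hypothesis unlikely"

# Option 2: "you discover it yourself" -- only a generic search procedure is
# supplied; the rule is found by scoring candidate rules against known cases
# rather than being written in by hand.
def interpret_by_search(evidence, candidate_rules, known_cases):
    def score(rule):
        return sum(rule(e) == label for e, label in known_cases)
    best_rule = max(candidate_rules, key=score)
    return best_rule(evidence)
```

Either way, the finished program ends up containing the rule; the difference is only whether it arrived as hard-coded lines or as the output of a search.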
To me also, the post sounds more like it’s equivocating on the definition of “rationality” than asking a question of the form either “What should I do?” or “What should I expect?”
What purpose are you after with this query? It sounds dangerously like a semantic discussion, though I may be failing to see something obvious.
Fair question; I should have gotten this clear in my mind before I wrote. My observation is that there are people who reason effectively given their limited computational power and others who do not (hence the existence of this blog), and my question is by what criteria we can distinguish them, given that the Bayesian definition of rationality seems to falter here.
If somebody points out the correct way to interpret some piece of evidence, then that correct way of interpreting it is information. Procedural knowledge is knowledge, just as much as declarative.
I would agree, except that this seems to imply that probabilities generated by a random number generator should be considered rational, since it “lacks” the procedural knowledge to do otherwise. This is not just semantics: we perceive a real performance difference between a random number generator and a program that multiplies out likelihoods and priors, and we would like to understand the nature of that difference.
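To make the contrast concrete, here is a toy sketch of the two “programs” (the hypotheses, priors, and likelihoods are made-up numbers for illustration only): both output probabilities, but only one does the multiplication of likelihoods and priors that actually tracks the evidence.

```python
import random

priors = {"H1": 0.5, "H2": 0.5}          # P(hypothesis)
likelihoods = {"H1": 0.9, "H2": 0.2}     # P(evidence | hypothesis)

# Program A: a random number generator -- its "posterior" ignores the evidence.
r = random.random()
random_posterior = {"H1": r, "H2": 1 - r}

# Program B: multiply likelihoods by priors and normalize (Bayes' rule).
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
bayes_posterior = {h: p / total for h, p in unnormalized.items()}

print(random_posterior)   # arbitrary numbers, unrelated to the evidence
print(bayes_posterior)    # {'H1': 0.818..., 'H2': 0.181...}
```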