The GP’s comment doesn’t sit right with me for this reason.
The entity that has the arbitrary axioms in the case of humanity is the genotype, not the phenotype. It has discovered non-arbitrary heuristics that we use day to day. Those heuristics have helped us survive in this harsh world over the millennia, so have been put to the test and found adequate.
At first, I thought this was a reasonable comment, but then it occurred to me that the non-arbitrary heuristics were optimized for self-perpetuation, not true beliefs.
Self-perpetuation is a flavour of winning, no? So while I wouldn’t argue that our non-arbitrary heuristics are optimized for true beliefs, they should have some relation to instrumental rationality for self-perpetuation.
I’d argue that the sets of agents optimised for instrumental rationality and for epistemic rationality are not disjoint. So optimising for self-perpetuation might also optimise for true beliefs, depending on what part of the system space you are exploring. We may or may not be in that part, but our starting heuristics are still more likely to be good for truth-seeking than heuristics picked from an arbitrary space.
I do not disagree with the parent. I think a defense of the use of the term “arbitrary” in the root comment could be mounted on semantic grounds, but I prefer to give only the short short version: arbitrary can mean things other than “chosen at random”.
I’m surprised that a post that basically does nothing but acknowledge inductive bias is presently at −2.
I had not read that part. Thanks.
I do not see any difference between inductive bias as it is written there and the dictionary and Wikipedia definitions of faith: