Self-perpetuation is a flavour of winning, no? So while I wouldn’t argue that our non-arbitrary heuristics are optimized for true beliefs, they should have some relation to instrumental rationality for self-perpetuation.
I’d argue that the sets of agents optimised for instrumental rationality and for epistemic rationality are not disjoint. So optimising for self-perpetuation might also optimise for true beliefs, depending on what part of the system space you are exploring. We may or may not be in that part of the space, but our starting heuristics are still more likely to serve truth-seeking than heuristics picked from an arbitrary space.
I do not disagree with the parent. I think a defense of the use of the term “arbitrary” in the root comment could be mounted on semantic grounds, but I prefer to give only the short short version: arbitrary can mean things other than “chosen at random”.