If you don’t justify your beliefs, how are they any less arbitrary than those of a Bayesian? You may say they are tied to the truth (without an infinite regress) by the truth-seeking process of criticism, forming new ideas, and so on. But this is also what ties a Bayesian to the truth. The Bayesian is restricted (in theory) to updating probabilities based on evidence, but we tend to accept the absence, presence, or content of criticisms as evidence, though we insist on talking about the truth or falsity of statements rather than whether they are “good ideas” (except insofar as we’re actually discussing a statement about another statement). Like your method, this one moves towards the truth in a largely unconstrained way, drawing on whatever sorts of reasons it can find. Also like yours, it doesn’t break down when justification runs out: if you can’t find any evidence, you simply fall back on the prior. That may not point you to the best answer, but, well, what else are you going to do?
The clear difference I see is that Bayesian epistemology quantifies uncertainty and wraps a mathematical model around it. The model doesn’t match how we actually reason under uncertainty, but it’s a useful idealization. Your epistemology does not quantify uncertainty, does not lay out criteria for what counts as a criticism, and so on; it seems to consist of verbally describing what reasonable people do, which as a prescription is useless (unless you already knew how to think and just needed a reminder of some step). In particular, it doesn’t ground out in math, so even in very simple toy problems where an agent knows which problem it’s in and what its goal is, it’s unclear how your epistemology should be applied, while Bayesian probability readily yields optimal prescriptions.
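To make the toy-example point concrete, here’s a minimal sketch in Python (my own illustration; the hypotheses, prior, and observed flips are all invented for the example): an agent knows it faces a coin that is either fair or heads-biased, and Bayes’ rule turns a prior plus observed flips into a definite posterior, including the degenerate case from above where no evidence at all just leaves you with the prior.

```python
# Minimal sketch of a Bayesian update in a toy problem (illustrative only).
# The agent knows the setup: the coin is either fair (P(heads) = 0.5) or
# biased (P(heads) = 0.8), with a prior of 0.5 on each hypothesis.
# After observing flips, Bayes' rule gives a definite posterior, and the
# "optimal prescription" is simply to bet according to that posterior.

def likelihood(p_heads, flips):
    """Probability of the observed flips given a hypothesised heads-rate."""
    prob = 1.0
    for flip in flips:
        prob *= p_heads if flip == "H" else (1.0 - p_heads)
    return prob

def posterior(prior_biased, flips):
    """Posterior probability that the coin is the biased one."""
    l_biased = likelihood(0.8, flips)
    l_fair = likelihood(0.5, flips)
    evidence = prior_biased * l_biased + (1.0 - prior_biased) * l_fair
    return prior_biased * l_biased / evidence

flips = list("HHHTHHHH")         # observed evidence: 7 heads, 1 tail
print(posterior(0.5, flips))     # ~0.91: belief shifts towards "biased"
print(posterior(0.5, []))        # 0.5: no evidence, so you just keep the prior
```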
Have I got this right?