I am arguing against tool-boxism, on the grounds that if it were accepted as true (I don’t think it can actually be true in a meaningful sense), you would basically give up on the ability to converge on truth in an objective sense. Any kind of objective principles would not be tool-boxism.
It seems that those who feel that tool-boxism is false converge on Bayesianism as a set of principles: not that these principles are the full story, or that there are no other consequences or ways to extend them, but that there is no domain in which they can both be meaningfully applied and give the wrong answer.
I am arguing against tool-boxism, on the grounds that if it were accepted as true (I don’t think it can actually be true in a meaningful sense), you would basically give up on the ability to converge on truth in an objective sense.
You need to distinguish between truth and usefulness. If the justification for using different tools is purely efficiency (in the limit, being able to solve a problem at all), then nothing is implied about the ability to converge on truth. Tool-boxism does not necessarily imply pluralism in the resulting maps. There are also people who advocate the use of multiple theories with different content, leading to an overall pluralism or relativism, but in view of the usefulness/truth distinction, that is a different thing.
It seems that those who feel that tool-boxism is false converge on Bayesianism as a set of principles: not that these principles are the full story,
If they are not the full story, then you need other tools. You are saying contradictory things. Sometimes you say Bayes is the only tool you need, sometimes you say it can only do one thing.
but that there is no domain in which they can both be meaningfully applied and give the wrong answer.
Not giving the wrong answer is not a sufficient criterion for giving the right answer. To get the right answer, you need to get the hypothesis that corresponds to reality, somehow, and you need to confirm it. Recall that Bayes does not give you any method for generating hypotheses, let alone one guaranteed to generate the true one in an acceptable period of time. So Bayes does not guarantee truth, that is, truth as correspondence.
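To make that concrete, here is a minimal sketch in Python (a toy coin-flip setup of my own; the numbers are illustrative, not from the discussion) of Bayesian updating over a hypothesis space that does not contain the truth. The posterior dutifully concentrates on the best available hypothesis, which is still wrong:

```python
import random

random.seed(0)

# Candidate biases for a coin's P(heads). The true bias, 0.5,
# is deliberately absent from the hypothesis space.
hypotheses = [0.4, 0.7]
posterior = [0.5, 0.5]   # uniform prior over the candidates
true_bias = 0.5          # reality, unrepresented above

for _ in range(1000):
    heads = random.random() < true_bias              # one flip
    likelihood = [h if heads else 1 - h for h in hypotheses]
    unnormalized = [l * p for l, p in zip(likelihood, posterior)]
    total = sum(unnormalized)
    posterior = [u / total for u in unnormalized]    # Bayes' rule

print(dict(zip(hypotheses, posterior)))
# -> essentially all mass on 0.4, the least-wrong candidate.
# No amount of evidence can shift mass onto the true value 0.5,
# because conditionalization only reweights the hypotheses it
# was handed; it never generates new ones.
```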
I am arguing against tool-boxism, on the grounds that if it were accepted as true (I don’t think it can actually be true in a meaningful sense), you would basically give up on the ability to converge on truth in an objective sense. Any kind of objective principles would not be tool-boxism.
This sounds like you are arguing against it on the grounds that you don’t like a state of affairs in which tool-boxism is true, so you assume it isn’t. This seems to me like motivated reasoning.
It’s structurally similar to the person who says they believe in God because, if God didn’t exist, life would be meaningless.
I don’t think it’s possible to have unmotivated reasoning. Nearly all reasoning begins by assuming a set of propositions, such as axioms, to be true, before following all the implications. If I believe objectivity is true, then I want to know what follows from it. Note that Cox’s theorem proceeds similarly, by forming a set of desiderata first, and then finding a set of rules that satisfies them. Do you not consider this chain of reasoning to be valid?
(If I strongly believed “life is meaningless” to be false, and I believed that “God does not exist” implies “life is meaningless”, then concluding from those that God exists is logically valid. Whether the first two propositions are themselves true is another question.)
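For what it’s worth, the inference itself is just modus tollens plus double-negation elimination. A sketch of the formalization (my notation, writing G for “God exists” and M for “life is meaningless”):

```latex
% Premises: \neg M                (life is not meaningless)
%           \neg G \rightarrow M  (no God implies meaninglessness)
\frac{\neg G \rightarrow M \qquad \neg M}{\neg\neg G}\ \text{(modus tollens)}
\qquad
\frac{\neg\neg G}{G}\ \text{(double-negation elimination)}
```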
There’s motivation and there’s motivation. Bad motivation is when an object-level proposition is taken as the necessary output of an epistemological process, and the epistemology is chosen to beg the question. Good motivation is avoiding question-begging in your epistemology.
One thing about that chain of reasoning is that it’s very un-Bayesian. We have catch-phrases like “0 and 1 aren’t probabilities”. Even if they are, how did you arrive at a probability of 1 for the thesis that objectivity is true?
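The catch-phrase has a one-line Bayesian justification: a prior of 1 is immovable under conditionalization. Writing O for the thesis of objectivity (my notation), if P(O) = 1, then for any evidence E with P(E) > 0:

```latex
P(O \mid E)
  = \frac{P(E \mid O)\,P(O)}{P(E \mid O)\,P(O) + P(E \mid \neg O)\,P(\neg O)}
  = \frac{P(E \mid O)\cdot 1}{P(E \mid O)\cdot 1 + P(E \mid \neg O)\cdot 0}
  = 1
% The \neg O term vanishes because P(\neg O) = 0: no possible
% evidence E can ever move the posterior away from 1.
```

Assigning probability 1 to a substantive thesis therefore amounts to exempting it from updating altogether.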
I guess this is a pretty subtle point, so I’ll try to state it more clearly. Let’s assume tool-boxism is true in some deep ontological sense, such that, for any given problem in which we want to discover the truth, there are multiple sets of reasoning principles, each of which outputs a different answer. No one agrees on which principles are correct for each problem; everyone is guided by some combination of intuition, innate preferences, habit, tradition, culture, or whim. This is indeed the current situation in which we find ourselves, but if tool-boxism is true, then that suggests this is the best we can do, i.e., objectivity is false. Rationalists, at least, seem to posit that objectivity is true.
It also means that all reasoning is necessarily motivated reasoning, if it has to be guided by subjective preferences. But even if objectivity is true, motivated reasoning is still a valid intellectual process, and probably the only possible process until that objective set of reasoning principles is discovered fully. Note that Cox’s theorem is based on motivated reasoning, in the sense that a set of desiderata is established first, before trying to determine a set of principles that satisfy those desiderata.
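For reference, a rough statement of those desiderata, in Jaynes’s formulation (my paraphrase): degrees of plausibility are represented by real numbers; they correspond qualitatively with common sense; and reasoning is consistent, so every valid route to a conclusion yields the same plausibility. Cox’s result is that any system meeting them is isomorphic to probability theory:

```latex
% Any plausibility calculus satisfying Cox's desiderata obeys
% the product and sum rules of probability:
P(A \land B \mid C) = P(A \mid B \land C)\,P(B \mid C), \qquad
P(A \mid C) + P(\lnot A \mid C) = 1
```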
Establishing desiderata first is a nearly universal form of reasoning, especially in science, where one tries to find a set of laws that agrees with what is observed empirically. I don’t know if it’s possible to disentangle preferences entirely from beliefs.