I use the word “prove” because I’m doing it deductively in math. I already linked you to the 2+2=3 thing, I believe. Also, the question of how I would, for example, change AI design if a well-known theorem is wrong (pretend it is the future and the best theorems proving Bayesianism are better-known and I am working on AI design) is both extremely hard to answer and unlikely to be necessary. Well, unlikely is the wrong word; what is P(X | “There are no probabilities”)? :)
Probably the most damning criticism you’ll find, curi, is that fallibilism isn’t useful to the Bayesian.
The fundamental disagreement here is somewhere in the following statement:
“There exist true things, and we have a means of determining how likely it is for any given statement to be true. Furthermore, a statement that has a high likelihood of being true should be believed over a similar statement with a lower likelihood of being true.”
I suspect your disagreement is in one of several places.
1) You disagree that there even exist epistemically “true” facts;
2) that we can determine how likely something is to be true; or
3) that likelihood of being true (as defined by us) is reason to believe the truth of something.
I can actually flesh out your objections to all of these things.
For 1, you could probably successfully argue that we aren’t capable of determining whether we’ve ever actually arrived at a true epistemic statement, because real certainty doesn’t exist; thus the existence or nonexistence of true epistemic statements is on the same epistemological footing as the existence of God, i.e. too shaky to be worth concerning oneself with at all.
2 basically ties in with the above directly.
3 is a whole ’nother ball game, and I don’t think it’s really been broached yet by anyone, but it’s certainly a valid point of contention. I’ll leave it out unless you’d like to pursue it.
The Bayesian counter to all of these is simply, “That doesn’t really do anything for me.”
Declaring we have certainty, and quantifying it as best we can, is incredibly useful. I can pick up an apple and let go. It will fall to the ground. I have an incredibly huge amount of certainty in my ability to repeat that experiment.
That I cannot foresee the philosophical paradigm that will uproot my hypothesis that dropped apples fall to the ground is not a very good reason to reject my relative certainty in the soundness of my hypothesis. Such an apples-aren’t-falling-when-dropped paradigm would literally (and necessarily) uproot everything else we know about the world.
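(A minimal sketch of what “quantifying it as best we can” could look like for the apple example, assuming a simple Beta-Binomial model; the uniform prior, the drop counts, and every name in the code are invented for illustration rather than taken from the discussion.)

```python
# Minimal sketch (illustrative numbers only): quantifying the credence that a
# dropped apple falls, via a conjugate Beta-Binomial update.

def beta_update(a, b, successes, failures):
    """Beta(a, b) prior + Binomial data -> Beta(a + successes, b + failures)."""
    return a + successes, b + failures

a, b = 1.0, 1.0  # uniform prior: complete ignorance about the chance of falling
for n in (10, 1_000, 100_000):
    post_a, post_b = beta_update(a, b, successes=n, failures=0)
    print(f"after {n} drops observed falling, credence the next one falls = "
          f"{post_a / (post_a + post_b):.6f}")

# The credence climbs toward 1 but never reaches it: "so ridiculously unlikely
# that I'm wrong that we can just ignore it", not literal impossibility of error.
```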
Basically, what I’m trying to say is that all you’re ever going to get out of a Bayesian is, “No, I disagree. I think we can have certainty.” And the only way you could disprove conclusions made by Bayesians is through means the Bayesian would have already seen, and thus the Bayesian would have already rejected said conclusion.
You’ve already outlined that the fallibilist will just keep tweaking explanations until an explanation with no criticism is reached. I think you might find Bayesianism more palatable if you just pretend that we aren’t trying to find certainty and instead say we’re trying to minimize criticism.
This probably hasn’t been a very satisfying answer. I certainly agree it’s useful to have an understanding of the biases in our certainties. I also think Bayesianism happens to build that into itself quite well. Personally, I don’t think there’s anything I’m absolutely certain about, because to claim so would be silly.
Small nitpick: I don’t like your use of the word ‘certainty’ here. Especially in philosophy, it has too much of a connotation of “literally impossible for me to be wrong” rather than “so ridiculously unlikely that I’m wrong that we can just ignore it”, which may cause confusion.
Where don’t you like it? I don’t think anyone actually argues for your first definition, because, like I said, it’s silly. I think curi’s point is that fallibilism is predicated on your second definition not (ever?) being a valid claim.
My point is that the things we are “certain” about (as per your second definition) probably coincide almost exactly with “statements without criticism” as per curi’s definition(s).
It is a silly definition, but people are silly enough that I hear it often enough to be wary of it.
My point is that the things we are “certain” about (as per your second definition) probably coincide almost exactly with “statements without criticism” as per curi’s definition(s).
I interpreted this as the first definition. I guess we should see what curi says.
People generally try to have their cake and eat it: they want certainty to mean “cannot be wrong”, but only on the basis that they feel sure.
I think we have very different goals, and that the Popperian ones are better.
There is more to epistemology, and to philosophy, than math.
I’d say you are practically trying to eliminate all philosophy. And that saying you have an epistemology at all is very misleading, because epistemology is a philosophical field.
I think we have very different goals, and that the Popperian ones are better.
So could you be more precise in how you think the goals differ and why the Popperian goals are better?
There is more to epistemology, and to philosophy, than math.
I’d say you are practically trying to eliminate all philosophy. And that saying you have an epistemology at all is very misleading, because epistemology is a philosophical field.
Huh? Do you mean that because the Bayesians have made precise mathematical claims it somehow ceases to be an epistemological system? What does that even mean? Incidentally, I don’t know what it means to eliminate philosophy, but areas can certainly be carved off from philosophy into other branches. Indeed, this is generally what happens. Philosophy is the big grab bag of things that we don’t have a very good, precise feel for. As we get a more precise understanding, things break off. For example, biology broke off from philosophy (when it broke off isn’t clear, but certainly by 1900 it was a separate field), with philosophers now focusing only on the remaining tough issues like how to define “species”. Similarly, economics broke off. Again, pinning down where it broke off is tough (that’s why Bentham and Adam Smith are often both classified as philosophers). A recent break-off has been psychology, which some might argue is still in progress. One thing that most people would still see as clearly in the philosophy realm is moral reasoning. Indeed, some would argue that the ultimate goal of philosophy is to eliminate itself.
If it helps at all, try tabooing both “epistemology” and “philosophy” and restating the claims that the Bayesians lack an epistemology or are not doing philosophy. What do those claims mean in a precise way?
Different people are telling me different things. I have been told some very strong instrumentalist and anti-philosophy arguments in my discussions here. I don’t know just how representative of all Bayesians that is.
For example, moral philosophy has been trashed by everyone who spoke to me about it so far. I get told it’s meaningless, or that Bayesian epistemology cannot create moral knowledge. No one has yet been like “oh my god, epistemology should be able to create moral and other philosophical (non-empirical, non-observational) knowledge! Bayesian stuff is wrong since it can’t!” Rather, people don’t seem to mind, and will argue at length that e.g. explanatory knowledge and non-empirical knowledge don’t exist or are worthless and that prediction is everything.
By “philosophy” I mean things which can’t be experimentally/empirically tested (as opposed to “science” by which I mean things that can be). So for philosophy, no observations are directly relevant.
Make sense? Where do you stand on these issues?
And the way I think Popperian goals are better is that they value explanations which help us understand the world instead of being instrumentalist, positivist, anti-philosophical, or anything like that.
For example, moral philosophy has been trashed by everyone who spoke to me about it so far.
Have you never dealt with people who aren’t moral realists before?
And the way I think Popperian goals are better is that they value explanations which help us understand the world instead of being instrumentalist, positivist, anti-philosophical, or anything like that.
You are going to have to expand on this. I’m still confused by what you mean by anti-philosophical. I also don’t see why “instrumentalist” is a negative. The Bayesian doesn’t have a problem with trying to understand the world: the way they measure that understanding is how well they can predict things. And Bayesianism is not the same as positivism by most definitions of that term, so how are you defining an approach as positivist, and why do you consider that to be a bad thing?
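(One concrete reading of “how well they can predict things” is a proper scoring rule. Below is a minimal sketch using the logarithmic score; the forecasts, outcomes, and names in the code are invented for illustration, not taken from anything said above.)

```python
import math

# Sketch: score each probabilistic forecast of a binary event with the
# logarithmic scoring rule. Closer to 0 is better; confident wrong forecasts
# are penalized heavily. All numbers below are invented for illustration.

def log_score(p, happened):
    """Log score for probability p assigned to an event that did or didn't happen."""
    return math.log(p if happened else 1.0 - p)

forecasts = [
    (0.99, True),   # confident and right: small penalty
    (0.60, True),   # hedged and right: moderate penalty
    (0.90, False),  # confident and wrong: large penalty
]

for p, happened in forecasts:
    print(f"p={p:.2f}, happened={happened}: score {log_score(p, happened):+.3f}")
print(f"total: {sum(log_score(p, h) for p, h in forecasts):+.3f}")
```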
In order for any philosophy to be valid, the human brain must be able to evaluate deductive arguments; they are a huge component of philosophy, with many often being needed to argue a single idea. Wondering what to do in case these are wrong is not only unnecessary but impossible.
I don’t have any criticism of deductive logic itself. But I do have criticisms of some of the premises I expect you to use. For example, they won’t all be deductively argued for themselves. That raises a problem: how will you sort out good ideas from bad ones for use as premises? That gets into various proposed solutions to that problem, such as induction or Popperian epistemology. But if you get into that, right in the premises of your supposed proof, then it won’t be much of a proof, because so much of the substantive content in the premises will be non-deductive.
Do you agree with the premises I have used in the discussion of Dutch books and VNM-utility so far? There it is basically “a decision process that we actually care about must have the following properties” and that’s it. I did skim over inferring probabilities from Dutch books and VNM axiom 3, and there may be some hidden premises in the former.
Do you agree with the premises I have used in the discussion of Dutch books and VNM-utility so far?
I don’t think so. You said we have to assign probabilities to avoid getting Dutch Booked. I want an example of that. I got an example where probabilities weren’t mentioned, which did not convince me they were needed.
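(For concreteness, here is a minimal sketch of the standard two-ticket Dutch-book arithmetic. It is not the example referred to above; the 0.60/0.60 prices and all the names in the code are invented for illustration. The point is only that betting prices which violate P(A) + P(not-A) = 1 let a bookie assemble bets the agent accepts individually but which guarantee it a loss.)

```python
# Sketch of the two-ticket Dutch-book arithmetic (illustrative numbers only).
# The agent prices a $1 ticket on event A at 0.60 and a $1 ticket on not-A at
# 0.60; those prices are incoherent, since they sum to 1.20 rather than 1.

def agent_net(price_a, price_not_a, a_happens):
    """Agent buys a $1 ticket on A and a $1 ticket on not-A at its own prices;
    exactly one of the tickets pays out, whichever way A turns out."""
    payoff = (1.0 if a_happens else 0.0) + (0.0 if a_happens else 1.0)
    return payoff - (price_a + price_not_a)

for a_happens in (True, False):
    print(f"A happens: {str(a_happens):5}  agent's net: "
          f"{agent_net(0.60, 0.60, a_happens):+.2f}")

# Prints -0.20 in both cases: a sure loss, precisely because the agent's prices
# for A and not-A fail to behave like probabilities.
```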