The following is an idea that’s been nagging at me for a while, and I finally have it clear enough in my mind to at least try to state it. Any feedback would be highly appreciated (especially if what I say is confusing!).
I think there are cases where you shouldn't assign a probability to your beliefs. Most Bayesian updating is a form of computation, and you need to assign a probability to that computation being a reasonable thing to update on. Unless I have confidence in the procedure I'm using to update my beliefs, I shouldn't take the number I get at the end as the thing to update to... yes, I should move my beliefs towards that number, but possibly only by a very tiny amount.
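To make the "only a very tiny amount" idea concrete, here is a minimal sketch in Python. It assumes (this is just my framing, not a standard method) that confidence in the updating computation can be modeled as a mixing weight between the prior and the fully Bayesian posterior; the function names and numbers are purely illustrative.

```python
# Hypothetical sketch (my framing, not a standard method): treat confidence
# in the updating computation as a mixing weight, so a low-confidence
# computation only moves the belief a tiny amount toward its output.

def bayes_posterior(prior: float, lik_h: float, lik_not_h: float) -> float:
    """Ordinary Bayes' rule for a binary hypothesis H given one observation."""
    numerator = lik_h * prior
    return numerator / (numerator + lik_not_h * (1.0 - prior))

def tempered_update(prior: float, posterior: float, confidence: float) -> float:
    """Move only part of the way from prior to posterior.

    confidence = 1.0 recovers the full Bayesian update;
    confidence near 0.0 barely changes the belief at all.
    """
    return (1.0 - confidence) * prior + confidence * posterior

if __name__ == "__main__":
    prior = 0.5
    full = bayes_posterior(prior, lik_h=0.9, lik_not_h=0.2)
    print(f"full Bayesian posterior:   {full:.3f}")  # 0.818
    print(f"tempered, confidence=0.05: {tempered_update(prior, full, 0.05):.3f}")  # 0.516
```

With confidence near 1 this is just Bayes' rule; with confidence near 0 the belief barely moves, which is the behavior I'm gesturing at above.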
Now here is the problem you run into. When I start out, I probably haven't even chosen a prior. Choosing that prior is itself a computation that I have to make. If I have a low degree of confidence in the reasonability of that computation, then what am I supposed to do? It seems silly to take the result of that computation as my prior, as the result is probably meaningless. I basically want to update by a small amount away from "nothing", but "nothing" isn't a probability, so it's quite unclear what to do in probabilistic terms. (Note that once we have higher confidence in our calculations, we can essentially update away from the "nothing" state by observing that almost any possible prior would have led to something close to our current belief; the sketch below illustrates this.)
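Here is a small sketch of that parenthetical point, under the assumption of a simple coin-flip model with Beta priors (my choice of setting, purely for illustration): once the data swamps the prior, very different prior-choosing computations end up at nearly the same belief.

```python
# Illustration of the parenthetical point: with enough evidence, wildly
# different priors yield nearly the same posterior, so the unreliable
# prior-choosing computation stops mattering. The Beta priors and the
# coin-flip counts below are assumptions chosen purely for illustration.

def beta_posterior_mean(alpha: float, beta: float, heads: int, tails: int) -> float:
    """Posterior mean of a coin's bias under a Beta(alpha, beta) prior."""
    return (alpha + heads) / (alpha + beta + heads + tails)

if __name__ == "__main__":
    heads, tails = 700, 300  # observed flips
    for alpha, beta in [(1, 1), (10, 1), (1, 10), (50, 50)]:
        mean = beta_posterior_mean(alpha, beta, heads, tails)
        print(f"Beta({alpha:>2},{beta:>2}) prior -> posterior mean {mean:.3f}")
```

The four priors disagree wildly before any data, yet after 1,000 flips their posterior means all land near 0.7, so low confidence in the prior-choosing computation matters less and less as evidence accumulates.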
I appreciate the upvotes, but I can't imagine that I expressed things so clearly that no one is confused or has points of disagreement, requests for clarification, etc., especially since this idea isn't even clear in my own head yet. If someone wants to take the time to help me clarify my views here, or to point out flaws in my thinking, then I would appreciate it!
ETA: I wonder if complaining about upvotes without any comments is as frowned upon as complaining about downvotes without any comments. I guess I’m about to find out...