Does the average LW user actually maintain a list of probabilities for their beliefs? Or is Bayesian probabilistic reasoning just some gold standard that no-one here actually does? If the former, what kinds of stuff do you have on your list?
No, but some try.
Or is Bayesian probabilistic reasoning just some gold standard that no-one here actually does?
It isn’t really possible, since in many cases it isn’t even computable, let alone feasible for currently existing human brains. Approximations are the best we can do, but I still consider it the best available epistemological framework, for reasons similar to those given by Jaynes.
If the former, what kinds of stuff do you have on your list?
Stuff like this.
Does the average LW user actually maintain a list of probabilities for their beliefs? Or is Bayesian probabilistic reasoning just some gold standard that no-one here actually does?
People’s brains can barely manage to multiply three-digit numbers together, so no human can do “Bayesian probabilistic reasoning”. For humans it’s at best “the latter, while using various practical tips to approximate the benefits of the former” (e.g. being willing to express your certainty in a belief numerically when such a number is asked of you in a discussion).
What ArisKatsaris said is accurate—given our hardware, it wouldn’t actually be a good thing to keep track of explicit probabilities for everything.
I try to put numbers on things if I have to make an important decision and I have enough time to sit down and sketch it out. The last time I did that, I combined it with drawing graphs, and found I was actually using the drawings more—now I wonder if that’s a more intuitive way to handle it. (The way I visualize probabilities is by splitting a bar up into segments, with each segment’s length, as a proportion of the whole bar, indicating its probability.)
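As a minimal sketch of that kind of segmented-bar visualization (in Python, with invented outcome names and probabilities):

```python
# Rough sketch, not anyone's actual tool: render a probability distribution
# as one bar split into segments, where each segment's share of the bar's
# width matches its probability. The outcomes below are made up.

def probability_bar(outcomes, width=60):
    """Render a {name: probability} dict as a segmented text bar."""
    bar, legend = [], []
    for symbol, (name, p) in zip("#=*+.", outcomes.items()):
        segment = max(1, round(p * width))
        bar.append(symbol * segment)
        legend.append(f"{symbol} {name}: {p:.0%}")
    return "|" + "".join(bar) + "|  " + ", ".join(legend)

print(probability_bar({"offer accepted": 0.6, "counter-offer": 0.3, "rejected": 0.1}))
```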
One of my friends does keep explicit probabilities on unknowns that have a big effect on his life. I’m not sure what all he uses them for. Sometimes it gets… interesting, when I know his value for an unknown that will also affect one of my decisions, and I know he has access to more information than I do, but I’m not sure whether I trust his calibration. I’m still not really sure what the correct way to handle this is.
It’s a gold standard—true Bayesian reasoning is actually pretty much impossible in practice. But you can get a lot of mileage out of the simple approximation: “What’s my current belief, how unlikely is this evidence, oh hey I should/shouldn’t change my mind now.”
Putting numbers on things forces you to be more objective about the evidence, and also lets you catch things like “Wait, this evidence is pretty good—it’s got an odds ratio of a hundred to one—but my prior should be so low that I still shouldn’t believe it.”
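As a worked, entirely made-up version of that last point, using Bayes’ rule in odds form (posterior odds = prior odds × likelihood ratio):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# The specific numbers are illustrative, not taken from the comment above.

def posterior_probability(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)  # convert back to a probability

prior = 1e-6                                   # a very low prior
posterior = posterior_probability(prior, 100)  # evidence with a 100:1 odds ratio
print(f"prior {prior:.6%} -> posterior {posterior:.4%}")
# Roughly: prior 0.0001% -> posterior 0.01%, still far too low to believe.
```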
With actual symbols and specific numbers? No. But I do visualize approximate graphs of probability distributions over configuration spaces and stuff like that, and I tend to use the related but simpler theorems in Fermi calculations.
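The comment doesn’t say which theorems it means; one hedged illustration is a Fermi-style estimate that simply chains the product rule over rough, assumed-independent factors (all numbers invented):

```python
# One possible reading of using "simpler theorems" in a Fermi calculation:
# chain the product rule over rough, assumed-independent factors.
# Every factor below is an invented placeholder.

factors = {
    "plan survives contact with reality": 0.5,
    "I actually follow through": 0.3,
    "nothing external blocks it": 0.7,
}

p = 1.0
for name, estimate in factors.items():
    p *= estimate  # product rule, assuming (roughly) independent factors
    print(f"after '{name}': {p:.3f}")

print(f"Fermi-style overall estimate: about {p:.0%}")
```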