I think this post makes an excellent point, and brings to light the aspect of Bayesianism that always made me uncomfortable.
Everyone knows we are not really rational agents; we do not compute terribly fast or accurately (as Morendil states), we are often unaware of our underlying motivations and assumptions, and even those we know about are often fuzzy, contradictory and idealistic.
As such, I think we have different ways of reasoning about things, making decisions, assigning preferences, holding and overcoming inconsistencies, etc. While it is certainly useful to have a science of quantitative rationality, I doubt we think that way at all… and if we tried, we would quickly run into the qualitative, irrational ramparts of our minds.
Perhaps a Fuzzy Bayesianism would be handy: something that can handle uncertainty, ambivalence and apathy in any of its objects. Something where we don’t need to put in numbers where numbers would be a lie.
Doing research in biology, I can assure you that the more decimal places of precision I see in a reported figure, the more I doubt its reliability.
If you are envisioning some sort of approximation of Bayesian reasoning, perhaps one dealing with an ordinal set of probabilities, a framework that is useful in everyday circumstances, I would love to see it proposed, tested, and refined.
It would have to include a heuristic for judging the importance and reliability of observations, along with general procedures for updating beliefs in light of those observations, weighted by their reliability.
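To make the sort of thing I have in mind a little more concrete, here is a minimal toy sketch in Python (purely my own illustration, not any established framework): beliefs live on a small ordinal scale rather than as precise numbers, and an observation nudges them up or down by a step or two depending on how reliable it seems.

```python
# Toy ordinal-belief updater: a sketch of "fuzzy" updating, not real Bayes.
LEVELS = ["very unlikely", "unlikely", "uncertain", "likely", "very likely"]

def update(belief: str, supports: bool, reliability: str) -> str:
    """Shift a qualitative belief along the ordinal scale.

    A 'strong' observation moves it two steps, a 'weak' one a single step;
    supporting evidence moves it up, contradicting evidence moves it down.
    """
    step = 2 if reliability == "strong" else 1
    i = LEVELS.index(belief) + (step if supports else -step)
    return LEVELS[max(0, min(i, len(LEVELS) - 1))]

# Example: start uncertain, take in a weakly reliable supporting report,
# then a strongly reliable contradicting one.
belief = "uncertain"
belief = update(belief, supports=True, reliability="weak")    # -> "likely"
belief = update(belief, supports=False, reliability="strong") # -> "unlikely"
print(belief)
```

Obviously this throws away most of what makes Bayesian updating principled (no priors or likelihoods in any quantitative sense); it is only meant to show the flavour of an ordinal, reliability-weighted scheme.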
Was such a thing discussed on LW?
Let me be the first to say I like your username, though I wonder if you’ll regret it occasionally...
P.S. Welcome to Less Wrong!
Thank you, and thank you for the link; it didn’t occur to me to check for such a topic.