Hello, I would like to ask whether there is any summary or discussion of necessary/sufficient criteria for when a reason for something (a belief, an action, a goal, …) counts as sufficient. If not, I would like to discuss it here.
I'm sure there are people here who could give a better answer. My take, from the rationalist/Bayesian perspective, is that you assign a probability to each belief based on some rationale, which may be subjective and involve a lot of estimation.
The important part is that when new relevant evidence about that belief is brought to your attention, you "update," in the Bayesian sense of asking: "given the new evidence B and the prior probability of my belief A, what is the probability of A given B?"
But in practice that’s really hard to do because we have all of these crazy biases. Scott’s recent blog post was good on this point.
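For concreteness, here is a minimal sketch of that update in Python, with made-up numbers: start from a prior P(A), say how likely the evidence B is if A is true and if A is false, and apply Bayes' rule.

```python
# Minimal Bayes-rule update with illustrative, made-up numbers.
prior_A = 0.30            # P(A): how plausible the belief was before the evidence
p_B_given_A = 0.80        # P(B | A): how likely this evidence is if A is true
p_B_given_not_A = 0.20    # P(B | not A): how likely it is if A is false

# Total probability of seeing evidence B at all.
p_B = p_B_given_A * prior_A + p_B_given_not_A * (1 - prior_A)

# Bayes' rule: P(A | B) = P(B | A) * P(A) / P(B)
posterior_A = p_B_given_A * prior_A / p_B
print(f"P(A | B) = {posterior_A:.2f}")  # ~0.63: the evidence shifted the belief upward
```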
OK, thanks, but then one of my additional questions is: what is a reasonable threshold for the probability of my belief A, given all available evidence B1, B2, …, Bn? And why?
Are you suggesting that beliefs must be binary, either believed or not? E.g., if the probability of truth is over 50% then you believe it, and you don't believe it if it's under 50%? Dispense with the binary and use the probability as your degree of belief. You can act with degrees of uncertainty. Hedge your bets, for example.
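As an illustration of acting on a degree of belief rather than a yes/no verdict, here is a hedged sketch with hypothetical numbers: compare the expected costs of two actions given a 0.3 probability of rain, instead of first deciding whether you "believe" it will rain.

```python
# Acting under uncertainty: pick the option with the lower expected cost,
# using the probability directly instead of a binary "believe / don't believe".
p_rain = 0.30                      # degree of belief that it will rain (illustrative)

cost_carry_umbrella = 1.0          # small fixed nuisance of carrying it
cost_get_soaked = 10.0             # cost if it rains and you left it at home

expected_cost_take = cost_carry_umbrella
expected_cost_leave = p_rain * cost_get_soaked   # 0.30 * 10.0 = 3.0

best = "take umbrella" if expected_cost_take < expected_cost_leave else "leave it"
print(best)  # "take umbrella": a 30% degree of belief is enough to act on, given the stakes
```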
OK, thanks. This is very interesting, and correct in theory (I guess), and I would be very glad to apply it. But before taking my first steps on my own by trial and error, I would like to know some best practices, if any are available. I strongly doubt this is common practice among the general population, and I slightly doubt it is common practice even for a typical attendee of this forum, but I would still like to make it my usual habit.
And the biggest issue I see is how to talk to ordinary people around me about everyday uncertain things in probabilistic terms when they treat those things as if they were certain. Should I try to gradually and unobtrusively change their paradigm? Or should I use a double language: probabilistic on the inside, confident on the outside?
(I am aware that these questions might be difficult, and I don’t necessarily expect direct answers.)
I'm not sure what to say besides "Bayesian thinking" here. This doesn't necessarily mean plugging in numbers (although that can help), but rather developing habits like not neglecting priors or base rates, considering how consistent the supposed evidence is with the negation of the hypothesis, and so forth. I think normal, non-rationalist people reason in a Bayesian way at least some of the time. People mostly don't object to good epistemology; they just use a lot of bad epistemology too. Normal people understand words like "likely" or "uncertain". These are not alien concepts, just underutilized.
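One classic place where these habits matter is base-rate neglect: even a fairly accurate test for a rare condition mostly produces false positives. A short sketch with hypothetical numbers:

```python
# Base-rate example: a test that is right 95% of the time, for a condition
# affecting 1% of people. Ignoring the 1% prior badly overstates P(condition | positive).
base_rate = 0.01                 # prior: P(condition)
p_pos_given_cond = 0.95          # sensitivity: P(positive | condition)
p_pos_given_no_cond = 0.05       # false-positive rate: P(positive | no condition)

p_pos = p_pos_given_cond * base_rate + p_pos_given_no_cond * (1 - base_rate)
posterior = p_pos_given_cond * base_rate / p_pos
print(f"P(condition | positive) = {posterior:.2f}")  # ~0.16, not 0.95
```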
I’m not sure what you mean by “threshold for the probability of belief in A.”
Say A is "I currently have a nose on my face." You could assign that .99 or .99999, and either expresses a lot of certainty that it's true; there's not really a threshold involved.
Say A is “It will snow in Denver on or before October 31st 2021.” Right now, I would assign that a .65 based on my history of living in Denver for 41 years (it seems like it usually does).
But I could go back and look at weather data and see how often that actually happens. Maybe it’s been 39 out of the last 41 years, in which case I should update. Or maybe there’s an El Niño-like weather pattern this year or something like that… so I would adjust up or down accordingly.
The idea being that, over time, by encountering evidence and learning to evaluate its quality, you would get closer to the "true probability" of whatever A is.
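To make the Denver example concrete, here is a hedged sketch using the rough numbers above (the 0.65 guess and the 39-out-of-41 record are just the figures from the example): treat the snow probability as a Beta-distributed estimate, fold in the historical record, and note that something like an El Niño forecast would be further evidence to fold in later.

```python
# Beta-binomial sketch: fold the historical record into a rough prior belief.
prior_mean = 0.65                      # initial guess: "it usually snows by Oct 31"
prior_weight = 10                      # treat that guess as worth ~10 years of data
a = prior_mean * prior_weight          # pseudo-count of "snow" years
b = (1 - prior_mean) * prior_weight    # pseudo-count of "no snow" years

snow_years, total_years = 39, 41       # what the weather records showed (per the example)
a += snow_years
b += total_years - snow_years

posterior_mean = a / (a + b)
print(f"Updated P(snow by Oct 31) = {posterior_mean:.2f}")  # ~0.89, up from 0.65
# Evidence like an El Niño pattern this year would nudge this up or down again.
```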
Maybe you're really asking about how certain kinds of evidence should change the probability of a belief being true? That is, how much to update based on the evidence presented?