The process of choosing a probability is not quite that simple. You’re not just making a Boolean decision about whether you know enough to answer; you’re actually taking the time to distinguish between 10 different amounts of confidence (10%, 20%, 30%, etc.), and then making ten more tiny distinctions (30%, 31%, 32%, for instance)… at least, that’s the way that I do it. (That’s more efficient than making enough distinctions to choose between 100 different options all at once.) When you are wondering exactly how likely you are to know something in order to choose a percentage, that’s when you have to start analyzing things. In order to answer the question, my thought process looked like this:
Bayes. I have to remember who that is. Okay, that’s the guy that came up with Bayesian probability. (This was instant, but that doesn’t mean it took zero mental work.)
Do I have his birthday in here? Nothing comes to mind.
Digs further: Do I have any reason to have read about his birthday at any point? No. Do I remember seeing a page about him? I can’t remember anything I read about his birthday.
Considers whether I should just go “I don’t know” and put a random year with a 0% probability. Decides that this would be copping out and I should try to actually figure this out.
When was Bayesian probability invented? Let’s see… at what point in history would that have occurred?
Tries to brainstorm events that may have required Bayesian probability, or that would have suggested it didn’t exist yet.
Tries to remember the time periods when these events happened.
Defines a vague section of time in history.
Considers whether there might be some method of double-checking it.
Considers the meaning of “within 20 years either way” and what that means for the probability that I’m right.
Figures out where in my vague section of time the 40 year range should be fit.
Figures out which year is in the middle of the 40 year range and types it in.
Considers how many years Bayes would likely have had to live before giving his theorem to the world, and adjusts the year accordingly.
Considers whether it was at all possible for Bayesian probability to have existed before or after each event.
Where possible, considers how likely it was that Bayesian probability existed before/after each event.
Calculates how many 40-year ranges there are in the vague section of time bounded by the events outside of which Bayes could not have been born.
Calculates the chance that I chose the correct 40-year section out of all the possible sections, assuming each is equally likely.
Compares this to my probabilities for how likely it was that Bayes’s theorem existed before and after certain events.
Adjusts my probability figure to take all that into account. (A rough sketch of this arithmetic, with made-up numbers, follows the list.)
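To make those last few steps concrete, here is a rough sketch of the arithmetic in code. Every number in it (the plausible range of birth years, the window weights, the chosen window) is an assumption made up purely for illustration, not the actual figure I had in my head:

```python
import math

# Assumed "vague section of time" in which Bayes could plausibly have been born,
# bounded by the events considered above (illustrative numbers only).
earliest, latest = 1600, 1760

window = 40  # "within 20 years either way" = a 40-year window

# How many 40-year ranges fit in that stretch of history?
num_windows = math.ceil((latest - earliest) / window)   # -> 4

# Chance of having picked the right window if every window is equally likely.
base_rate = 1.0 / num_windows                            # -> 0.25

# Reweight the windows by how plausible each seemed, given the events that
# appeared to require (or preclude) Bayesian probability. Hypothetical weights:
weights = [0.5, 1.0, 2.0, 0.5]
adjusted = [w / sum(weights) for w in weights]

chosen = 2  # index of the window my middle-of-the-range guess fell into
print(f"{num_windows} windows, base rate {base_rate:.0%}, "
      f"adjusted probability {adjusted[chosen]:.0%}")
```

The point is only that “pick a window at equal odds, then reweight by what the surrounding events allow” is a small calculation rather than an instant intuition.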
My answer to this question took at least twenty steps, and that doesn’t even count all the steps I went through for each event, nor does it count all the sub-steps I went through for things that I sort of hand-waved, like “Adjusts my probability figure to take all that into account”.
If you think figuring stuff out is instant, you underestimate the number of steps your brain goes through in order to figure things out. I highly recommend meditation to improve your meta-cognition. Meta-cognition is awesome.
you’re actually taking the time to distinguish between 10 different amounts of confidence (10%, 20%, 30%, etc.), and then making ten more tiny distinctions (30%, 31%, 32%, for instance)… at least, that’s the way that I do it
The straightforward interpretation of your words evaluates as a falsity, as you can’t estimate informal beliefs to within 1%.
I’d put it more in terms of decibels of log-odds than percentages of probability. Telling 98% from 99% (i.e. +17 dB from +20 dB) sounds easier to me than telling 50% from 56% (i.e. 0 dB from +1 dB).
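For anyone who wants to reproduce those decibel figures: the conversion in use is 10 * log10 of the odds, p / (1 - p). A minimal sketch (the helper name is just for illustration):

```python
import math

def prob_to_decibels(p: float) -> float:
    """Log-odds expressed in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

for p in (0.50, 0.56, 0.98, 0.99):
    print(f"{p:.0%} -> {prob_to_decibels(p):+.1f} dB")
# 50% -> +0.0 dB, 56% -> +1.0 dB, 98% -> +16.9 dB, 99% -> +20.0 dB
```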
Well, you can, but it would be a waste of time.
No, I’m pretty certain you can’t. You can’t even formulate truth conditions for the correctness of such an evaluation. Only in very special circumstances would getting to that point be plausible (when a conclusion is mostly determined by data received in an explicit form, or when you work with a formalizable specification of a situation, as in probability theory problems; that is not what I meant by “informal beliefs”).
If you think figuring stuff out is instant, you underestimate the number of steps your brain goes through in order to figure things out.
(I was commenting on a skill/habit that might be useful in the situations where you don’t/can’t make the effort of explicitly reasoning about things. Don’t fight the hypothetical.)
Is it your position that there is a thinking skill that is actually accurate for figuring stuff out without thinking about it?
I expect you can improve accuracy in the sense of improving calibration, by reducing estimated precision and avoiding unwarranted overconfidence, even when you are not considering questions in detail, if your intuitive estimation has an overconfidence problem, which seems to be common (most annoyingly in the form of “The solution is S!” for some promptly confabulated arbitrary S, when quantifying uncertainty isn’t even on the agenda).
(I feel the language of there being “positions” has epistemically unhealthy connotations of encouraging status quo bias with respect to beliefs, although it’s clear what you mean.)