People who post probability estimates of anything should explain in detail how they arrived at them. Otherwise it shouldn’t be called a probability estimate but pulling it out of your ass.
Seriously, stuff like “1% FAI success by 2100”? When there is no clear definition of AI in sight? Just stop.
All beliefs are probability estimates, although it can be hard to trace how a particular belief got to the degree of confidence it’s at. While it might be a nice norm to have in a perfect world, I think it’s unreasonable to demand that every time someone expresses how confident or unconfident they are in a belief, they also clarify the entire precise history of that belief’s presence in their mind.
Apologies for the curmudgeonliness, but it really bugs me when people say things like this. The actual version of this statement that is true is

All coherent actions can be modeled as arising from beliefs that correspond to probability estimates

which is different and much weaker, as now we can argue about how important coherence is relative to other desiderata. One such desideratum is correspondence to reality, which I believe is Locaha’s point above. Personally, I would much rather have incoherent beliefs that correspond to reality than coherent beliefs that do not correspond to reality.
I don’t think this belief has much correspondence to reality.
If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics
Whenever I post a probability estimate, it is solely for the purpose of making my position more clear, not as something anyone should use to actually update their beliefs. You should always consider probability estimates as rough information about how the person who made the estimate thinks, not as a factual bit of information about the prediction itself.
I don’t think you make anything clearer by translating your intuition’s “unlikely” to N% while my “unlikely” is M%, where M != N. You just create a false impression of having done a calculation (which, unlike intuition, can be confirmed).
Suppose passive_fist translates “unlikely” as 2% and Locaha translates “unlikely” as 12%. This could mean either of two things (or some combination of them). (1) passive_fist applies the word “unlikely” to things that feel more unlikely, corresponding to lower probability estimates when forced to quantify. (2) Both actually think much the same about the event in question, as shown by their use of the same word, but they have quite different processes (at least one of them very inaccurate) for translating those thoughts into numbers.
In case 1, quantifying helps to clarify that the two people involved mean quite different things by “unlikely”. There may be a lot of fuzziness about the numbers, but once we have them we can see that passive_fist will likely be much more surprised if something s/he calls “unlikely” happens, than Locaha will be if something s/he calls “unlikely” happens.
In case 2, quantifying just adds confusion and error.
I would expect that (especially for analytical quantitative types like most of LW’s readership) the truth is something like this. We think, mostly, in fuzzy terms that don’t correspond directly either to numbers or to words. There will be some region of subjective likelihood-feeling space that corresponds (e.g.) to the number 2% or 12%. There will be some region that corresponds (e.g.) to the word “unlikely”. These correspondences will all work differently for different people, but (a) there will generally be more consistency between one person’s “10%” and another’s than between one person’s “unlikely” and another’s, and (b) the finer-grained information you get by asking for probability estimates does have some value, provided you’ve wit enough not to imagine that everything expressed numerically is known accurately.
[EDITED to fix formatting screwup.]
Plus, some people here use stuff like PredictionBook to check whether the intuition they call “10%” is actually correct 10% of the time.
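To make that concrete, here is a minimal sketch of such a check. The list of predictions is made up for illustration; in practice you would use your own recorded predictions, however you happen to store or export them.

```python
# Minimal calibration check (illustrative data): group predictions by the
# probability that was stated, then compare the stated probability with how
# often those predictions actually came true.
from collections import defaultdict

# Hypothetical record of (stated probability, did it happen?) pairs.
predictions = [
    (0.1, False), (0.1, False), (0.1, True), (0.1, False),
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
]

buckets = defaultdict(list)
for stated_p, came_true in predictions:
    buckets[stated_p].append(came_true)

for stated_p, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated_p:.0%}: happened {observed:.0%} "
          f"of the time ({len(outcomes)} predictions)")
```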
Probabilities are useful for being precise about the claims that you are making. There’s no reason why one shouldn’t be precise about the claim one is making even when one doesn’t use a formal method to arrive at it.
If you don’t use a precise method to arrive at your claim, you have no business making a precise claim. Remember significant figures from high school chemistry? Same principle.
I think this is an error. (And so are “significant figures” as commonly used.) 2.4 ± 2 and 2.0 ± 2 are quite different estimates even though you wouldn’t (according to conventional wisdom) be justified in giving more than one “significant figure” for either.
Using the number of digits you quote to indicate how accurately you think you know the figure, as well as to say what the number is, is a hack. It’s a convenient hack sometimes, but that’s all it is. Everyone knows not to round intermediate results even when starting with low-precision numbers. Well, your final result might be used by someone else as an intermediate result in some bigger calculation.
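For what it’s worth, here is a rough sketch (with made-up numbers) of what carrying the uncertainty explicitly looks like, as opposed to encoding it in the digit count. The quadrature rule assumes independent errors on a simple sum.

```python
import math

def add(a, b):
    """Add two (value, uncertainty) pairs, assuming independent errors."""
    (va, ua), (vb, ub) = a, b
    return (va + vb, math.sqrt(ua ** 2 + ub ** 2))

x = (2.4, 2.0)   # "2.4 +- 2" -- a different estimate from "2.0 +- 2"
y = (3.7, 0.1)   # another made-up measurement

value, unc = add(x, y)
print(f"{value:.1f} +- {unc:.1f}")      # 6.1 +- 2.0

# Rounding the intermediate 2.4 to one significant figure ("2") before
# reusing it silently shifts the answer, which is why you keep the full
# value and only round when reporting.
value, unc = add((2.0, 2.0), y)
print(f"{value:.1f} +- {unc:.1f}")      # 5.7 +- 2.0
```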
The same goes for probabilities. It is very important to know when your estimate of a probability is very inaccurate—but that’s no reason to refuse to estimate an actual probability. Even if you just pulled it out of your arse: doing that makes it a very unreliable probability estimate but it’s still a probability estimate.
I won’t deny that significant figures are a crap implementation of the principle I’m talking about. But you have to propagate the uncertainty and include it, in some way, in your final answer, either numerically or via some explanation that might let me figure out how precise your answer is.
Don’t say “1% probability of FAI success by 2100.” Say “0.01-10% probability of FAI success by 2100, based on XYZ.” Or if there’s no numerical process behind it that can support even a range like that, just say “FAI success by 2100 seems unlikely.”
Agreed. Though in the latter case you might still do best to give numbers: “Somewhere around 1%, but this is a wild guess so don’t take it too seriously.” This is not the same statement as the corresponding one with 2% instead of 1%, even though both might be reasonably accurately paraphrased as “unlikely” or even “very unlikely”.
That assumes that someone isn’t calibrated. If someone calibrates his intuition via frequent use of PredictionBook and by always thinking in terms of probability, he might be able to make precise claims without following a precise method.
If someone claimed a “1.21% chance of FAI success by 2100” I would agree with you that the person didn’t learn the lesson about significant figures from high school chemistry. I don’t have that issue with someone claiming a 1% chance.
If you want to get calibrated it’s also useful to start putting numbers on a lot of the likelihoods that you think about, even if the precision is sometimes too high. It allows you to be wrong, and that’s good for learning.
I think it’s likely that calibration is domain-specific, so I’m not sure I buy this unless the calibration has occurred in the same domain, which is rare/impossible for the domains we’re talking about.
I think you can argue that the probability is inherently unknowable but I don’t see how a detailed process is much better than an intuitive process.
It’s very useful to have the mental ability to distinguish between 0.01, 0.001 and 0.0001 when it comes to thinking about XRisk events. I don’t think it’s good practice to call all of those events “unlikely” and avoid making semantic distinctions between them.
But how do you arrive at them? Intuition doesn’t deal with 0.01 and 0.00001. Intuition deals with vague notions of likely and unlikely, which also change depending on what you ate for lunch and the phase of the moon. IOW, your intuition is useless to me unless I can confirm it myself. (But then it’s not intuition anymore.)
I think there are plenty of cases where I can give you an intuitive answer that won’t change from 0.01 to 0.00001 depending on what I ate for lunch.
The chance that I die in the next year is higher than 0.00001 but lower than 0.01.
If you don’t have an intuition that allows you to do so, I think it’s because you don’t have enough exposure to people making distinctions between 0.01 and 0.00001.
If there’s a 0.01 chance that something happens tomorrow, then if everything stays the same you’d expect that thing to happen about three or four times this year, whereas if it’s 0.00001 you’d be quite surprised if it ever happens (EDIT: during your lifetime, assuming no cryonics/antiagathics/uploads). (Of course with stuff like x-risk intuition will be much less reliable.)
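The rough numbers behind that intuition, assuming the daily probabilities are independent and an (assumed) 80-year horizon:

```python
# Expected occurrences per year at a 0.01-per-day chance.
print(365 * 0.01)                  # ~3.65 per year

# At 0.00001 per day, over an assumed 80-year lifetime.
p, days = 0.00001, 365 * 80
print(days * p)                    # ~0.29 expected occurrences
print(1 - (1 - p) ** days)         # ~0.25 chance it ever happens at all
```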
And that’s a good assumption, since by my estimate, 99.9537% of people are not calibrated.
Part of LessWrong’s mission is moving toward a world where more people are calibrated. I don’t think it’s helpful to declare calibration a lost cause.
Shalizi had a nice post about that.
But the “50% probability of Situation A (2% probability of FAI in 100 years) and 50% probability of Situation B (0% probability of FAI in 100 years)” is much more informative to the reader than “1% probability of FAI in 100 years.” It exposes more about which parts of the estimate are pulled out of the writer’s ass. If I know something the writer doesn’t about any one of these component probabilities, I can update my own beliefs, or discuss the estimate, more usefully this way than if I’m just given a flat “1%.”
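As a sketch of what the decomposition buys the reader (the 80% figure below is just an illustrative alternative weight a reader might hold, not anything from the thread):

```python
# The writer's decomposed estimate.
p_A, p_fai_given_A = 0.5, 0.02   # Situation A: 2% chance of FAI in 100 years
p_B, p_fai_given_B = 0.5, 0.00   # Situation B: 0% chance of FAI in 100 years
print(p_A * p_fai_given_A + p_B * p_fai_given_B)   # 0.01 -- the flat "1%"

# A reader who thinks Situation A is more likely (say 80%) can update just
# that component and recompute, which a bare "1%" would not let them do.
print(0.8 * p_fai_given_A + 0.2 * p_fai_given_B)   # 0.016
```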
Anna and Steve had a nice post about that.
How is the belief of some random person X in some vaguely defined event many years into the future useful for anything but research into person X’s state of mind? Even if it’s specified to 1000 significant figures?
If you are reading a person’s text, you presumably care about that person’s state of mind and what they believe. If you don’t, why do you read the text in the first place?
I do think there’s a difference between someone thinking an event is unlikely with p=0.2, p=0.01 or p=0.0001. It’s worthwhile to put a number on the belief to communicate the likelihood.
If people frequently provide likelihoods you can also aggregate the data.
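One simple way such aggregation could work, purely as a sketch (averaging in log-odds space is one common choice, not necessarily what’s meant here):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

estimates = [0.2, 0.01, 0.0001]   # hypothetical likelihoods from three people
pooled = inv_logit(sum(logit(p) for p in estimates) / len(estimates))
print(pooled)   # ~0.006, vs. ~0.07 for the raw arithmetic mean
```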
I recommend operationalizing this by recommending that people ask “Why do you think so?” when they see a probability estimate.