He’d probably even concede that it is more likely that Zeus exists than that a completely random other god, with no myths about them, exists.
Given that someone like Richard Kennaway, who’s smart and exposed to LW thinking (>10000 karma), doesn’t immediately find the point I’m making obvious, you are very optimistic.
People usually don’t change central beliefs about ontology in an hour after reading a convincing post on a forum. An hour might be enough to change the language you use, but it’s not enough to give you a new way to relate to reality.
But in reality, I, and most likely you as well, discount probabilities that are very small instead of calculating them out and letting them change our actions.
The probability that an asteroid destroys humanity in the next decade is relatively small. On the other hand, it’s still useful for our society to invest more resources into telescopes so that all near-Earth objects are covered.
The same goes for Yellowstone destroying our civilisation.
Our society is quite poor at dealing with low-probability, high-impact events. When it comes to things like Yellowstone, the instinctual response of some people is to say: “Extraordinary claims require extraordinary evidence.”
That kind of thinking is very dangerous given that human technology gets more and more powerful as time goes on.
I would say the probabilities of a Yellowstone eruption or a meteor impact are both vastly higher than something like the existence of a specific deity. They’re in the realm of possibilities that are worth thinking about. But there are tons of other possible civilization-ending disasters that we don’t, and shouldn’t, consider, because they have much less evidence for them and are thus so improbable that they are not worth considering. I do not believe we as humans can function without discounting very small probabilities.
But yeah, I’m generally rather optimistic about things. Reading LW has helped me with that: before, I did not know why various things seemed to be so wrong; now I have an idea, and I know there are people out there who also recognize these things and can work to fix them.
As for the note about changing central beliefs, I agree with that. What I meant to say was that the central beliefs of this hypothetical skeptic are not actually different from yours in this particular regard; he just uses different terminology. That is, his thinking goes ‘This has little evidence for it and is a very strong claim that contradicts a lot of the evidence we have’ → ‘This is very unlikely to be true’ → ‘This is not true’, and what happens in his brain is that he figures it’s untrue and does not consider it any further. I would assume that your thinking goes along the same lines: ‘This has little evidence for it and is a very strong claim that contradicts a lot of the evidence we have’ → ‘This is very unlikely to be true’, skipping that last step; but what still happens in your brain is that you figure it is probably untrue and don’t consider it any further.
And both of you are most likely willing to reconsider should additional evidence present itself.
I would say the probabilities of a Yellowstone eruption or a meteor impact are both vastly higher than something like the existence of a specific deity. They’re in the realm of possibilities that are worth thinking about.
Careful there. Our intuition of what’s in the “realm of possibilities that are worth thinking about” doesn’t correspond to any particular probability; rather, it is based on whether the thing is possible according to our current model of the world, and it doesn’t take into account how likely that model is to be wrong.
If I understand you correctly, then I agree. However, to me it seems clear that human beings discount probabilities that seem to them to be very small, and it also seems to me that we must do so, because calculating them all out and letting them shift our actions by tiny amounts is impossible.
The question of where we should set the cut-off point is a more difficult one. It is usually set too high, I think. But if, after actual consideration, something seems to be extremely unlikely (as in, somewhere along the lines of 10^{-18} or whatever), then we treat it as if it were outright false, regardless of whether we say it is false or say that it is simply very unlikely.
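To make the cut-off idea concrete, here is a minimal Python sketch; the cutoff and the payoff figures are made-up numbers for illustration, not a claim about what the right values are:

```python
# A minimal sketch of discounting tiny probabilities in a decision.
# The cutoff and the payoffs are illustrative assumptions only.

CUTOFF = 1e-18  # the (debatable) threshold discussed above

def expected_value(outcomes):
    """Sum p * payoff over (probability, payoff) pairs, treating
    anything below CUTOFF as if its probability were zero."""
    return sum(p * v for p, v in outcomes if p >= CUTOFF)

# Even an astronomically large payoff at a 1e-20 probability
# contributes nothing once it falls below the cutoff:
print(expected_value([(0.3, 100.0), (1e-20, 1e12)]))  # 30.0
```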
And to me, this does not seem to be a problem so long as, when new evidence comes up, we still update, and then start considering the possibilities that now seem sufficiently probable.
Of course, there is a danger in that it is difficult for a successive series of small new pieces of evidence, pointing towards a previously very unlikely conclusion, to overcome our resistance to considering very unlikely conclusions. This is precisely because I don’t believe we can actually use numbers to update all the possibilities, which are basically infinite in number. It is hard for me to imagine a slow, successive series of tiny nuggets of evidence that would gradually convince me that Zeus actually exists. I could read several thousand different myths about Zeus, and it still wouldn’t convince me. Something large enough to give the probability a single major push, forcing me to consider the hypothesis more thoroughly and privilege it in the hypothesis-space, seems the much more likely route: say, Zeus speaking to me and showing off some of his powers. This is admittedly a weakness, but at least it is an admitted weakness. I haven’t found a way to circumvent it yet, but I can at least try to mitigate it by consciously paying more attention than I intuitively would to small but not infinitesimal probabilities.
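To put rough numbers on that (a hedged sketch; the prior and the likelihood ratios are invented purely for illustration), Bayes’ theorem in odds form shows why thousands of weak nuggets barely move such a prior while one dramatic demonstration can:

```python
# A hedged sketch of the evidence-accumulation point above.  All
# figures (prior, likelihood ratios) are invented for illustration.

def update(prior, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1.0 - prior)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

p = 1e-18                 # illustrative prior that Zeus exists
for _ in range(5000):     # thousands of myths, each a near-worthless
    p = update(p, 1.001)  # nugget of evidence (LR barely above 1)
print(p)                  # ~1.5e-16: still utterly negligible

p = update(1e-18, 1e15)   # one major push: Zeus shows up in person
print(p)                  # ~1e-3: now worth privileging the hypothesis
```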
Anyway, back to the earlier point: What I’m saying is that whether you say “X is untrue” or “X is extremely unlikely”, when considering the evidence you have for and against X, it is very possible that what happens in your brain when thinking about X is the same thing. The hypothetical skeptic who does not know to use the terminology of probabilities and likelihoods will simply call things he finds extremely unlikely ‘untrue’. And then, when a person who is unused to this sort of terminology hears the words ‘X is very unlikely’, he takes that to mean ‘X is not unlikely enough to be considered untrue, but it is still quite unlikely, which means X is quite possible, even if it is not the likeliest of possibilities’. And here a misunderstanding happens: I meant to say that X is so unlikely that it is not worth considering, but he takes it as me saying that X is unlikely, yet not so unlikely as to be dismissed.
Of course, there are also people who actually believe in something being true or untrue, meaning their probability estimate could not possibly be altered by any evidence. But in the case of most beliefs, and most people, I think that when they say ‘true’ or ‘false’, they mean ‘extremely likely’ or ‘extremely unlikely’.
What I’m saying is that whether you say “X is untrue” or “X is extremely unlikely”, when considering the evidence you have for and against X, it is very possible that what happens in your brain when thinking about X is the same thing.
Disagree. Most people use “unlikely” for something that fits their model but is unlikely, e.g., winning the lottery, having black come up ten times in a row in a game of roulette, or two bullets colliding in mid-air. “Untrue” is used for something that one’s model says is impossible, e.g., Zeus or ghosts existing.
I am confused now. Did you properly read my post? What you say here is ‘I disagree, what you said is correct.’
To try and restate myself, most people use ‘unlikely’ like you said, but some, many of whom frequent this site, use it for ‘so unlikely it is as good as impossible’, and this difference can cause communication issues.
My point is that in common usage (in other words, from the inside) the distinction between “unlikely” and “impossible” doesn’t correspond to any probability. In fact, there are “unlikely” events that have a lower probability than some “impossible” events.
Assuming you mean that things you believe are merely ‘unlikely’ can actually, more objectively, be less likely than things you believe are outright ‘impossible’, then I agree.
What I mean is that a conjunction of possible events will be perceived as unlikely, even if enough events are conjoined to put the probability below what the threshold for “impossible” should be.
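As an illustrative calculation (the figures are arbitrary, chosen only to cross the cutoff mentioned earlier):

```python
# Illustrative arithmetic for the conjunction point: enough merely
# "possible" events conjoined fall below any "impossible" threshold.

p_single = 0.5      # one individually unremarkable event, e.g. a coin flip
n_events = 70       # number of independent events conjoined

p_joint = p_single ** n_events
print(p_joint)      # ~8.5e-22, below the 1e-18 cutoff, yet the
                    # conjunction still "feels" merely unlikely
```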
True. However, there is no such thing as ‘impossible’, or probability 0. And while in common language people do use ‘impossible’ for what is merely ‘very improbable’, there is no accepted, specific threshold there. Your earlier point, that people see a fake distinction between things that seem possible-but-unlikely in their model and things that seem impossible in their model, contributes to that. I prefer to use ‘very improbable’ for things that are very improbable and ‘unlikely’ for things that are merely unlikely, but it is important to keep in mind that most people do not use the same words I do, and to communicate accurately I need to remember that.
Okay, I just typed that and then went back and looked, and it seems we’ve talked in a circle, which is a good indication that there is no disagreement in this conversation. I think I’ll leave it here, unless you believe otherwise.