You see some difference between Lesswrongians and smug skeptics???
Smug skeptics don’t say things like “The fact that there are myths about Zeus is evidence that Zeus exists”.
In common parlance, “no evidence for” means “no good evidence for”. Saying that myths are not evidence for Zeus is not being smug; it’s being able to comprehend English.
I could just as well complain about people saying “I constantly hear fallacies” by asking them if they hear fallacies when they are asleep, and if not, why they are being so smug about an obviously false statement.
I’m not saying that it’s necessary to say things like that to not be a smug skeptic. On the other hand it’s sufficient.
For a Bayesian there are no such things as good or bad evidence. “Good” and “bad” indicate approval and disapproval. There’s weak and strong evidence, but even weak evidence means that your belief in a statement should be higher than it would be without that evidence.
It looks to me to be rather clear that what is being said (“myths are not evidence for Zeus”) translates roughly to “myths are very weak evidence for Zeus, and so my beliefs are changed very little by them”. Is there still a real misunderstanding here?
You are making a mistake in reasoning if you don’t change your belief in response to that evidence. Your belief should change by orders of magnitude. A change from 10^{-18} to 10^{-15} is a strong change.
The central reason to believe that Zeus doesn’t exist is a weak prior.
Skeptics have the idea that someone has to prove something to them before they will believe it. In the Bayesian worldview you always have probabilities for your beliefs. Social obligations aren’t part of it. “Good” evidence means that someone fulfilled a social obligation to provide a certain amount of proof. It doesn’t refer to how strongly a Bayesian should update after being exposed to a piece of evidence.
There are very strong instincts for humans to either believe X is true or to believe X is false. It takes effort to think in terms of probabilities.
Where do those numbers come from?
In this case they come from me. Feel free to post your own numbers.
The point of choosing Zeus as an example is that it’s a claim that’s probably not going to mindkill anyone. That makes it easier to talk about the principles than using an example where the updating actually matters.
In other words, you made them up. Fictional evidence.
you did say (my emphasis)
Why should a myth about Zeus change anyone’s belief by “orders of magnitude”?
I’d buy it. Consider all the possible gods about whom no myths exist: I wouldn’t exactly call this line of argument rigorous, but it seems reasonable to say that there’s much stronger evidence for the existence of Baduhenna, a Germanic battle-goddess known only from Tacitus’ Annals, than for the existence of Gleep, a god of lint balls that collect under furniture whom I just made up.
Of course, there are some pretty steep diminishing returns here. A second known myth might be good for a doubling of probability or so—there are surprisingly many mythological figures that are very poorly known—but a dozen known myths wouldn’t be worth much more than that.
Is this a case where orders of magnitude aren’t so important and absolute numbers are? I’m not sure how to even assign probabilities here, but let’s say we assign Baduhenna 0.0001% chance of existing, and Gleep 0.00000000000001%. That makes Baduhenna several orders of magnitude more likely than Gleep, but she’s still down in the noise below which we can reliably reason. For all practical purposes, Baduhenna and Gleep have the same likelihood of existing. I.e. the possibility of Baduhenna makes no more or less impact on my choices or anything else I believe in than does the possibility of Gleep.
The US military budget is billions.
Nobody makes sacrifices to Baduhenna. You might spend a hundred dollars on sacrifices to Baduhenna to get a huge military advantage.
If you shut up and calculate, a 0.0001% chance of Baduhenna existing might be enough to change actions.
A lot of people vote in presidential elections when the chance of their vote turning the election is worse than 0.0001%. If the chance of turning an election through voting were 0.00000000000001%, nobody would go to vote.
There are probably various Xrisks with a 0.0001% chance of happening. Separating them from Xrisks with a 0.00000000000001% chance of happening is important.
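A toy expected-value sketch of why the 10^{-6} versus 10^{-14} distinction can matter for decisions; the stakes and cost below are made up purely for illustration:

```python
# Toy expected-value comparison; the dollar figures are invented for illustration only.
stake = 1e11   # assumed value of the outcome (swinging an election, a decisive military edge)
cost = 100     # assumed cost of acting (an afternoon at the polls, a sacrifice to Baduhenna)

for p in (1e-6, 1e-14):
    expected_value = p * stake
    print(p, expected_value, expected_value > cost)
# 1e-06 -> expected value ~100000: acting can be worth the cost
# 1e-14 -> expected value ~0.001: it clearly is not
```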
My point is that we can’t shut up and calculate with probabilities of 0.0001% because we can’t reliably measure or reason with probabilities that small in day-to-day life (absent certain very carefully measured scientific and engineering problems with extremely high precision; e.g. certain cross-sections in particle physics).
I know I assign very low probability to Baduhenna, but what probability do I assign? 0.0001%? 0.000001%? Less? I can’t tell you. There is a point at which we just say the probability is so close to zero as to be indistinguishable.
When you’re dealing with probabilities of specific events, be they XRisks or individual accidents, that have such low probability, the sensible course of action is to take general measures that improve your fitness against multiple risks, likely and unlikely. Otherwise the amount you invest in the highly salient 0.0001% chance events will take too much time away from the 10% events, and you’ll have decreased your fitness.
For example, you can imagine a very unlikely 0.0001% event in which a particular microbe mutates in a specific way and causes a pandemic. You could invest a lot of money in preventing that one microbe from becoming problematic. Or you could invest the same money in improving the science of medicine, the emergency response system, and general healthcare available to the population. The latter will help against all microbes and a lot more risks.
Do you vote in presidential elections? Do you wear a seat belt every time you drive a car, and would you also do so on vacation in a country without laws that force you to?
How do you know that will reduce and not increase the risk of a deadly bioengineered pandemic?
Yes, reasoning about low probability events is hard. You might not have the mental skills to reason in a decent manner about low probability events.
On the other hand, that doesn’t mean that reasoning about low probability events is inherently impossible.
Do you? You were unable or unwilling to say how you came up with 10^-18 and 10^-15 in the matter of Zeus. (And no, I am not inclined to take your coming up with numbers as evidence that you employed any reasonable method to do so.)
Intuition can be a reasonable method when you have enough relevant information in your head.
I’m good enough that I wouldn’t make the mistake of calling Baduhenna’s existence or Zeus’s existence a 10^{-6} event.
It is possible that I might have said 10^{-12} instead of 10^{-15} if I had been in a different mood on the day I wrote the post.
When we did Fermi estimates at the European Community Event in Berlin, there was a moment where we had to estimate the force that light from the sun exerts on earth. We had no good idea about how to do a Fermi estimate, so we settled for having Jonas, who thought he had read the number in the past but couldn’t remember it, write down an intuitive guess. He wrote 10^9 and the correct answer was 5.5 * 10^8.
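For what it’s worth, a minimal sketch of how that Fermi estimate could be done from half-remembered quantities (solar constant, Earth’s radius); it lands near the 5.5 * 10^8 N figure quoted above:

```python
# Rough Fermi estimate of the force sunlight exerts on Earth via radiation pressure.
solar_constant = 1.4e3    # W/m^2 at Earth's distance (half-remembered round number)
c = 3e8                   # speed of light, m/s
r_earth = 6.4e6           # Earth's radius, m

pressure = solar_constant / c            # radiation pressure assuming full absorption, N/m^2
cross_section = 3.14 * r_earth ** 2      # Earth's cross-sectional area, m^2
print(pressure * cross_section)          # ~6e8 N, the same order of magnitude as 5.5e8 N
```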
As a practical matter, telling the difference between 10^{-15} and 10^{-12} isn’t that important. On the other hand, reasoning about whether the chance that the Large Hadron Collider creates a black hole that destroys earth is 10^{-6} or 10^{-12} is important.
I think a 10^{-6} chance of creating a black hole that destroys the earth should be enough to avoid doing experiments like that. In that case I think the probability wasn’t 10^{-6}, so it was okay to run the experiment, but as technology grows more powerful we might have more experiments that actually do carry a 10^{-6} xrisk chance, and we should avoid running them.
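A crude way to see why the 10^{-6} versus 10^{-12} distinction matters here: treat “destroys earth” as simply killing everyone alive today (ignoring future generations and everything else of value) and compare expected fatalities:

```python
world_population = 8e9
for p in (1e-6, 1e-12):
    print(p, p * world_population)   # expected deaths under the stated (over-simplified) assumption
# 1e-06 -> ~8000 expected deaths: plausibly enough to forgo the experiment
# 1e-12 -> ~0.008 expected deaths: negligible
```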
I don’t know what this means. On the basis of what would you decide what’s “reasonable” and what’s not?
There is a time-honored and quite popular technique called pulling numbers out of your ass. Calling it “intuition” doesn’t make the numbers smell any better.
See “If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics” on Slate Star Codex, though I agree that a human’s intuition for probabilities well below 1e-9 is likely to be very unreliable (except for propositions in a reference class containing billions of very similar propositions, such as “John Doe will win the lottery this week and Jane Roe will win the lottery next week”).
The only thing that matters is making successful predictions. How they smell doesn’t. To know whether a method makes successful predictions, you calibrate the method against other data. That then gives you an idea about how accurate your predictions happen to be.
Depending on the purpose for which you need the numbers, different amounts of accuracy are good enough. I’m not making some Pascal’s mugging argument that people are supposed to care more about Zeus, where I would need to know the difference between 10^{-15} and 10^{-16}. I made an argument about how many orders of magnitude my beliefs should be swayed.
My current belief in the probability of Zeus is uncertain enough that I have no idea if it changed by orders of magnitude, and I am very surprised that you seem to think the probability is in a narrow enough range that claiming to have increased it by orders of magnitude becomes meaningful.
You can compute the likelihood ratio without knowing the absolute probability.
Being surprised is generally a sign that it’s useful to update a belief.
I would add that given my model of you it doesn’t surprise me that this surprises you.
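As an aside, the standard odds form of Bayes’ theorem makes this point directly; nothing here is specific to the thread:

P(H|E) / P(¬H|E) = [ P(E|H) / P(E|¬H) ] * [ P(H) / P(¬H) ]

The middle factor is the likelihood ratio, and it makes no reference to the prior P(H), so one can estimate how many orders of magnitude a piece of evidence should shift the odds while remaining very uncertain about the absolute probability.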
You can call it heuristics, if you want to...
No, I can’t. Heuristics are a kind of algorithm that provides not optimal but adequate results. “Adequate” here means “sufficient for a particular real-life purpose”.
I don’t see how proclaiming that the probability of Zeus existing is 10^-12 is a heuristic.
Intuition (or educated guesses like the ones referred to here) falls under the umbrella of heuristics.
In what way are you arguing that the number I gave for the existence of Zeus is insufficient for a particular real-life purpose?
Because the probability of there being a myth about Zeus, given that Zeus exists, is orders of magnitude higher than the probability of there being a myth about Zeus, given that he does not exist?
This seems obviously empirically false. Pick something that everyone agrees is made up: there are way more stories about Cthulhu than there are stories about any random person who happens to exist.
One of my kids more readily knows Paul Bunyan and John Henry than any US president. The fiction section of the library is substantially larger than the non-fiction. The probability that A exists, given that A is in a story, seems very, very small.
Given that the myths about Zeus attribute vast supernatural properties to him, and we now know better than to believe in any such stuff (we don’t need Zeus to explain thunder and lightning), the myths are evidence against his existence. For the ancient Greeks, of course, it was not so, but the question is being posed here and now.
Also, myths are generally told more of imaginary entities than real ones, not less. Myths are all that imaginary creatures have going for them. How many myths are there about Pope Francis? I expect there are some unfounded stories going around among the devout, but nothing on the scale of Greek mythology. So no, P(myths about Zeus|Zeus is real) is not larger, but smaller than P(myths about Zeus|Zeus is imaginary).
On the other hand, it is larger than P(myths about Zeus|no such entity has even been imagined). The latter is indistinguishable from zero—to have a myth about an entity implies that that entity has been imagined. So we can conclude from the existence of myths that Zeus has been imagined. I’m fine with that.
I see your problem here, you’re restricting attention to things that either exist or have had myths told about them. Thus it’s not surprising that you find that they are negatively correlated. If you condition on at least one of A or B being true, then A and B will always negatively correlate.
(BTW, this effect is known as Berkson’s paradox.)
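A quick simulation of the selection effect being described, with arbitrary 30% base rates and traits that are independent by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
exists = rng.random(n) < 0.3      # trait A
has_myth = rng.random(n) < 0.3    # trait B, independent of A by construction

# In the full population the traits are uncorrelated.
print(np.corrcoef(exists, has_myth)[0, 1])                # ~0

# Restrict attention to things with at least one of the traits.
keep = exists | has_myth
print(np.corrcoef(exists[keep], has_myth[keep])[0, 1])    # clearly negative: Berkson's paradox
```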
Thanks, I knew it had a Wikipedia entry and spent nearly 10 minutes looking for it before giving up.
What definition of “myth” are you using that doesn’t turn the above into a circular argument?
The original context was a slogan about myths of Zeus, but there are myths about real people. Joan of Arc, for example. So this is not true by definition, but an empirical fact.
I had no particular definition in mind, any more than I do of “Zeus” or any of the other words I have just used, but if you want one, this from Google seems to describe what we are all talking about here:
a traditional story, especially one concerning the early history of a people or explaining a natural or social phenomenon, and typically involving supernatural beings or events
Great heroes with historical existence accrete myths of supernatural events around them, while natural forces get explained by supernatural beings.
Heracles might have been a better example than Zeus. I don’t know if ancient Greek scholarship has anything to say on the matter, but it seems quite possible that the myths of Heracles could originate from a historical figure. Likewise Romulus, Jason, and all the other mortals of Graeco-Roman mythology. These have some reasonable chance of existing. Zeus does not. But by that very fact, the claim that “myths about Heracles are evidence for Heracles’ existence” is not as surprising as the one about Zeus, and so does not function as a shibboleth for members of the Cult of Bayes to identify one another.
Notice that in the above argument you’re implicitly conditioning on gods not existing. We’re trying to determine how the existence of myths about Zeus affects our estimate that Zeus exists. You’re basically saying “I assign probability 0 to Zeus existing, so the myths don’t alter it”.
I’m decently calibrated on the credence game and have made plenty of PredictionBook predictions. The idea of Bayesianism is that it’s good to boil down your beliefs to probability numbers.
If you think my argument is wrong, provide your own numbers for P(Zeus exists | Myths exist) and P(Zeus exists | Myths don’t exist).
There’s really no point discussing Zeus further if you aren’t willing to put numbers on your own beliefs. Apart from that, I linked to a discussion about Bayesianism, and you might want to read that discussion if you want a deeper understanding of the claim.
You cannot use the credence game to validate your estimation of probabilities of one-off situations down at the 10^-18 level. You will never see Zeus or any similar entity.
I am familiar with the concept. The idea is also that it’s no good pulling numbers out of thin air. Bayesian reasoning is about (1) doing certain calculations with probabilities and evidence—by which I mean numerical calculations with numbers that are not made up—and (2) where numerical calculation is not possible, using the ideas as a heuristic background and toolbox. Assigning 10^-bignum to Zeus existing confuses the two.
Look! My office walls are white! I must increase my estimated probability of crows being bright pink from 10^-18 to 10^-15! No, I don’t think I shall.
Earlier you wrote:
The central reason to believe that Zeus doesn’t exist is a weak prior.
The central reason to believe that Zeus doesn’t exist is the general arguments against the existence of gods and similar entities. We don’t see them acting in the world. We know what thunder and lightning are and have no reason to attribute them to Zeus. Our disbelief arose after we already knew about the myths, so the thought experiment is ill-posed. “The fact that there are myths about Zeus is evidence that Zeus exists” is a pretty slogan but does not actually make any sense. Sense nowadays, that is. Of course the ancient Greeks were brought up on such tales and I assume believed in their pantheon as much as the believers of any other religion do in theirs. But the thought experiment is being posed today, addressed to people today, and you claim to have updated—from what prior state?—from 10^-18 to 10^-15.
There’s really no point discussing Zeus, period.
The point of having the discussion about Zeus is “Politics is the Mind-Killer”. The insignificance of Zeus’s existence is a feature, not a bug.
If I made an argument that the average person’s estimate of the chance that a single unprotected act of sex with a stranger infects them with AIDS is off by two orders of magnitude, then that topic is going to mindkill. The same is true for other interesting claims.
I agree with this comment, but I want to point out that there may be a problem with equating the natural language concept “strength of evidence” with the likelihood ratio.
You can compare two probabilities on either an additive or multiplicative scale. When applying a likelihood ratio of 1000, your prior changes by a multiplicative factor of 1000 (this actually applies to odds rather than probabilities, but for low probability events, the two approximate each other). However, on an additive scale, a change from 10^{-18} to 10^{-15} is really just a change of less than 10^{-15}, which is negligible.
The multiplicative scale is great for several reasons: The likelihood ratio is suggested by Bayes’ theorem, it is easy to reason with, it does not depend on the priors, several likelihood ratios can easily be applied sequentially, and it is suitable for comparing the strength of different pieces of evidence for the same hypothesis.
The additive scale does not have those nice properties, but it may still correspond more closely to the natural language concept of “strength of evidence”.
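A minimal sketch of the two scales side by side, using the thread’s illustrative numbers (a 10^{-18} prior and a likelihood ratio of 1000):

```python
def update(prior, likelihood_ratio):
    """Odds-form Bayes update: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 1e-18
posterior = update(prior, 1000)

print(posterior / prior)    # ~1000: three orders of magnitude on the multiplicative scale
print(posterior - prior)    # ~1e-15: still negligible on the additive scale
```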
I have not said that it’s strong evidence. I said it’s evidence.
Yes, that is probably clear to most of us here. But, in reality, I and most likely also you discount probabilities that are very small, instead of calculating them out and changing our actions (we’ll profess ‘this is very unlikely’ instead of ‘this is not true’, but what actually happens is the same thing). There are a huge number of probability-10^{-18} deities out there; we just shrug and assume they don’t exist unless strong enough (or ‘good’ enough, I still don’t see the difference there) evidence comes up to alter that probability enough that it is in the realm of probabilities worth actually spending time and effort thinking about.
This hypothetical skeptic, if pressed, would most likely concede that sure, it is /possible/ that Zeus exists. He’d even probably concede that it is more likely that Zeus exists than that a completely random other god with no myths about them exists. But he’d say that is fruitless nitpicking, because both of them are overwhelmingly unlikely to exist and the fact that they still might exist does not change our actions in any way. If you wish to argue this point, then that is fine, but if we agree here then there’s no argument, just a conflict of language.
I’m trying to say that where you would say “Probability for X is very low”, most people who have not learned the terminology here would normally say “X is false”, even if they would concede that “X is possible but very unlikely” if pressed on it.
Given that someone like Richard Kennaway, who’s smart and exposed to LW thinking (>10000 karma), doesn’t immediately find the point I’m making obvious, you are very optimistic.
People usually don’t change central beliefs about ontology in an hour after reading a convincing post on a forum. An hour might be enough to change the language you use, but it’s not enough to give you a new way to relate to reality.
The probability that an asteroid destroys humanity in the next decade is relatively small. On the other hand it’s still useful for our society to invest more resources into telescopes to have all near-earth objects covered. The same goes for Yellowstone destroying our civilisation.
Our society is quite poor at dealing with low probability, high impact events. When it comes to things like Yellowstone, the instinctual response of some people is to say: “Extraordinary claims require extraordinary evidence.”
That kind of thinking is very dangerous given that human technology gets more and more powerful as time goes on.
I would say the probabilities of the Yellowstone and meteor impact scenarios are both vastly higher than something like the existence of a specific deity. They’re in the realm of possibilities that are worth thinking about. But there are tons of other possible civilization-ending disasters that we don’t, and shouldn’t, consider, because they have much less evidence for them and thus are so improbable that they are not worth considering. I do not believe we as humans can function without discounting very small probabilities.
But yeah, I’m generally rather optimistic about things. Reading LW has helped me, at that—before, I did not know why various things seemed to be so wrong; now I have an idea, and I know there are people out there who also recognize these things and can work to fix them.
As for the note about changing their central beliefs, I agree on that. What I meant to say was that the central beliefs of this hypothetical skeptic are not actually different from yours in this particular regard, he just uses different terminology. That is, his thinking goes ‘This has little evidence for it and is a very strong claim that contradicts a lot of the evidence we have’ → ‘This is very unlikely to be true’ → ‘This is not true’ and what happens in his brain is he figures it’s untrue and does not consider it any further. I would assume that your thinking goes something along the lines of ‘This has little evidence for it and is a very strong claim that contradicts a lot of the evidence we have’ → ‘This is very unlikely to be true’, and then you skip that last step, but what still happens in your brain is that you figure it is probably untrue and don’t consider it any further.
And both of you are most likely willing to reconsider should additional evidence present itself.
Careful there. Our intuition of what’s in the “realm of possibilities that are worth thinking about” doesn’t correspond to any particular probability; rather, it is based on whether the thing is possible according to our current model of the world, and doesn’t take into account how likely that model is to be wrong.
If I understand you correctly, then I agree. However, to me it seems clear that human beings discount probabilities that seem to them to be very small, and it also seems to me that we must do that, because calculating them out and having them weigh our actions by tiny amounts is impossible.
The question of where we should try to set the cut-off point is a more difficult one. It is usually too high, I think. But if, after actual consideration, it seems that something is actually extremely unlikely (as in, somewhere along the lines of 10^{-18} or whatever), then we treat it as if it is outright false, regardless of whether we say it is false or say that it is simply very unlikely.
And to me, this does not seem to be a problem so long as, when new evidence comes up, we still update, and then start considering the possibilities that now seem sufficiently probable.
Of course, there is a danger in that it is difficult for a successive series of small new pieces of evidence pointing towards a certain, previously very unlikely conclusion to overcome our resistance to considering very unlikely conclusions. This is precisely because I don’t believe we can actually use numbers to update all the possibilities, which are basically infinite in number. It is hard for me to imagine a slow, successive series of tiny nuggets of evidence that would slowly convince me that Zeus actually exists. I could read several thousand different myths about Zeus, and it still wouldn’t convince me. Something large enough for a single major push to the probability to force me to consider it more thoroughly, to privilege that hypothesis in the hypothesis-space, seems to be the much more likely way—say, Zeus speaking to me and showing off some of his powers. This is admittedly a weakness, but at least it is an admitted weakness; I haven’t found a way to circumvent it yet, but I can at least try to mitigate it by consciously paying more attention than I intuitively would to small but not infinitesimal probabilities.
Anyway, back to the earlier point: What I’m saying is that whether you say “X is untrue” or “X is extremely unlikely”, when considering the evidence you have for and against X, it is very possible that what happens in your brain when thinking about X is the same thing. The hypothetical skeptic who does not know to use the terminology of probabilities and likelihoods will simply call things he finds extremely unlikely ‘untrue’. And then, when a person who is unused to this sort of terminology hears the words ‘X is very unlikely’ he considers that to mean ‘X is not unlikely enough to be considered untrue, but it is still quite unlikely, which means X is quite possible, even if it is not the likeliest of possibilities’. And here a misunderstanding happens, because I meant to say that X is so unlikely that it is not worth considering, but he takes it as me saying X is unlikely, but not unlikely enough not to be worth considering.
Of course, there are also people who actually believe in something being true or untrue, meaning their probability estimate could not possibly be altered by any evidence. But in the case of most beliefs, and most people, I think that when they say ‘true’ or ‘false’, they mean ‘extremely likely’ or ‘extremely unlikely’.
Disagree. Most people use “unlikely” for something that fits their model but is unlikely, e.g., winning the lottery, having black come up ten times in a row in a game of roulette, two bullets colliding in mid air. “Untrue” is used for something that one’s model says is impossible, e.g., Zeus or ghosts existing.
I am confused now. Did you properly read my post? What you say here is ‘I disagree, what you said is correct.’
To try and restate myself, most people use ‘unlikely’ like you said, but some, many of whom frequent this site, use it for ‘so unlikely it is as good as impossible’, and this difference can cause communication issues.
My point is that in common usage (in other words, from the inside) the distinction between “unlikely” and “impossible” doesn’t correspond to any probability. In fact there are “unlikely” events that have a lower probability than some “impossible” events.
Assuming you mean that things you believe are merely ‘unlikely’ can actually, more objectively, be less likely than things you believe are outright ‘impossible’, then I agree.
What I mean is that the conjunction of possible events will be perceived as unlikely, even if enough events are conjoined together to put the probability below what the threshold for “impossible” should be.
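For instance (made-up numbers, and assuming the events are independent): ten conjoined events that each feel merely “unlikely” at 10% multiply out to a probability most people would intuitively round to “impossible”:

```python
p_each = 0.1          # each conjunct on its own feels merely unlikely
print(p_each ** 10)   # 1e-10 for the conjunction of ten such independent events
```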
True. However, there is no such thing as ‘impossible’, or probability 0. And while in common language people do use ‘impossible’ for what is merely ‘very improbable’, there’s no accepted, specific threshold there. Your earlier point about people seeing a fake distinction between things that seem possible but unlikely in their model and things that seem impossible in their model contributes to that. I prefer to use ‘very improbable’ for things that are very improbable, and ‘unlikely’ for things that are merely unlikely, but it is important to keep in mind that most people do not use the same words I do and to communicate accurately I need to remember that.
Okay, I just typed that and then I went back and looked and it seems that we’ve talked a circle, which is a good indication that there is no disagreement in this conversation. I think that I’ll leave it here, unless you believe otherwise.
That is empirically false.
Maybe you meant “To be a proper Bayesian, one should have probabilities for one’s beliefs”?
To the extent that people don’t think in terms of probabilities, they aren’t Bayesians. I think that’s part of the definition of Bayesianism.
There are practical issues with people not living up to that ideal, but that’s another topic.