This jumped out instantly when I looked at the charts: your prior and evidence can’t possibly both be correct at the same time. Everywhere the prior has non-negligible density, the likelihood is negligible; everywhere the likelihood is substantial, the prior density is negligible. If you try multiplying the two together to get a compromise probability estimate instead of saying “I notice that I am confused”, I would hold this up as a pretty strong example of the real sin that I think this post should be arguing against, namely trying to use math blindly without sanity-checking its meaning.
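To make that concrete, here’s a minimal numerical sketch (my own toy setup, not the original charts): a prior and a likelihood whose high-density regions don’t overlap. Multiplying and renormalizing still “works” mechanically, but the vanishing normalizing constant is the math itself flagging the confusion.

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-10, 10, 10_001)
prior = norm.pdf(x, loc=-5, scale=0.5)        # prior mass concentrated near -5
likelihood = norm.pdf(x, loc=+5, scale=0.5)   # likelihood concentrated near +5

unnormalized = prior * likelihood
p_data = np.trapz(unnormalized, x)            # marginal likelihood P(data)
print(f"P(data) = {p_data:.2e}")              # ~2e-44: essentially zero

# Renormalizing still produces a curve...
posterior = unnormalized / p_data
# ...but a marginal likelihood this small says the prior and the evidence
# can't both be right: the honest move is to notice the confusion, not
# to report the compromise posterior.
```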
(I deleted my response to this following orthonormal’s comments; see this one for my revised thoughts.)
Of course, I believe this because I think the creation of smarter-than-human intelligence has a (very) large probability of an (extremely) large impact, and that most of the probability mass there is concentrated into AI, and I don’t think there’s nothing that can be done about that, either.
Why do you think that there’s something that can be done about it?
I disagree. It can be rational to shift subjective probabilities by many orders of magnitude in response to very little new information.
Your example looks like a nearly uniform prior over a very large space: nothing’s wrong when we quickly update to believe that yesterday’s lottery numbers were 04-15-21-31-36.
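A quick sketch with assumed numbers (a hypothetical pick-5-of-46 lottery, and a made-up 1-in-a-million chance that the published result is misreported) shows how large, and how unobjectionable, that update is:

```python
from math import comb, log10

# Hypothetical 5-of-46 lottery: ~1.37 million equally likely draws.
n_outcomes = comb(46, 5)
prior = 1 / n_outcomes          # uniform prior on the draw 04-15-21-31-36

# After reading the published numbers, allowing a (made-up) 1e-6 chance
# that the source misreported them:
posterior = 1 - 1e-6

print(f"prior     = {prior:.2e}")    # ~7.3e-7
print(f"posterior = {posterior:.6f}")
print(f"shift     = {log10(posterior / prior):.1f} orders of magnitude")  # ~6.1
```

Roughly six orders of magnitude, from one glance at a newspaper, and nothing has gone wrong.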
But the point where you need to halt, melt, and catch fire is if your prior assigns the vast majority of the probability mass to a small compact region, and then the evidence comes along and lands outside that region. That’s the equivalent of starting out 99.99% confident that you know tomorrow’s lottery numbers will begin with 01-02-03, and being proven wrong.
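Here’s the same arithmetic for the pathological case, again with made-up numbers: a prior that puts 99.99% of its mass on “the numbers begin 01-02-03”, and then a draw that lands outside that region.

```python
p_region = 0.9999        # prior: tomorrow's numbers begin with 01-02-03
p_outside = 1 - p_region

# The draw comes up outside the region. Relative likelihoods of that
# observation under each hypothesis (illustrative numbers):
lik_region, lik_outside = 1e-12, 1.0

numerator = p_region * lik_region
posterior_region = numerator / (numerator + p_outside * lik_outside)
print(f"prior on region     = {p_region}")
print(f"posterior on region = {posterior_region:.1e}")   # ~1e-8: near-total reversal

# The update itself is mechanical. The alarm is that your model assigned
# ~1e-4 to what actually happened: that's the "halt, melt, and catch fire"
# signal, not something to paper over by multiplying through.
```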
Yes, you’re right, I wasn’t thinking clearly; thanks for catching me. I think there’s something to what I was trying to say, but I need to think it through more carefully. I find the explanation you give in your other comment convincing (that the point of the graphs is to clearly illustrate the principle).
Upvoted.