Well, you start with a set containing Google, McDonald's, and every other organization one could be joining, inclusive of all doomsday cults, and you end up with a much smaller set of organizations, still inclusive of all doomsday cults. That narrowing ought to boost the probability that they're joining an actual doomsday cult, even if said probability arguably remains below 0.5 or 0.9 or whatever threshold of credence you require.
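A minimal formalization of the narrowing argument above, under the simplifying assumption (mine, not stated in the thread) that every actual doomsday cult also looks like one. Writing $C$ for "is a doomsday cult" and $L$ for "looks enough like a doomsday cult", Bayes' Rule gives

\[
P(C \mid L) \;=\; \frac{P(L \mid C)\,P(C)}{P(L)} \;=\; \frac{P(C)}{P(L)} \;\ge\; P(C),
\]

with strict inequality whenever $P(L) < 1$, i.e. whenever at least some organizations (Google, McDonald's) don't look like doomsday cults. So conditioning on "looks like one" can only raise the estimate, though it says nothing about how far it rises.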
Yes, I understand the statistics you’re trying to point to. I just don’t think it’s as simple as narrowing down the reference class. I expect material differences in behavior between the cases “joining a doomsday cult or something that could reasonably be mistaken for one” and “joining something that kinda looks enough like a doomsday cult that jokes about it are funny, but which isn’t”, and those differences mean that this can’t be solved by a single application of Bayes’ Rule.
Maybe your probability estimate ends up higher by epsilon or so. That depends on all sorts of fuzzy readings of context and estimates of the speaker's character, far too fuzzy for me to do actual math on. But I feel fairly confident in saying that it shouldn't adjust that estimate enough to justify taking any sort of action, which is what actually matters here.
Well, a doomsday cult is not only a doomsday cult; it also kinda looks enough like a doomsday cult. Of the people joining something that kinda looks enough like a doomsday cult, some are joining an actual doomsday cult. Those people: do they, in your model, know they're joining a doomsday cult, so that they can avoid joking about it?