Sort of late to the party, but I’d like to note, for any aspiring cognitive science student browsing the archives, that I doubt this comment is accurate. I’m studying cognitive science and, in practice, because of the flexibility we have and because cogsci has maths and CS as constituent disciplines, this largely means taking maths, AI, or computer science courses (largely the same courses that people from these fields take). These disciplines make up >60% of my studies. Of course, in maths and CS I’m behind people who focus on those subjects exclusively, but I don’t see a good reason to think that we lack the precision and rigour needed to contribute to AI safety. Prove me wrong :)
amarai
Karma: 13
[Question] How much does the risk of dying from nuclear war differ within and between countries?
[Question] In forecasting, how do accuracy, calibration and reliability relate to each other?
[Question] Bayesian updating on expert opinions
Do you know how common a position this is among cosmologists?
Thanks! The StackExchange discussion is actually very good!
[Question] How could the universe be infinitely large?
Thanks a lot for the post! I’d be curious to hear if people here significantly disagree with your conclusion that “One can act as if serious Long Covid will occur in ~0.2% of boosted Covid cases.”? If so, on what grounds?
Hey, thanks for the answer, and sorry for my very late response. In particular, thanks for the link to the OpenPhil report; very interesting! To your question: I’ve now changed my mind again and tentatively think that you are right. Here’s how I think about it now, though I still feel unsure whether I’ve made a reasoning error somewhere:
There’s some distribution over your probabilistic judgements that shows how frequently you report a given probability for propositions that turned out to be true. It might show, e.g., that you report 90% probability in 10% of your judgements about true propositions. This can hold even if you are perfectly calibrated, as long as, for false propositions, you report 90% in (10/9)% of those judgements (assuming you judge true and false propositions equally often). Then it would still be the case that 90% of your 90% probability judgements turn out to be true, and hence you are perfectly calibrated at 90%.
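As a quick sanity check of that claim (using the illustrative 10% and (10/9)% figures above, and the same assumption that true and false propositions are judged equally often):

$$
P(\text{true} \mid \text{you report } 90\%)
= \frac{P(\text{report } 90\% \mid \text{true})}{P(\text{report } 90\% \mid \text{true}) + P(\text{report } 90\% \mid \text{false})}
= \frac{0.10}{0.10 + 0.10/9}
= 0.9
$$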
So, given these assumptions, what would the Bayes factor for your 90% judgement in “rain today” be?
P(you give rain 90%|rain) should be 10%, since I’m effectively sampling your judgement from the distribution over true propositions, where your 90% judgement occurs 10% of the time. For the same reason, P(you give rain 90%|no rain) = (10/9)%. Therefore, the Bayes factor is 10% / (10/9)% = 9.
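For what it’s worth, here’s a small Python sketch of the same calculation. The 10% and (10/9)% are the numbers above; the 30% prior on rain at the end is purely an illustrative value I made up, not something from the thread.

```python
# Bayes factor implied by the numbers above:
#   P(you give rain 90% | rain)    = 10%
#   P(you give rain 90% | no rain) = (10/9)%
from fractions import Fraction

p_report_given_rain = Fraction(1, 10)         # 10%
p_report_given_no_rain = Fraction(1, 10) / 9  # (10/9)%

bayes_factor = p_report_given_rain / p_report_given_no_rain
print(bayes_factor)  # 9

# Updating a hypothetical 30% prior on rain with this Bayes factor:
prior = Fraction(3, 10)
posterior_odds = (prior / (1 - prior)) * bayes_factor  # prior odds times Bayes factor
posterior = posterior_odds / (1 + posterior_odds)
print(float(posterior))  # ~0.794
```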
I suspect that my explanation is overly complicated; feel free to point out more elegant ones :)