Huh. Apparently I was underconfident, in that I was only 7 years off from the correct date, and for the calibration estimate I said I was 65% sure I was within +/- 15 years.
My logic to get my year estimate:
Tnyvyrb qvrq gur fnzr lrne Arjgba jnf obea, naq ur fgnegrq qbvat fhofgnagvny jbex nebhaq fvkgrra uhaqerq. Vg gura gbbx gur Vadhvfvgvba n juvyr gb qb nalguvat naq ur fcrag znal lrnef haqre ubhfr neerfg. Fb Tnyvyrb pbhyq abg unir qvrq zhpu orsber fvkgrra guvegl. Fb Arjgba unq gb unir obea nebhaq fvkgrra guvegl gb fvkgrra sbegl. Arjgba jebgr Cevapvcvn jura ur jnf nyernql fbzrjung byq. Fb +sbegl lrnef tvirf nebhaq fvkgrra rvtugl. V jnf nyfb cerggl fher gung Cevapvcvn jnf choyvfurq fbzrgvzr va gur frpbaq unys bs gur friragrrgu praghel, fb gung jnf n (zvyq) pbafvfgrapl purpx. Ubjrire, V rkcrpgrq zl qngr gb or zber yvxryl bire engure guna haqre naq va guvf ertneq V jnf jebat.
So, rot13 doesn’t do much to obscure numbers.
Good point. I’ve replaced the numbers with spelled-out versions, so the rot13 now does obscure them.
That doesn’t by itself mean you were underconfident; if you are well calibrated, your 65%-confidence predictions come true 65% of the time.
Yeah, but the fact that my estimate was pretty close to the correct date suggests that some underconfidence may have been at work. If someone had stated the exactly correct year, and had estimated only a 51% chance that they were in the correct zone, we’d probably look at them funny.
Maybe, but getting very close with low confidence is entirely possible with these estimation-calibration tasks: a uniformly chosen year between 1600 and 1800 could happen to be the exact year, but the calibrated confidence of such a guess landing within +/- 15 years is only about 15%.
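As a quick check of that 15% figure (a minimal sketch; the 1600-1800 range and the +/- 15-year window are taken from the discussion above, not an exact model of anyone's actual guess):

```python
# A uniform guess over 1600-1800: what fraction of the time does it
# land within +/-15 years of the true date (wherever the true date is,
# ignoring edge effects near 1600 and 1800)?
low, high = 1600, 1800
window = 15

years_in_range = high - low + 1    # 201 candidate years
years_in_window = 2 * window + 1   # 31 years fall within +/-15

confidence = years_in_window / years_in_range
print(f"{confidence:.1%}")  # ~15.4%
```

So even a guesser with no information beyond the two-century range should assign roughly 15% to the +/- 15-year window, and will occasionally hit the exact year anyway.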
That’s a good point. So a single data point like this doesn’t really say much useful for my own calibration.
Yup. You might already know about it, but PredictionBook seems to get touted around here as a good method to calibrate oneself (although I haven’t used it myself).
Yes, I’ve used it quite a bit. So far the main thing I’ve been convinced of from it is that my calibration is all over the place.