Used Bayes in the wild.
It was really a textbook case. I had a short story under review at Asimov's SF magazine, and they'd held onto it for over two months. Per Duotrope (a writer's market site), Asimov's takes, on average, twice as long to review stories that are ultimately accepted as it does stories that are ultimately rejected (the likely explanation is that obviously bad stories can be rejected right away, while stories that eventually get bought are handed around to multiple readers). So I casually assumed, without really thinking it through, that this meant I was much more likely to get my story accepted.
But wait! Duotrope has various response statistics based on reports from users. With a few assumptions, I could use their numbers for a simple Bayes calculation and figure out the real chances of an acceptance given a two-month wait.
I used as my priors the numbers on Duotrope for the acceptance rate (P(Acceptance)) and the mean and standard deviation of reply time (used for P(Two Month Wait)), and took a guess at P(TMW|A) based on the mean reply time for acceptances. The result: yes, it was more likely that my piece was accepted, but because P(A) was so low to begin with (less than 0.5%), P(A|TMW) was still really low (less than 1%).
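Here's a minimal sketch of that calculation in Python, assuming reply times are roughly normally distributed. The acceptance rate and reply-time figures below are illustrative stand-ins, not the actual Duotrope statistics:

```python
# Sketch of the Bayes calculation described above, with made-up numbers.
from statistics import NormalDist

WAIT_DAYS = 60                 # the "two month wait"

p_accept = 0.004               # P(A): overall acceptance rate (well under 0.5%)

# P(TMW): chance any submission is still unanswered at 60 days,
# estimated from the overall mean / std dev of reply times.
p_wait = 1 - NormalDist(mu=45, sigma=30).cdf(WAIT_DAYS)

# P(TMW | A): chance of a 60-day wait given the story is eventually accepted,
# guessed from the (longer) mean reply time for acceptances.
p_wait_given_accept = 1 - NormalDist(mu=60, sigma=25).cdf(WAIT_DAYS)

# Bayes' theorem: P(A | TMW) = P(TMW | A) * P(A) / P(TMW)
p_accept_given_wait = p_wait_given_accept * p_accept / p_wait
print(f"P(acceptance | two-month wait) = {p_accept_given_wait:.2%}")
```

With these stand-in numbers the posterior comes out around 0.65%: higher than the 0.4% prior, but still well under 1%, which is the same pattern described above.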
I calibrated my expectations accordingly. Which was just as well, since I got their rejection the next day. Rejections are always disappointing, but I’d have been far more disappointed if my expectations had still been out of joint.
Lovely! I intend to add this as a Bayesian sample problem—enough rationality diaries, and we’ll be able to make Bayes booklets exclusively out of real-life cases encountered by LessWrongers!
Hey—were you submitting the story for the first time? I.e., not that this was your first story, but Asimov’s was the first place you sent it? If so, odds probably need adjustment because bad stories get submitted to more magazines than good stories (a rejected story is resubmitted, a good story is accepted more quickly).
This is really cool.
However, if you'd read omens from chicken entrails and it made you feel better, that would have been equally cool :)
(rationally evaluating your expectations may be a necessary emotion-management technique if you're sufficiently steel-minded that prayers and omens don't work for you)
I, for one, applaud and admire wsean for basing his emotions on Bayes’ theorem instead of bird organs.
I don’t believe you; the gizzard this morning told me to be wary of things online.