Good for you for actually trying to apply the Bayesian gospel to realistic problems, instead of taking it on faith!
Not sure this is a useful example, though: the posterior probability of a successful launch depends on a score of other tidbits of information obtained from the launch, and those are bound to overwhelm the bare fact that the launch succeeded. The exception is the management, who quickly reduce the announced failure rate to one in 100,000 or some other acceptable number after only a dozen successes, regardless of the warning signs.
In fact, the question as stated is meaningless: the events S and F are mutually exclusive, and the conditional probability implicitly embedded in the Bayes expression needs compatible events. One would instead define a sample space in which both S and F live (e.g. the odds of an O-ring failure and its effect on launch success).
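To make that concrete, here is a toy calculation (in Python, with numbers I'm inventing on the spot, not anything NASA ever used) of the kind of sample space I have in mind: an O-ring seal is either compromised or not, and the launch outcome is conditioned on that.

```python
# Toy sketch of a usable sample space; every number here is made up for illustration.
p_oring_bad   = 0.05    # prior: chance the O-ring seal is compromised
p_s_given_bad = 0.80    # a launch can still succeed with a bad seal
p_s_given_ok  = 0.999   # launch almost surely succeeds otherwise

# Total probability of a successful launch
p_s = p_s_given_bad * p_oring_bad + p_s_given_ok * (1 - p_oring_bad)

# Bayes: how much does one success tell us about the O-ring?
p_bad_given_s = p_s_given_bad * p_oring_bad / p_s
print(f"P(success)              = {p_s:.4f}")
print(f"P(bad O-ring | success) = {p_bad_given_s:.4f}  (prior was {p_oring_bad})")
```

With these made-up numbers the belief about the O-ring barely moves (roughly 0.05 to 0.04), which is really the whole point: a single success is weak evidence about the failure mode you actually care about.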
The confidence gained from a single successful launch is no better than the confidence of seeing another head after two heads in two successive tosses of a known fair coin. Until and unless you see a really unlikely event, you should not update your priors on the outcomes alone, but only on your underlying models of the event in question and whatever useful data you can glean from the coin's trajectory.
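For the coin-flavoured version: if you insist on updating anyway, the standard conjugate Beta-Binomial arithmetic shows how little one more success buys you (the prior parameters below are purely illustrative):

```python
# Rough illustration of why one success moves a reasonable prior very little.
# Beta(a, b) prior on the per-launch success probability; numbers are invented.
a, b = 98.0, 2.0                      # prior belief: roughly 98% reliable
prior_mean = a / (a + b)

# Observe one successful launch: the conjugate update is just a -> a + 1
a_post = a + 1
post_mean = a_post / (a_post + b)

print(f"prior mean success prob:     {prior_mean:.4f}")   # 0.9800
print(f"posterior after one success: {post_mean:.4f}")    # ~0.9802
```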
That said, you can definitely apply Bayes to discriminate between competing models of, say, insulation foam debris striking the shuttle, based on the empirical data from a given launch; that will, in turn, affect the estimate of success for the next launch.
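A very rough sketch of what that model discrimination could look like (the two models, their strike rates, and the "data" below are all invented for illustration):

```python
from math import exp, factorial

# Two made-up models of foam-debris strikes per launch:
# model A says strikes are rare (mean 1 per launch), model B says common (mean 4).
def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

lam_A, lam_B = 1.0, 4.0
prior_A, prior_B = 0.5, 0.5          # no initial preference between the models

observed_strikes = [3, 5, 4]         # invented counts from three imaginary launches

like_A, like_B = 1.0, 1.0
for k in observed_strikes:
    like_A *= poisson_pmf(k, lam_A)
    like_B *= poisson_pmf(k, lam_B)

post_A = like_A * prior_A / (like_A * prior_A + like_B * prior_B)
print(f"P(model A | data) = {post_A:.4f}")
print(f"P(model B | data) = {1 - post_A:.4f}")
```

The posterior over models is what then feeds into the predictive probability of a damaging strike on the next launch, which is where the honest updating happens.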
Hmm, that ended up being wordier than I expected, but hopefully I haven’t told many lies.