Bayes is, to my knowledge, the best-proven framework for handling learning from evidence. All the other tools can be understood by how they derive from or contradict Bayes, much as engines can be understood in terms of thermodynamics.
Bayesian updating is a good thing to do when there is no conclusive evidence to discriminate between models and you must decide what to do next. It should be taught to scientists, engineers, economists, lawyers and programmers as the best tool available when deciding under uncertainty. I don’t see how it can be pushed any farther than that, into the realm of determining what is.
There are plenty of Bayesian examples this crowd can benefit from, such as “My code is misbehaving, what’s the best way to find the bug?”, but, unfortunately, EY does not seem to want to settle for a small fry like that.
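For concreteness, here is a minimal sketch of that debugging example in Python. The module names, priors, and likelihoods are all made up for illustration, not taken from any real project:

```python
# Toy Bayesian bug hunt: three hypothetical suspect modules, with
# assumed priors and assumed likelihoods of the observed failing test
# under "the bug lives in this module".
priors = {"parser": 0.5, "cache": 0.3, "io": 0.2}
likelihoods = {"parser": 0.9, "cache": 0.4, "io": 0.1}  # P(failure | bug here)

# Bayes: posterior is proportional to prior * likelihood, then normalize.
unnormalized = {m: priors[m] * likelihoods[m] for m in priors}
total = sum(unnormalized.values())
posterior = {m: p / total for m, p in unnormalized.items()}

# Investigate the most probable culprit first.
for module, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{module}: {p:.2f}")
```

With these made-up numbers the failing test shifts most of the probability mass onto the parser, which is where a Bayesian debugger would look first.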
Bayesian updating is a good thing to do when there is no conclusive evidence to discriminate between models and you must decide what to do next.
Likewise with conclusive evidence. Bayes is always right.
I don’t see how it can be pushed any farther than that, into the realm of determining what is.
I think I’ve confused you, sorry. I don’t mean to claim that Bayes implies or is able to support realism any better or worse than anything else. Bayes allocates anticipation between hypotheses. The what-is thing is orthogonal and (I’m coming to agree with you) probably useless.
1: It's overkill in this case.

2: If you are doing science and not, say, criminal law, at some point you have to get that conclusive evidence (or at least as conclusive as it gets, like the recent Higgs confirmation). Bayes is still probably, on average, the fastest way to get there, though.
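A toy sketch of how piling up evidence settles the question: with two assumed models of a coin and data arriving at one model's rate, each observation multiplies the odds by a likelihood ratio, and the posterior on the better model is driven toward 1. The models and the deterministic stand-in data here are my own illustration:

```python
# Two hypothetical models of a coin: H1 says P(heads)=0.6, H2 says 0.5.
# Suppose the data actually come in at H1's rate: 600 heads in 1000 flips
# (a deterministic stand-in for sampled data, to keep the sketch exact).
p1, p2 = 0.6, 0.5
post = 0.5  # even prior between the two models

flips = [True] * 600 + [False] * 400
for heads in flips:
    l1 = p1 if heads else 1 - p1  # likelihood of this flip under H1
    l2 = p2 if heads else 1 - p2  # likelihood under H2
    post = post * l1 / (post * l1 + (1 - post) * l2)  # Bayes update

print(f"posterior on H1 after 1000 flips: {post:.6f}")
```

After a thousand flips the posterior on H1 is within a rounding error of 1, which is about as "conclusive" as evidence ever gets.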
Bayes is always right.
Feel free to unpack what you mean by right. Even your best Bayesian guess can turn out to be wrong.
So? It’s correct. Maybe you use some quick approximation, but it’s not like doing the right thing is inherently more costly.
If you are doing science and not, say, criminal law, at some point you have to get that conclusive evidence (or at least as conclusive as it gets, like the recent Higgs confirmation). Bayes is still probably, on average, the fastest way to get there, though.
This get-better-evidence thing would also be recommended by Bayes + decision theory (and if it weren't, it would have to defer to Bayes + decision theory). I don't see the relevance.
Feel free to unpack what you mean by right.
The right probability distribution is the one that maximizes the expected utility of an expected utility maximizer using that probability distribution. That’s missing a bunch of hairy stuff involving where to get the outer probability distribution, but I hope you get the point.
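One concrete (toy) reading of this claim: in a repeated even-odds betting game with Kelly-style proportional betting, expected log-growth of wealth is maximized exactly when the bettor's distribution matches the true one. The game and all the numbers below are my illustration, not from the comment:

```python
from math import log

# Bet a fraction q of wealth on heads and 1-q on tails, at even odds,
# so wealth multiplies by 2q on heads and 2(1-q) on tails.
# True P(heads) = 0.7 (assumed).  Expected log-growth per bet:
#   G(q) = p*log(2q) + (1-p)*log(2(1-q))
p = 0.7

def growth(q):
    return p * log(2 * q) + (1 - p) * log(2 * (1 - q))

rates = {q: growth(q) for q in (0.5, 0.6, 0.7, 0.8, 0.9)}
best = max(rates, key=rates.get)
for q, g in rates.items():
    print(f"q={q}: expected log-growth {g:+.4f}")
```

The maximum lands at q = 0.7, the true distribution: the "right" distribution is the one the expected-utility maximizer does best with.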
You can often get lucky by not using Bayesian updating. After all, that’s how science has been done for ages. What matters in the end is the superior explanatory and predictive power of the model, not how likely, simple or cute it is.
The right probability distribution is the one that maximizes the expected utility of an expected utility maximizer using that probability distribution.
So, on average, you make better decisions. I agree with that much. As I said, a nice useful tool. You can still lose even if you use it (“but I was doing everything right”—a Bayesian’s famous last words), while someone who never heard of Bayes can win (and does, every 6⁄49 draw).
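For the record, the arithmetic behind the 6⁄49 quip: a jackpot means matching all six numbers drawn from 49, a 1-in-C(49, 6) event, i.e. roughly 1 in 14 million:

```python
from math import comb

# Number of distinct 6-of-49 tickets, and the jackpot probability.
combinations = comb(49, 6)
print(f"possible tickets: {combinations}")
print(f"P(jackpot) = 1/{combinations} ≈ {1 / combinations:.2e}")
```

So someone does win every draw, but each individual winner got lucky on a ~7×10⁻⁸ event, which is exactly the kind of outcome no decision procedure should be judged by.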
You can often get lucky by not using Bayesian updating. After all, that’s how science has been done for ages.
It’s “gotten lucky” exactly to the extent that it follows Bayes.
What matters in the end is the superior explanatory and predictive power of the model, not how simple or cute it is.
Yes, cuteness is overridden by evidence, but there is a definite trend in physics and elsewhere that the best models have often been quite cute in a certain sense, so we can use that cuteness as a proxy for “probably right”.
As I said, a nice useful tool. You can still lose even if you use it
Yes, a useful tool, but also the provably optimal and fully general one. You can still lose, but any other system will cause you to lose even more.
I think we are in agreement for the most part. I’m out.
EDIT: also, you should come to more meetups.
Thursday is a bad day for me...