What sort of evidence are you referring to; can you list a few examples?

All of the applications in which Bayesian statistics/ML methods work so well. All of the psychology/neuroscience research on human intelligence approximating Bayesianism. All the robotics/AI/control theory applications where Bayesian methods are used in practice.
All of the applications in which Bayesian statistics/ML methods work so well. All the robotics/AI/control theory applications where Bayesian methods are used in practice.
This does not really seem like much evidence to me, because for most of these cases non-Bayesian methods work much better. I confess I personally am in the “throw a massive neural network at it” camp of machine learning; and if something with so little theoretical validation works so well, it makes one question whether the successes you cite really tell us much about Bayesianism in general.
All of the psychology/neuroscience research on human intelligence approximating Bayesianism.
I’m less familiar with this literature. Surely human intelligence *as a whole* is not a very good approximation to Bayesianism (whatever that means). And it seems like most of the heuristics and biases literature is specifically about how we don’t update very rationally. But at a lower level, I defer to your claim that modules in our brain approximate Bayesianism.
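To be concrete about the kind of approximation that literature has in mind: the classic cue-combination experiments ask whether people weight noisy sensory cues by their reliability, which is what a Bayesian observer with Gaussian noise would do. A minimal sketch, with made-up numbers:

```python
# Hypothetical numbers: two noisy cues (say, visual and haptic) estimating
# the same quantity, each modeled as Gaussian with known noise variance.
visual_estimate, visual_var = 10.0, 1.0   # the sharper cue
haptic_estimate, haptic_var = 14.0, 4.0   # the noisier cue

# Bayes-optimal fusion of two Gaussian cues is a precision-weighted average.
w_visual = (1 / visual_var) / (1 / visual_var + 1 / haptic_var)
fused_estimate = w_visual * visual_estimate + (1 - w_visual) * haptic_estimate
fused_var = 1 / (1 / visual_var + 1 / haptic_var)

print(fused_estimate)  # 10.8 -- pulled toward the more reliable cue
print(fused_var)       # 0.8  -- more precise than either cue alone
```

The empirical claim in that literature is that subjects’ weightings roughly track these reliability-based weights, not that cognition as a whole is Bayesian.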
Then I guess the question is how to interpret this. It certainly feels like a point in favour of some interpretation of Bayesianism as a general framework. But insofar as you’re thinking of an interpretation that is supported by empirical evidence, it seems important for someone to formulate it in such a way that it could be falsified. I claim that the way Bayesianism has been presented around here (as an ideal of rationality) is not a falsifiable framework, and so at the very least we need someone to spell out exactly what claim they’re standing behind.
Problem is, “throw a massive neural network at it” fails completely for the vast majority of practical applications. We need astronomical amounts of data to make neural networks work. Try using them at a small company on a problem with a few thousand data points; it won’t work.
I see the moral of that story as: if you have enough data, any stupid algorithm will work. It’s when data is not superabundant that we need Bayesian methods, because nothing else reliably works. (BTW, this is something we could guess from Bayesian foundations: Cox’s theorem and information-theoretic foundations of Bayesian probability do not depend on having infinite data for any particular problem, whereas things like frequentist statistics or brute-force neural nets do.)
(Side note for people confused about how that squares with the comment at the top of this thread: the relevant limit there was not the limit of infinite data, but the limit of reasoning over all possible models.)
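To make the small-data point concrete: a conjugate Bayesian linear regression gives sensible estimates, with uncertainty attached, from a handful of points. A minimal sketch (the prior and noise precisions are assumed known here, and all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny dataset: the regime where "throw a big network at it" fails.
X = rng.normal(size=(8, 3))              # 8 points, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=8)

# Conjugate Bayesian linear regression: Gaussian prior on the weights,
# Gaussian observation noise, so the posterior is available in closed form.
alpha, beta = 1.0, 100.0                 # prior precision, noise precision
S_N = np.linalg.inv(alpha * np.eye(3) + beta * X.T @ X)  # posterior covariance
m_N = beta * S_N @ X.T @ y                               # posterior mean

print(m_N)                    # close to true_w, even with only 8 points
print(np.sqrt(np.diag(S_N)))  # and we get uncertainty estimates for free
```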
I claim that the way Bayesianism has been presented around here (as an ideal of rationality) is not a falsifiable framework, and so at the very least we need someone to spell out exactly what claim they’re standing behind.
Around here, rationality is about winning. To the extent that we consider Bayesianism an ideal of rationality, that claim can be falsified by outperforming Bayesianism, in settings where the behavior of the ideal can be calculated, or at least characterized well enough to show that something else does better.
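And that comparison can actually be run: fix a prediction task, score each predictor by the log probability it assigns to the outcomes, and check whether anything reliably beats the Bayesian one. A toy sketch, pitting a Beta-Bernoulli predictor against a data-ignoring rival (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(1)
flips = rng.random(1000) < 0.7           # Bernoulli(0.7) outcomes

def log_score(p, outcome):
    # log probability the predictor assigned to what actually happened
    return np.log(p) if outcome else np.log(1 - p)

bayes_total = rival_total = 0.0
heads = n = 0                            # sufficient statistics, Beta(1,1) prior
for outcome in flips:
    p_bayes = (heads + 1) / (n + 2)      # Beta-Bernoulli posterior predictive (Laplace's rule)
    p_rival = 0.5                        # the rival predictor just ignores the data
    bayes_total += log_score(p_bayes, outcome)
    rival_total += log_score(p_rival, outcome)
    heads += int(outcome)
    n += 1

# If some rival reliably achieved a higher total score than the Bayesian
# predictor on tasks like this, that would count against Bayesianism-as-ideal.
print(bayes_total, rival_total)
```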