jacob_cannell above seems to think it is very important for physicists to know about Solomonoff induction.
Nah—I was just using that as an example of things physicists (regardless of IQ) don’t automatically know.
Most physicists were trained to think in terms of Popperian epistemology, which is strictly inferior to (dominated by) Bayesian epistemology (if you don’t believe that, it’s not worth my time to debate). In at least some problem domains, the difference in predictive capability between the two methodologies is becoming significant.
Physicists don’t automatically update their epistemologies; it isn’t something they are used to having to update.
Heh, ok. Thanks for your time!
Ok, so I lied, I’ll bite.
I equate “Bayesian epistemology” with a better approximation of universal inference. It’s easy to generate example environments where Bayesian agents dominate Popperian agents, while the converse is never true. Popperian agents fail to generalize well from small noisy datasets: when you have very limited evidence, Popperian reliance on hard logical falsifiability just fails.
This shouldn’t even really be up for debate—do you actually believe the opposite position, or are you just trolling?
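One such example environment can be sketched in a few lines. Everything specific here is my own construction, not anything from the thread: a coin with a hypothetical bias of 0.7, a Bayesian agent that keeps a Beta(1,1) posterior over the bias and predicts with the posterior mean (Laplace's rule), and a deliberately caricatured falsificationist that holds the point hypothesis "the coin is fair" and never abandons it, since no finite run of noisy flips *logically* falsifies p = 0.5.

```python
import math
import random

random.seed(0)
TRUE_BIAS = 0.7   # hypothetical bias, chosen for illustration
N_FLIPS = 500

def log_loss(p, outcome):
    """Negative log-likelihood the agent assigned to the observed flip."""
    return -math.log(p if outcome else 1.0 - p)

heads = tails = 0
bayes_loss = popper_loss = 0.0
for _ in range(N_FLIPS):
    bayes_p = (heads + 1) / (heads + tails + 2)  # Beta posterior mean
    popper_p = 0.5                               # "fair coin", never falsified
    flip = random.random() < TRUE_BIAS
    bayes_loss += log_loss(bayes_p, flip)
    popper_loss += log_loss(popper_p, flip)
    heads, tails = heads + flip, tails + (not flip)

print(f"Bayesian cumulative log-loss:  {bayes_loss:.1f}")
print(f"Popperian cumulative log-loss: {popper_loss:.1f}")
```

The constant-0.5 predictor pays exactly ln 2 per flip no matter what it sees, while the Bayesian's posterior mean drifts toward the true bias and its cumulative log-loss ends up substantially lower; the gap widens with more flips.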