Hi! Hume’s constant conjunction stuff, I think, has nothing to do with no-free-lunch theorems in ML (please correct me if I’m missing something?), and everything to do with defining causation, an issue Hume worried about all his life (and ultimately solved, imo, via the counterfactual definition of causality that we all use today, by way of Neyman, Rubin, Pearl, etc.).
My read on the state of public academic philosophy is that there are many specific and potentially-but-not-obviously-related issues that come up under the general topic of “foundations of inference”. There are many angles of attack, and many researchers over the years. Many of them are no longer based out of official academic “philosophy departments”, and this is not necessarily a tragedy ;-)
The general issue is “why does ‘thinking’ seem to work at all, ever?” This can be expressed in terms of logic, or probabilistic reasoning, or sorting, or compression, or computability, or the decidability of theorems, or P vs NP, or oracles of various kinds, or the possibility of language acquisition, and/or the question of why (or why not) running basic plug-and-chug statistical procedures during data processing seems to (maybe) work in the “social sciences”.
Arguably, these all share a conceptual unity, and might eventually be formally unified by a single overarching theory that they are all specialized versions of.
From existing work we know that lossless compression algorithms have actual uses in real life, and it certainly seems as though mathematicians make real progress over time, up to and including Chaitin himself!
However, when people try to build up “first principles explanations” of how “good thinking” works at all, they often derive generalized impossibility results once the analysis is scoped over naive formulations of “all possible theories” or “all possible inputs”.
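(As one concrete instance of the pattern, here is my rough paraphrase of Wolpert’s no-free-lunch result for supervised learning: if you average uniformly over every possible target function $f$, any two learning algorithms $A_1$ and $A_2$ have identical expected off-training-set error after $m$ training samples:

$$\sum_{f} P(\text{error} \mid f, m, A_1) \;=\; \sum_{f} P(\text{error} \mid f, m, A_2)$$

The impossibility only bites because the sum ranges over all $f$, including the overwhelming majority that are effectively pure noise.)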
So in most cases we almost certainly experience a “lucky fit” of some kind between various clearly productive thinking approaches and various practical restrictions on the kinds of input these approaches typically face.
Generative adversarial techniques in machine learning, and MIRI’s own Garrabrant Inductor, are probably relevant here, because they begin to spell out formal models where a reasoning process of some measurable strength is pitted against inputs produced by a process that is somewhat hostile but clearly weaker. A toy sketch of that flavor of setup is below.
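(This little Python sketch is my own illustration of the general shape, not MIRI’s actual formalism or a real GAN: the input-generating process is “hostile” but bounded, since it can only condition on the last bit it emitted, while the learner conditions on two bits of context and so can eventually model anything the generator does.)

```python
# Toy "bounded adversary vs. stronger reasoner" setup (illustrative only).
# The generator is weak: its next bit depends only on the last bit emitted.
# The learner is stronger: it tracks what followed each 2-bit context,
# so its hypothesis class strictly contains the generator's behavior.

def generate(history):
    """Weakly hostile process: just anti-repeat the previous bit."""
    return 0 if not history else 1 - history[-1]

def predict(history, counts):
    """Guess the next bit from counts of what followed the last 2-bit context."""
    ctx = tuple(history[-2:])
    return 1 if counts.get((ctx, 1), 0) > counts.get((ctx, 0), 0) else 0

def run(steps=1000):
    history, counts, correct = [], {}, 0
    for _ in range(steps):
        guess = predict(history, counts)
        bit = generate(history)
        correct += (guess == bit)
        ctx = tuple(history[-2:])
        counts[(ctx, bit)] = counts.get((ctx, bit), 0) + 1
        history.append(bit)
    return correct / steps

print(run())  # climbs toward 1.0 after a short warm-up: the stronger model wins
```

If you instead let the generator emit uniformly random bits, the learner’s accuracy pins at about 0.5 no matter what it does, which is exactly the “raw noise” case where no method can help.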
Hume functions in my mind as a sort of memetic LUCA for this vast field of research. The field is fundamentally motivated by the core observation that thinking correctly about raw noise is formally impossible, and yet we seem to be pretty decent at some kinds of thinking, so there must be some kind of fit between our various methods of thinking and the things those methods seem to work on.
Also, thanks! The Neyman-Pearson lemma has come up for me in practical professional situations before, but I’d never dug deep enough to recognize Jerzy Neyman as yet another player in this game :-)
Jerzy Neyman gets credit for lots of things, but in my neck of the woods in particular for inventing potential outcome notation. This is the notation for “if the first object had not been, the second never had existed” in Hume’s definition of causation.
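(For concreteness, here is the modern form of that notation, in the standard Neyman-Rubin setup with a binary treatment; this is my quick gloss, not anything specific to this thread. For each unit $i$ we posit two potential outcomes and define causal effects as their contrast:

$$Y_i(1) = \text{outcome if unit } i \text{ is treated}, \qquad Y_i(0) = \text{outcome if not}$$
$$\tau_i = Y_i(1) - Y_i(0), \qquad \mathrm{ATE} = \mathbb{E}\left[Y_i(1) - Y_i(0)\right]$$

Since at most one of $Y_i(1)$ and $Y_i(0)$ is ever observed for a given unit, the other is exactly Hume’s counterfactual: the outcome that “never had existed”.)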