These are heuristic descriptions; the essays don’t make explicit how to test whether a model is interpretable or not. I think it probably has something to do with model size: is the model reducible to one with fewer parameters, or not?
If you use e.g. the Akaike Information Criterion for model evaluation, you get around the size problem in theory. Model size is then something you score explicitly.
Personally, I still have intuitive problems with this approach, though: many phenomenological theories in physics are easier to interpret than Quantum Mechanics, and seem intuitively less complex, but are more complex in a formal sense (and thus get a worse AIC score, even if they predict the same thing).
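To make the score concrete, here’s a minimal sketch of AIC-based model comparison, assuming Gaussian residuals and least-squares fits (the data, the polynomial model classes, and all numbers are illustrative choices of mine, not anything from the discussion above):

```python
import numpy as np

def aic_gaussian(y, y_pred, k):
    # AIC = 2k - 2*ln(L_hat). For least squares with Gaussian errors this
    # reduces to n*ln(RSS/n) + 2k, up to an additive constant.
    n = len(y)
    rss = np.sum((y - y_pred) ** 2)
    return n * np.log(rss / n) + 2 * k

# Illustrative data: a quadratic trend plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.1, x.size)

for degree in (1, 2, 8):
    coeffs = np.polyfit(x, y, degree)   # a degree-d fit has d+1 parameters
    y_pred = np.polyval(coeffs, x)
    print(degree, aic_gaussian(y, y_pred, degree + 1))
# The degree-2 model should win: degree 8 fits the sample slightly more
# closely, but the 2k term charges it explicitly for the extra parameters.
```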
How much of human thought and behavior is “irreducible” in this way, resembling the huge black-box models of contemporary machine learning? Plausibly a lot.
‘Irreducible’ is a pretty strong stance. I agree that many things will be hard for humans to describe in a way that other humans find satisfying. But do you think that an epistemically rational entity with unlimited computational power (something like a Solomonoff inductor) would be unable to do that?
Also, WOW, I had no idea Arbital’s writing was so good. (The Solomonoff inductor link.) In case anyone else didn’t click the first time: it’s not just a definition, it’s a dialogue, probably by Eliezer, and it’s super cool.
So, it’s usually possible to create “adversarial examples”: datasets that are so “perverse” that they resist accurate prediction by simple models and actually require lots of variables. (For a very simple example, if you’re trying to fit a continuous curve based on a finite number of data points, you can make the problem arbitrarily hard with functions that are nowhere differentiable; there’s a rough sketch of this at the end of this comment.) I’m not being that rigorous here, but I think the answer to the question “are there irreducibly complex statistical models?” is yes. You can make models such that any simplification has a large accuracy cost.
Are there irreducibly complex statistical models that humans or animals use in real life? That’s a different and harder question, and my answer there is more like “I don’t know, but I could believe so.”
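Not rigorous either, but here’s a quick sketch of that parenthetical, with a truncated Weierstrass series standing in for the nowhere-differentiable target (the truncation depth, sample count, and polynomial degrees are all arbitrary choices):

```python
import numpy as np

def weierstrass(x, a=0.5, b=7, terms=12):
    # Truncated Weierstrass series: sum of a^n * cos(b^n * pi * x).
    # The infinite series (0 < a < 1, a*b > 1 + 3*pi/2) is continuous
    # everywhere but differentiable nowhere.
    return sum(a**n * np.cos(b**n * np.pi * x) for n in range(terms))

x = np.linspace(0.0, 1.0, 2000)
y = weierstrass(x)

# Least-squares polynomial fits of increasing degree: the worst-case error
# shrinks only slowly as the parameter count grows, because the target has
# detail at finer scales than any fixed-degree polynomial can track.
for degree in (5, 20, 80):
    fit = np.polynomial.Chebyshev.fit(x, y, degree)  # well-conditioned basis
    print(degree, float(np.max(np.abs(y - fit(x)))))
```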
I think the answer to the question “are there irreducibly complex statistical models?” is yes.
I agree that there are some sources of irreducible complexity, like ‘truly random’ events.
To me, the field of cognition does not pattern-match to ‘irreducibly complex’, but more to ‘we don’t have good models yet; growth mindset’. So, unless you have some patterns where you can prove that they are irreducible, I will stick with my priors, I guess. The example you gave me,
For a very simple example, if you’re trying to fit a continuous curve based on a finite number of data points, you can make the problem arbitrarily hard with functions that are nowhere differentiable.
falls squarely in the ‘our models are bad’ category; e.g., the Weierstrass function can be stated pretty compactly with analytic formulas (see the sketch after this comment).
But also, of course I can’t prove the non-existence of such irreducible, important processes in the brain.
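To make that concrete (this reuses weierstrass() and the samples x, y from the sketch above; the grid bounds are arbitrary): once the hypothesis class contains short analytic series rather than polynomials, the ‘hard’ target collapses to a two-parameter problem.

```python
import numpy as np

# A coarse grid search over the two series parameters (a, b) recovers the
# target essentially exactly, where no moderate-degree polynomial came
# close: the complexity was a mismatch between phenomenon and description
# language, not complexity in the phenomenon itself.
candidates = [(a, b) for a in np.linspace(0.1, 0.9, 9) for b in range(2, 10)]
best = min(candidates,
           key=lambda ab: float(np.max(np.abs(y - weierstrass(x, *ab)))))
print(best, float(np.max(np.abs(y - weierstrass(x, *best)))))
# Expected: (0.5, 7) with residual ~0, since the true (a, b) lies on the grid.
```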
and my answer there is more like “I don’t know, but I could believe so.”
Ah, how you think about that example helps clarify. I wasn’t even thinking about the possibility of an AI that could “learn” the analytic form of the Weierstrass function; I was thinking about the fact that trying to fit a polynomial to it would be arbitrarily hard.
Obviously “not modelable by ANY means” is a much stronger claim than “if you use THESE means, then your model needs a lot of epicycles to be close to accurate.” (Analyst’s mindset vs. computer scientist’s mindset; the computer scientist’s typical class of “possible algorithms” is way broader. I’m more used to thinking like an analyst.)
I think you and I are pretty close to agreement at this point.
Fair enough.
Yes, I completely agree with the weaker formulation “irreducible using only THESE means”, e.g. polynomials, MPTs, first-order logic, etc.