I think understanding the universe in terms of concepts and intuitions that humans developed innately is unlikely to be possible. There's no reason the ways of thinking about the world that were helpful in the ancestral environment have to be helpful for describing the fundamental nature of reality.
Wow. You and I have had this type of discussion at least once before here on Less Wrong, about whether we ‘really’ understand something (for example, gravity) or if we understand it ‘well enough’. I suspected an underlying difference in the way we were thinking about things.
I don’t believe this: that there are physically realized things that we can’t understand.
I think that our concepts and intuitions are flexible enough to accommodate any possible reality. Quantum mechanics, and even light, are really weird. But there's still hope for an aether (or whatever is required for this mechanical/local understanding I'm talking about) to bring it back down to human comprehension. The fact that these things are mathematically coherent (explicitly and fully described by equations) is especially compelling, since you can interrogate the equations to build structures in your mind that would model them.
For me, so far, the Maxwell equations are just floating in the air with no physical structural basis. However, if you spent time with them, wouldn’t you start building a physical intuition about them?
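For reference, the equations in question are Maxwell's equations, which in SI units (with charge density ρ and current density J) read:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, \\
\nabla \cdot \mathbf{B} &= 0, &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\end{aligned}
\end{aligned} % (two divergence equations and two curl equations)
```

The two curl equations couple a changing magnetic field to an electric field and vice versa; that mutual coupling is what the aether was historically supposed to supply a mechanical substrate for.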
Let me see if I understand your claim:
Let physics be described by whatever math is necessary. You predict that human general intelligence will be flexible enough to understand it. If someone is consciously “doing math” rather than “applying intuition”, then they don’t understand it yet. But the solution to that may involve growing an intuition, rather than changing the math.
That… sounds reasonable enough. But I question whether the intuition always comes in the form of a mechanism, or even any additional concepts at all.
I would only qualify my earlier statement: while human intelligence is flexible enough to understand anything that is possible, it might not be large enough. If there's too much going on, the brain may simply not be able to compute it. In that case, the non-understanding doesn't feel non-intuitive; it just feels too complicated.
Even correct intuition? I guess I don’t mind putting forth a more definitive assertion that intuition must be based on a mechanical understanding. (While it’s likely I’m wildly guilty of the typical mind fallacy, that’s nevertheless my view.)
I’ve been considering the hypothesis that mathematical intuition (especially intuition about highly abstract, non-physical things) comes from an ability to model that math physically in the brain. When we interrogate our ‘intuition’, we’re actually interrogating these (mechanical) models. Modeling is a high-intelligence activity, and a model ‘correct enough’ to yield intuition might be hard to recognize as a model at all, if we were forced to explain in detail how we knew.
(If we have a correct intuition about mathematics outside our experience, how else could we have it?)
New post?
This is correct, but there is a useful layer of abstraction to consider. There is a set of operations the brain does that we can be conscious of doing, and we can inspect the structure of how they interact within our own brains. These operations are, of course, implemented by physics: they come from the structure of neurons and other supporting biological components. Therefore, the structures built out of these operations are also, ultimately, implemented by physics, though a lot can be learned by looking at the introspectively observable structure. These operations can be used to build a model of arithmetic, which gives us some power to “explain in detail how we knew”.
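As a toy illustration of that last step (this is my sketch, not something spelled out above): a model of arithmetic can be built out of a single primitive "successor" operation, Peano-style. The representation and names below are purely illustrative.

```python
# Toy Peano-style arithmetic: everything is assembled from a zero object
# and one primitive "successor" operation, showing how a small set of
# simple operations composes into a model of arithmetic.

ZERO = ()                        # zero represented as the empty tuple


def succ(n):
    """The successor operation: wrap n in one more layer."""
    return (n,)


def add(a, b):
    """Addition by recursion: a + 0 = a, a + succ(b) = succ(a + b)."""
    if b == ZERO:
        return a
    return succ(add(a, b[0]))


def mul(a, b):
    """Multiplication by recursion: a * 0 = 0, a * succ(b) = a * b + a."""
    if b == ZERO:
        return ZERO
    return add(mul(a, b[0]), a)


def to_int(n):
    """Translate a Peano numeral back into a familiar Python int."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count


two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))   # 5
print(to_int(mul(two, three)))   # 6
```

The only point is that addition and multiplication fall out of composing one primitive operation; that is the sense in which consciously inspectable operations could be assembled into a model of arithmetic.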