I don’t understand what you mean by ‘ideals and definitions,’ or how these are not influenced by past empirical observations and observed correlations. Any definition can simply be reduced to past observed correlations. The function of a heart is based strictly on past observations and on our mapping of those observations to a functional model of how the heart behaves.
My argument seems trivial to me, because the idea of knowledge that is non-empirical, not based on past information and observed correlations, seems nonsensical.
The distinction between “ideal” and “definition” is fuzzy the way I’m using it, so you can think of them as the same thing for simplicity.
Symmetry is an example of an ideal. It’s not a thing you directly observe. You can observe a particular symmetry, but there are infinitely many kinds of symmetries, and you have some general notion of symmetry that unifies all of them, including ones you’ve never seen. You can construct a symmetry you’ve never seen, and, given a bit of time to think about the problem, you can do it algorithmically based on your idea of what symmetries are. You can even construct symmetries that, at first glance, would not look like symmetries to someone else, and you can convince that someone else that what you’ve constructed is a symmetry.
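To make the “algorithmic construction” point concrete, here’s a minimal sketch in Python, under my own choice of representation: model a symmetry of a square as a relabeling of its corners that preserves all pairwise distances. The criterion recovers exactly the eight familiar rotations and reflections without listing them in advance.

```python
from itertools import permutations

# Corners of a unit square, indexed 0..3.
corners = [(0, 0), (1, 0), (1, 1), (0, 1)]

def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def is_symmetry(perm):
    """A relabeling of the corners counts as a symmetry if it preserves
    every pairwise distance, i.e. the square 'looks the same' afterward."""
    return all(
        dist2(corners[i], corners[j]) == dist2(corners[perm[i]], corners[perm[j]])
        for i in range(4) for j in range(i + 1, 4)
    )

# Enumerate all 24 relabelings; only the distance-preserving ones survive.
symmetries = [p for p in permutations(range(4)) if is_symmetry(p)]
print(len(symmetries))  # 8: the four rotations and four reflections
```

The general notion (distance preservation) does the work here, not a catalog of previously observed symmetries.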
The set of natural numbers is an example of something that’s defined, not observed. Each natural number is defined sequentially, starting from 1.
Addition is an example of something that’s defined, not observed.
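Both of those are easy to exhibit as pure definition. Here’s a hedged Python sketch of the standard Peano-style construction (the tuple encoding is my own choice, and I start from zero rather than one purely for convenience): nothing below is measured or observed; the numbers and their addition are generated entirely by rules.

```python
# Peano-style naturals: a number is either Zero or the successor of a number.
Zero = ()                       # encoding choice: zero is the empty tuple

def succ(n):
    """The successor rule: each natural is defined from the previous one."""
    return (n,)

def add(m, n):
    """Addition is also pure definition: recurse on the second argument."""
    if n == Zero:
        return m                # m + 0 = m
    return succ(add(m, n[0]))   # m + succ(k) = succ(m + k)

def to_int(n):
    """Translate back to Python ints just for display."""
    return 0 if n == Zero else 1 + to_int(n[0])

one = succ(Zero)
two = succ(one)
print(to_int(add(one, two)))    # 3
```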
The general notion of a bottle is an ideal.
In terms of philosophy, an ideal is the Platonic Form of a thing. In terms of category theory, an ideal is an initial or terminal object, and a definition is a commutative diagram.
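One concrete instance of the category-theory reading, sketched under my own framing: the naturals with zero and successor behave like an initial object, in the sense that there is exactly one structure-preserving map out of them into any other structure with a starting point and a step. That unique map is the familiar fold, and the two equations in its docstring are the commuting diagram.

```python
def fold(x, f, n):
    """The unique map h out of the naturals into a structure (X, x, f):
    h(0) = x and h(n + 1) = f(h(n)). Initiality says any h satisfying
    these two equations must be this one."""
    return x if n == 0 else f(fold(x, f, n - 1))

# One definition, reused against different target structures:
print(fold(0, lambda k: k + 2, 5))     # 2 * 5 = 10
print(fold(1, lambda k: k * 3, 4))     # 3 ** 4 = 81
print(fold("", lambda s: s + "*", 3))  # unary strings: '***'
```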
I didn’t say these things weren’t influenced by past observations and correlations. I said past observations and correlations were irrelevant for distinguishing them. Meaning, for example, you can distinguish between more natural numbers than your past experiences should allow.
I’m going to risk going down a meaningless rabbit hole of semantic nothingness here --
But I still disagree with your distinction, although I do appreciate the point you’re making. I view the human brain as simply a special case of any other computer, and I think that’s the correct way to view it. You’re correct that we have, as a collective species, proven and defined these abstract patterns. Yet even all these patterns are based on observations and on rules of reasoning between our minds and empirical reality. We can use our neurons to generate more sequences in a pattern, but the idea of an infinite set of numbers is only an abstraction, or an appeal to something that could exist.
Similarly, a silicon computer can hold functions and mappings, but it can never create an array of all numbers. It all reduces down to electrical on-off switches, no matter how complex the functions are.
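To illustrate what I mean: a computer can carry the finite rule that stands for the infinite set without ever materializing the set. A small Python sketch (the naming is mine):

```python
from itertools import islice

def naturals():
    """A finite program denoting the infinite set: yields 1, 2, 3, ...
    on demand, but never holds an array of all numbers in memory."""
    n = 1
    while True:
        yield n
        n += 1

print(list(islice(naturals(), 5)))  # [1, 2, 3, 4, 5]: any finite prefix, never the whole
```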
There is also no rule that says natural numbers or any category can’t change tomorrow. Or that right outside the farthest information set in the horizon of space available to humans, the gravitational constant and the laws of mathematics don’t all shift by 0.1. It’s sort of nonsensical, but it’s part of the view that the only difference between things that feel real and inherently distinguishable and things that don’t is our perception of how certain they are to continue based on prior information.
In my experience talking about this with people before, it’s not the type of thing people change their mind on (not implying your view is necessarily wrong). It’s a view of reality that we develop pretty foundationally, but I figured I’d write out my thoughts anyway for fun. It’s also sort of a self-indulgent argument about how we perceive reality. But, hey, it’s late and I’m relaxing.
I don’t understand what point you’re making with the computer, as we seem to be in complete agreement there. Nothing about the notion of ideals and definitions suggests that computers can’t have them or their equivalent. It’s obvious enough that computers can represent them, as you demonstrated with your example of natural numbers. It’s obvious enough that neurons and synapses can encode these things, and that they can fire in patterned ways based on them, because… well, that’s what neurons do, and neurons seem to be doing the bulk of the heavy lifting as far as thinking goes.
Where we disagree is in saying that all concepts our neurons recognize are equivalent and should be reasoned about in the same way. There are clearly some notions that we recognize as valid only after seeing sufficient evidence. For these notions, I think Bayesian reasoning is perfectly well-suited. There are also clearly notions we recognize as valid for which no evidence is required; for these, only usefulness is required, and sometimes not even that, and I think we need something else to handle them. Bayesian reasoning cannot deal with this second kind, because their acceptability has nothing to do with evidence.
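To be concrete about the first kind, this is the sort of evidence-driven updating I mean, sketched in Python with invented numbers:

```python
# Discrete Bayes update: is a coin fair or biased toward heads?
# Evidence moves belief here; that is exactly what the second,
# definitional kind of notion doesn't need.

priors = {"fair": 0.5, "biased": 0.5}            # invented starting beliefs
p_heads = {"fair": 0.5, "biased": 0.8}           # P(heads | hypothesis)

def update(beliefs, heads):
    """One Bayes step: weight each hypothesis by the likelihood of the
    observation, then renormalize so the beliefs sum to one."""
    new = {h: p * (p_heads[h] if heads else 1 - p_heads[h])
           for h, p in beliefs.items()}
    total = sum(new.values())
    return {h: p / total for h, p in new.items()}

beliefs = priors
for flip in [True, True, True, False, True]:     # observed flips
    beliefs = update(beliefs, flip)
print(beliefs)  # weight shifts toward 'biased' as heads accumulate
```

No analogous update makes sense for “the naturals are defined by zero and successor”; there is no observation whose likelihood bears on it.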
You argue that this second kind is irrelevant because these things exist solely in people’s minds. The problem is that the same concepts recur again and again in many people’s minds. I would agree with you if we only ever had to deal with a physical world in which people’s minds did not matter all that much, but that’s not the world we live in. If you want to be able to reliably convey your ideas to others, if you want to understand how people think at a more fundamental level, if you want your models to be useful to someone other than yourself, if you want to develop ideas that people will recognize as valid, if you want to generalize ideas that other people have, if you want your thoughts to be integrated with those of a community for mutual benefit, then you cannot ignore these abstract patterns, because they constitute such a vast amount of how people think.
These patterns also, incidentally, have a tremendous impact on how your own brain thinks and on the kinds of patterns your brain lets you consciously recognize. If you want to do a better job of generalizing your own ideas in reliable and useful ways, then you need to understand how they work.
For what it’s worth, I do think there are physically-grounded reasons for why this is so.