You think that words can be defined, and that if you look at a sentence and know the grammatical rules and the definitions of those words, you can work out what the sentence means.
That belief is wrong. If reasoning worked that way, we would have smart AI by now. Meaning depends on context.
I like the concept of phenomenological primitives. Getting people to integrate a new phenomenological primitive into their thinking is really hard. I have even read someone argue that in physics education it's impossible to teach new primitives.
Teaching physics students that a metal ball thrown at the floor bounces back because of springiness, the ball contracting when it hits the floor and then expanding again, is hard. It takes a while until students stop reasoning that the floor somehow pushes the ball back and start reasoning that the steel ball contracts.
In biology there is the concept of a pseudogene. It's basically a string of DNA that looks like a protein-coding gene but isn't expressed into a protein.
At first glance that seems like a fine definition; on closer investigation, different biologists disagree about what "looking like a gene" means. Different bioinformaticians each write their own algorithms to detect genes, and there are cases where algorithm A says that a sequence D is a pseudogene while algorithm B says that D isn't.
Of course, changing the training data on which the algorithm runs also changes the classification. A really deep definition of a particular concept of a pseudogene would probably have to mention all the training data and the specific machine learning algorithm used.
There are various complicated arguments for preferring one algorithm over another because it yields a better classification. You can say it's okay if the algorithm misses some strings that are genes because they don't look the way genes are supposed to look, or you can insist that your algorithm detect every gene that exists as a gene. As a result, the number of pseudogenes changes.
You could speak of pseudogene_A and pseudogene_B, but in many cases you don't need to think about those details and can abstract them away. It's okay if a few people figure out a decent definition of pseudogene that behaves the way it's supposed to, and then others can use that notion.
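To make the disagreement concrete, here is a toy Python sketch (all names and criteria are invented for illustration, not real bioinformatics): two gene detectors, standing in for algorithms tuned on different data, disagree about whether the same sequence D counts as a pseudogene.

```python
# Toy sketch: two hypothetical gene detectors with different criteria,
# standing in for algorithms trained on different data, disagree about
# whether the same DNA string is a pseudogene.

def has_orf(seq: str, min_codons: int) -> bool:
    """Crude check: a start codon followed by at least min_codons codons before a stop."""
    start = seq.find("ATG")
    if start == -1:
        return False
    stops = {"TAA", "TAG", "TGA"}
    codons = 0
    for i in range(start + 3, len(seq) - 2, 3):
        if seq[i:i + 3] in stops:
            return codons >= min_codons
        codons += 1
    return False

def looks_like_gene_A(seq: str) -> bool:
    # Algorithm A: strict, requires a long open reading frame.
    return has_orf(seq, min_codons=10)

def looks_like_gene_B(seq: str) -> bool:
    # Algorithm B: permissive, a short open reading frame is enough.
    return has_orf(seq, min_codons=3)

def is_pseudogene(seq: str, looks_like_gene, is_expressed: bool) -> bool:
    # "Looks like a gene but isn't expressed into a protein."
    return looks_like_gene(seq) and not is_expressed

# A sequence D with a short open reading frame, assumed (for the example) not to be expressed.
D = "ATG" + "GCT" * 5 + "TAA"

print("pseudogene_A:", is_pseudogene(D, looks_like_gene_A, is_expressed=False))  # False
print("pseudogene_B:", is_pseudogene(D, looks_like_gene_B, is_expressed=False))  # True
```

The point of the sketch is only that the extension of "pseudogene" shifts with the detection criterion: under algorithm A, D is not a pseudogene; under algorithm B, it is.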
In philosophy, the literature and how it handles various concepts could be thought of as training data for the general human mental classification algorithm. A full definition of a concept would have to spell out what the concept does in various edge cases.
On LW we have our jargon problem. We can use an existing word for a concept, or we can invent a new term for what we're talking about. We have to decide whether the existing term is good enough for our purposes or whether we mean something slightly different that warrants a new term.
That’s not always an easy decision.
To repeat a cliché: "There are only two hard things in Computer Science: cache invalidation and naming things."
Naming is also hard outside of computer science.