I’ll give you an example of an ontology in a different field (linguistics) and maybe it will help.
Take WordNet, an ontology of the English language. If you type “book” into its online browser and keep clicking “S:” and then “direct hypernym”, you will learn that book’s place in the hierarchy is as follows:
… > object > whole/unit > artifact > creation > product > work > publication > book
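If a runnable version of “keep clicking direct hypernym” helps, here is a minimal sketch using NLTK’s WordNet interface (my own illustration, not part of any of the posts below; it assumes NLTK and its WordNet data are installed, and that `book.n.01` is the “published written work” sense):

```python
# A minimal sketch of walking the "direct hypernym" chain with NLTK's WordNet
# interface. Assumes: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

synset = wn.synset('book.n.01')   # the "published written work" sense of "book"
chain = [synset]
while synset.hypernyms():         # keep clicking "direct hypernym"
    synset = synset.hypernyms()[0]
    chain.append(synset)

# Print from the most general concept down to "book", mirroring the path above.
print(' > '.join(s.lemmas()[0].name() for s in reversed(chain)))
```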
So if I had to understand one of the LessWrong (-adjacent?) posts mentioning an “ontology”, I would forget about philosophy and just think of a giant tree of words. Because I like concrete examples.
Now let’s go and look at one of those posts.
https://arbital.com/p/ontology_identification/#h-5c-2.1 , “Ontology identification problem”:
Consider chimpanzees. One way of viewing questions like “Is a chimpanzee truly a person?”—meaning, not, “How do we arbitrarily define the syllables per-son?” but “Should we care a lot about chimpanzees?”—is that they’re about how to apply the ‘person’ category in our desires to things that are neither typical people nor typical nonpeople. We can see this as arising from something like an ontological shift: we’re used to valuing cognitive systems that are made from whole human minds, but it turns out that minds are made of parts, and then we have the question of how to value things that are made from some of the person-parts but not all of them.
My “tree of words” understanding: we classify things into “human minds” or “not human minds”, but now that we know more about possible minds, we don’t want to use this classification anymore. Boom, we have more concepts now and the borders don’t even match. We have a different ontology.
From the same post:
In this sense the problem we face with chimpanzees is exactly analogous to the question a diamond maximizer would face after discovering nuclear physics and asking itself whether a carbon-14 atom counted as ‘carbon’ for purposes of caring about diamonds.
My understanding: You learned more about carbon and now you have new concepts in your ontology: carbon-12 and carbon-14. You want to know if a “diamond” should be “any carbon” or should be refined to “only carbon-12”.
Let’s take a few more posts:
The standard answer is that we say “you lose”—we explain how we’ll be able to exploit them (e.g. via Dutch books). Even when abstract “irrationality” is not compelling, “losing” often is. Again, that’s particularly true under ontology improvement. Suppose an agent says “well, I just won’t take bets from Dutch bookies”. But then, once they’ve improved their ontology enough to see that all decisions under uncertainty are a type of bet, they can’t do that—or at least they need to be much more unreasonable to do so.
My understanding: You thought only [particular things] were bets so you said “I won’t take bets”. I convinced you that all decisions are bets. This is a change in ontology. Maybe you want to reevaluate your statement about bets now.
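Since “Dutch book” is doing real work in that quote, here is a toy numeric sketch of my own (not from the quoted post) of how incoherent probabilities get exploited:

```python
# Toy Dutch book: an agent whose probabilities for "rain" and "no rain" sum to
# more than 1 will accept both bets at its own prices and lose either way.
p_rain, p_no_rain = 0.7, 0.6   # incoherent: they sum to 1.3

stake = 1.0                    # each bet pays `stake` if it wins
cost = p_rain * stake + p_no_rain * stake   # the agent pays 1.30 total up front

payout_if_rain = stake         # only the "rain" bet pays out: 1.00
payout_if_no_rain = stake      # only the "no rain" bet pays out: 1.00

print(round(cost - payout_if_rain, 2))      # 0.3 -> guaranteed loss if it rains
print(round(cost - payout_if_no_rain, 2))   # 0.3 -> guaranteed loss if it doesn't
```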
Ontology identification is the problem of mapping between an AI’s model of the world and a human’s model, in order to translate human goals (defined in terms of the human’s model) into usable goals (defined in terms of the AI’s model).
My understanding: AI and humans have different sets of categories. AI can’t understand what you want it to do if your categories are different. Like, maybe you have “creative work” in your ontology, and this subcategory belongs to the category of “creations by human-like minds”. You tell the AI that you want to maximize the number of creative works and it starts planting trees. “Tree is not a creative work” is not an objective fact about a tree; it’s a property of your ontology; sorry. (Trees are pretty cool.)
Also, to answer your question about “probability” in a sister chain: yes, “probability” can be in someone’s ontology. Things don’t have to “exist” to be in an ontology.
Here’s another real-world example:
You are playing a game. Maybe you’ll get a heart, maybe you won’t. The concept of probability exists for you.
This person — https://youtu.be/ilGri-rJ-HE?t=364 — is creating a tool-assisted speedrun for the same game. On frame 4582 they’ll get a heart, on frame 4581 they won’t, so they purposefully waste a frame to get a heart (for instance). “Probability” is not a thing that exists for them — for them the universe of the game is fully deterministic.
The speedrunner’s ontology is “right” and yours is “wrong”. On the other hand, your ontology is useful to you when playing the game, and theirs wouldn’t be. You don’t even need to have different knowledge about the game; you both know the game is deterministic, and still it changes nothing.
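To make the “no probability in their ontology” point concrete, here is a toy sketch (entirely made up, not the game’s actual code) of why wasting a single frame can deterministically flip a drop:

```python
# Toy model of a frame-deterministic drop: the outcome is a pure function of
# the frame counter, so "probability" only exists for a player who doesn't
# track frames. The hash constant and threshold here are arbitrary.
def heart_drops(frame: int) -> bool:
    return (frame * 2654435761) % 256 < 32   # made-up drop rule

print(heart_drops(4581))   # False -> no heart on this frame
print(heart_drops(4582))   # True  -> wait one frame and the heart is guaranteed
```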
Actually, let’s do a 2x2 matrix for all combinations of, let’s say, “probability” and “luck” in one’s personal ontology:
Person C: probability and luck both exist. Probability is partly influenced/swayed by luck.
Person D: probability exists, luck doesn’t. (“You” are person D here.)
Person E: luck exists, probability doesn’t. If you didn’t get a heart, you are unlucky today for whatever reason. If you did get a heart, well, you could be even unluckier but you aren’t. An incredibly lucky person could well get a hundred hearts in a row.
Person F: probability and luck both don’t exist and our lives are as deterministic as the game; using the concepts of probability or luck even internally, as “fake concepts”, is useless because actually everything is useless. (Some kind of fatalism.)
//
Now imagine somebody who replies to this comment saying “you could rephrase this in terms of beliefs”. That would be an example of a person saying, essentially, “hey, you should’ve used [my preferred ontology] instead of yours”, where the preferred ontology uses the concept of “belief” instead of the concept of “ontology”. Which is fine!
I’ll also give you two examples of using ontologies — as in “collections of things and relationships between things” — for real-world tasks that are much dumber than AI.
ABBYY attempted to create a giant ontology of all concepts, then develop parsers from natural languages into “meaning trees” and renderers from meaning trees back into natural languages. The project was called “Compreno”. If it had worked, it would have given them a “perfect” translation tool from any supported language into any supported language, without having to handle each language pair separately. To my knowledge, they kept trying for 20+ years, and it has probably died: I google Compreno every few years and there’s still nothing.
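The appeal of that design, as I understand it, is that each language needs only one parser into and one renderer out of the shared meaning representation. A toy sketch of my own (hypothetical function names and a made-up miniature “meaning tree”, nothing like ABBYY’s real system):

```python
# Toy interlingua pipeline: N languages need ~2N components (one parser and one
# renderer each) instead of N*(N-1) direct translators.
def parse_en(text: str) -> dict:
    # Stand-in parser: a real one would map words onto ontology concepts.
    if text == "I read a book":
        return {"predicate": "READ", "agent": "SPEAKER", "theme": "BOOK"}
    return {}

def render_de(tree: dict) -> str:
    # Stand-in renderer: a real one would pick target-language words and grammar.
    if tree.get("predicate") == "READ" and tree.get("theme") == "BOOK":
        return "Ich lese ein Buch"
    return ""

print(render_de(parse_en("I read a book")))  # English -> meaning tree -> German
```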
Let’s say you are Nestle and you want to sell cereal in 100 countries. You also want to be able to say “organic” on your packaging. For each country, you need to determine if your cereal would be considered “organic”. This also means that you need to know for all of your cereal’s ingredients whether they are “organic” by each country’s definition (and possibly for sub-ingredients, etc). And there are 50 other things that you also have to know about your ingredients — because of food safety regulations, etc. I don’t have first-hand knowledge of this, but I was once approached by a client who wanted to develop tools to help Nestle-like companies solve such problems; and they told me that right now their tool of choice was custom-built ontologies in Protege, with relationships like is-a, instance-of, etc.
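For flavor, here is a minimal sketch of the kind of is-a / instance-of structure involved (hypothetical classes and made-up “rules” of my own, nothing to do with real regulations or the client’s actual Protege models):

```python
# Toy ingredient ontology: a class hierarchy ("is-a"), concrete batches
# ("instance-of"), and a per-country query over it.
IS_A = {                      # class -> parent class
    "oat_flakes": "cereal_grain",
    "cereal_grain": "plant_ingredient",
    "plant_ingredient": "ingredient",
    "cane_sugar": "sweetener",
    "sweetener": "ingredient",
}

INSTANCE_OF = {               # concrete batch -> class
    "batch_4711": "oat_flakes",
}

# Made-up per-country rules: which classes may count as organic at all.
ORGANIC_ELIGIBLE = {
    "DE": {"plant_ingredient"},
    "US": {"plant_ingredient", "sweetener"},
}

def ancestors(cls: str) -> set[str]:
    """All superclasses of a class, following is-a links upward."""
    out = set()
    while cls in IS_A:
        cls = IS_A[cls]
        out.add(cls)
    return out

def may_be_organic(batch: str, country: str) -> bool:
    cls = INSTANCE_OF[batch]
    return bool(({cls} | ancestors(cls)) & ORGANIC_ELIGIBLE[country])

print(may_be_organic("batch_4711", "DE"))  # True under these made-up rules
print(may_be_organic("batch_4711", "US"))  # True as well
```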