I’ll also give you two examples of using ontologies — as in “collections of things and relationships between things” — for real-world tasks that are much dumber than AI.
ABBYY attempted to create a giant ontology of all concepts, then develop parsers from natural languages into “meaning trees” and renderers from meaning trees into natural languages. The project was called “Compreno”. Had it worked, it would have given them a “perfect” translation tool from any supported language into any supported language, without having to handle each language pair separately. To my knowledge, they kept trying for 20+ years, and the project has probably died: I google Compreno every few years and there’s still nothing.
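(To make the interlingua idea concrete: below is a tiny hypothetical sketch, not anything resembling Compreno’s actual design. The “parser”, “renderer”, and concept names are invented; the point is only that N parsers plus N renderers can stand in for N×N direct language-pair translators.)

```python
# Hypothetical interlingua pipeline: language -> meaning tree -> language.
# Everything here is a toy; real systems do deep syntactic/semantic analysis.
from dataclasses import dataclass

@dataclass
class MeaningTree:
    """A stand-in for a language-independent semantic representation."""
    predicate: str        # a universal concept, e.g. "GREET"
    args: dict[str, str]  # argument roles, also filled with universal concepts

def parse_en(text: str) -> MeaningTree:
    # Toy English "parser" that handles exactly one sentence
    return MeaningTree(predicate="GREET", args={"target": "WORLD"})

DE_LEXICON = {"WORLD": "Welt"}

def render_de(tree: MeaningTree) -> str:
    # Toy German renderer: maps universal concepts back into one language
    if tree.predicate == "GREET":
        return f"Hallo, {DE_LEXICON[tree.args['target']]}!"
    raise NotImplementedError(tree.predicate)

print(render_de(parse_en("Hello, world!")))  # Hallo, Welt!
```

Adding an eleventh language then means writing one more parser and one more renderer, not twenty more pairwise translators.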
Let’s say you are Nestle and you want to sell cereal in 100 countries. You also want to be able to say “organic” on your packaging. For each country, you need to determine whether your cereal would be considered “organic” there. This also means that you need to know, for all of your cereal’s ingredients, whether they are “organic” by each country’s definition (and possibly for sub-ingredients, etc.). And there are 50 other things you also have to know about your ingredients, because of food safety regulations and so on. I don’t have first-hand knowledge of this, but I was once approached by a client who wanted to develop tools to help Nestle-like companies solve such problems; they told me that at the time their tool of choice was custom-built ontologies in Protege, with relationships like is-a, instance-of, etc.
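(To give a flavor of what those is-a / instance-of relationships buy you, here’s a minimal sketch in plain Python. A real tool built in Protege would use OWL classes and a reasoner; every ingredient name and country rule below is made up for illustration.)

```python
# instance-of: each concrete ingredient is an instance of some ingredient class
INSTANCE_OF = {
    "acme_corn_syrup": "corn_syrup",
    "acme_oats": "oats",
}

# is-a: class hierarchy over ingredient classes
IS_A = {
    "corn_syrup": "sweetener",
    "sweetener": "ingredient",
    "oats": "grain",
    "grain": "ingredient",
}

# part-of: a product is made of concrete ingredients
PART_OF = {
    "crunchy_cereal": ["acme_corn_syrup", "acme_oats"],
}

# Per-country "organic" rules over ingredient classes (purely invented)
ORGANIC_RULES = {
    "EU": lambda classes: "sweetener" not in classes,  # say: no added sweeteners
    "US": lambda classes: True,                        # say: anything goes
}

def ancestors(cls: str) -> set[str]:
    """All classes reachable from `cls` via is-a, including itself."""
    out = {cls}
    while cls in IS_A:
        cls = IS_A[cls]
        out.add(cls)
    return out

def is_organic(product: str, country: str) -> bool:
    """Check a product against one country's (made-up) definition of 'organic'."""
    classes = set()
    for ingredient in PART_OF[product]:
        classes |= ancestors(INSTANCE_OF[ingredient])
    return ORGANIC_RULES[country](classes)

for country in ORGANIC_RULES:
    print(country, is_organic("crunchy_cereal", country))  # EU False, US True
```

The payoff of the is-a chain is that a rule written against “sweetener” automatically catches “corn_syrup” and whatever sub-ingredients get classified under it later, without anyone re-listing them for each of the 100 countries.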
Also, to answer your question about “probability” in a sister chain: yes, “probability” can be in someone’s ontology. Things don’t have to “exist” to be in an ontology.
Here’s another real-world example:
You are playing a game. Maybe you’ll get a heart, maybe you won’t. The concept of probability exists for you.
This person — https://youtu.be/ilGri-rJ-HE?t=364 — is creating a tool-assisted speedrun for the same game. On frame 4582 they’ll get a heart, on frame 4581 they won’t, so they purposefully waste a frame to get a heart (for instance). “Probability” is not a thing that exists for them — for them the universe of the game is fully deterministic.
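(If you’re wondering how “waste a frame to get a heart” can even work: many older games step a deterministic RNG once per frame, so whether a drop happens is a pure function of the frame count. Here’s a hypothetical sketch; the generator constants and the drop rule are invented, not taken from any real game.)

```python
def rng_state(frame: int, seed: int = 0x1234) -> int:
    # Toy linear congruential generator, stepped once per frame since power-on
    state = seed
    for _ in range(frame):
        state = (state * 0x41C64E6D + 0x3039) & 0xFFFFFFFF
    return state

def drops_heart(frame: int) -> bool:
    # Toy drop rule: a heart drops when the low byte of the RNG state is small
    return (rng_state(frame) & 0xFF) < 32

# To a player this looks like chance; to a TAS author it's a lookup table:
for frame in (4580, 4581, 4582, 4583):
    print(frame, drops_heart(frame))
```

Delaying an input by one frame just moves you to a different row of that table, which is the kind of manipulation described above.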
The person’s ontology is “right” and your ontology is wrong. On the other hand, your ontology is useful to you when playing the game, and theirs wouldn’t be. You don’t even need to have different knowledge about the game: you both know the game is deterministic, and yet that changes nothing.
Actually, let’s do a 2x2 matrix for all combinations of, say, “probability” and “luck” in one’s personal ontology:
Person C: probability and luck both exist. Probability is partly influenced/swayed by luck.
Person D: probability exists, luck doesn’t. (“You” are person D here.)
Person E: luck exists, probability doesn’t. If you didn’t get a heart, you are unlucky today for whatever reason. If you did get a heart, well, you could be even unluckier but you aren’t. An incredibly lucky person could well get a hundred hearts in a row.
Person F: neither probability nor luck exists, and our lives are as deterministic as the game; using the concepts of probability or luck even internally, as “fake concepts”, is useless, because actually everything is useless. (Some kind of fatalism.)
//
Now imagine somebody who replies to this comment saying “you could rephrase this in terms of beliefs”. That would be an example of a person saying, essentially, “hey, you should’ve used [my preferred ontology] instead of yours”, where the preferred ontology uses the concept of “belief” instead of “ontology”. Which is fine!