An ontology is a collection of sets of objects and properties (or maybe: a collection of sets of points in thingspace). An agent’s ontology determines the abstractions it makes.
For example, “chairs”_Zach is in my ontology; it is (or points to) a set of (possible-)objects (namely what I consider chairs) that I bundle together. “Chairs”_Adam is in your ontology, and it is a very similar set of objects (what you consider chairs). This overlap makes it easy for me to communicate with you and predict how you will make sense of the world.
(Also necessary for easy-communication-and-prediction is that our ontologies are pretty sparse, rather than full of astronomically many overlapping sets. So if we each saw a few chairs we would make very similar abstractions, namely to “chairs”_Zach and “chairs”_Adam.)
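To make the "collection of sets" picture concrete, here's a minimal Python sketch (a toy illustration, not any standard formalism; the object names and the overlap measure are made up) of two agents' ontologies as named sets of objects, plus a quick check of how much their "chairs" concepts overlap:

```python
# Toy illustration: an ontology as a mapping from concept names to sets of objects.
# (All object names and the overlap measure are invented for illustration.)

def overlap(a: set, b: set) -> float:
    """Jaccard overlap between two concepts' extensions."""
    return len(a & b) / len(a | b) if a | b else 1.0

zach_ontology = {
    "chairs": {"office_chair", "dining_chair", "beanbag"},
    "tables": {"dining_table", "desk"},
}

adam_ontology = {
    "chairs": {"office_chair", "dining_chair", "stool"},
    "tables": {"dining_table", "desk"},
}

# High overlap between "chairs"_Zach and "chairs"_Adam is what makes
# communication and prediction easy.
print(overlap(zach_ontology["chairs"], adam_ontology["chairs"]))  # 0.5
```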
(Why care? Most humans seem to have similar ontologies, but AI systems might have very different ontologies, which could cause surprising behavior. E.g. the panda-gibbon thing, where an imperceptibly perturbed image of a panda gets classified as a gibbon. Roughly, if the shared-human-ontology isn’t natural [i.e. learned by default] and moreover is hard to teach an AI, then that AI won’t think in terms of the same concepts as we do, which might be bad.)
[Note: substantially edited after Charlie expressed agreement.]
Just to paste my answer below yours since I agree:
There’s “ontology” and there’s “an ontology.”
Ontology with no “an” is the study of what exists. It’s a genre of philosophy questions. However, around here we don’t really worry about it too much.
What you’ll often see on LW is “an ontology,” or “my ontology” or “the ontology used by this model.” In this usage, an ontology is a set of building blocks used in a model of the world. It’s the foundational stuff that other stuff is made out of or described in terms of.
E.g. Minecraft has “an ontology,” which is the basic set of blocks (and their internal states, if applicable), plus a 3-D grid model of space.
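As a rough sketch of what that kind of ontology looks like as a data structure (a toy voxel world, not Minecraft's actual internals; the block names and world size are made up):

```python
# Toy "Minecraft-like" ontology: a fixed vocabulary of block types
# plus a 3-D grid of positions, each holding one block.
# (Block names and world size are invented for illustration.)

BLOCK_TYPES = {"air", "dirt", "stone", "water"}  # the ontological primitives

# The world model: every point in a 4x4x4 grid is some block type.
world = {
    (x, y, z): "air"
    for x in range(4)
    for y in range(4)
    for z in range(4)
}
world[(0, 0, 0)] = "stone"
world[(0, 1, 0)] = "dirt"

# Everything describable in this model bottoms out in blocks-at-positions.
assert all(block in BLOCK_TYPES for block in world.values())
print(world[(0, 1, 0)])  # "dirt"
```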
Hm, I think I see. Thanks. But what about abstract things? Things that never boil down to the physical. Like “probability”. Would the concept of probability be something that would belong to someone’s ontology?
It could be! People don’t use the same model of the world all the time. E.g. when talking about my living room I might treat a “chair” as a basic object, even though I could also talk about the atoms making up the chair if prompted to think differently.
When talking about math, people readily reason using ontologies where mathematical objects are the basic building blocks. E.g. “four is next to five.” But if talking about tables and chairs, statements like “this chair has four legs” don’t need to use “four” as part of the ontology; the “four-ness” is just describing a pattern in the actual ontologically basic stuff (chair-legs).
I also agree. I was going to write a similar answer. I’ll just add my nuance as a comment to Zach’s answer.
I said a bunch about ontologies in my post on fake frameworks. There I give examples and I define reductionism in terms of comparing ontologies. The upshot is what I read Zach emphasizing here: an ontology is a collection of things you consider “real” together with some rules for how to combine them into a coherent thingie (a map, though it often won’t feel on the inside like a map).
Maybe the purest example type is an axiomatic system. The undefined terms are ontological primitives, and the axioms are the rules for combining them. We usually combine an axiomatic system with a model to create a sense of being in a space. The classic example of this sort is Euclidean geometry.
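For instance, here's a stripped-down incidence-geometry fragment in the spirit of Euclid/Hilbert (a sketch, not a complete axiom set): the undefined terms play the role of ontological primitives, and the axioms say how they combine.

```latex
% Ontological primitives (undefined terms): point, line, and the relation "lies on".
% Axioms (rules for combining them) -- a tiny incidence fragment, not a full system:
\begin{itemize}
  \item[A1.] For any two distinct points $P$ and $Q$, there is exactly one line $\ell$ with $P \in \ell$ and $Q \in \ell$.
  \item[A2.] Every line contains at least two distinct points.
  \item[A3.] There exist three points not all on one line.
\end{itemize}
```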
But in practice most folk use much fuzzier and more informal ontologies, and often switch between seemingly incompatible ones as needed. Your paycheck, the government, cancer, and a sandwich are all “real” in lots of folks’ worldviews, but folks don’t always clearly relate those kinds of “real” to one another, because how they relate doesn’t usually matter.
I think ontologies are closely related to frames. I wonder if frames are just a special kind of ontology, or maybe the term we give for a particular use of ontologies. Mentioning this in case frames feel more intuitive than ontologies do.
(I agree. I think frames and ontologies are closely related; in particular, ontologies are comprehensive while frames just tell you what to focus on, without needing to give an account of everything.)