
Ontology


Ontology is a branch of philosophy concerned with questions such as: how can objects be grouped into categories?

An ontology is an answer to that question: a collection of sets of objects (or, equivalently, a collection of sets of points in thingspace).[1] An agent’s ontology determines the abstractions it makes.

For example, consider Alice and Bob, two normal English-fluent humans. “Chairs”_Alice is in Alice’s ontology; it is (or points to) a set of (possible-)objects (namely what she considers chairs) that she bundles together. “Chairs”_Bob is in Bob’s ontology, and it is a very similar set of objects (what he considers chairs). This overlap makes it easy for Alice to communicate with Bob and predict how he will make sense of the world.

(Easy communication and prediction also require that their ontologies are fairly sparse, rather than full of astronomically many overlapping sets. So if they each saw a few chairs, they would make very similar abstractions, namely to “chairs”_Alice and “chairs”_Bob. A toy illustration of this set-of-sets picture follows below.)
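To make the “collection of sets” picture concrete, here is a minimal illustrative sketch in Python (the objects, concept labels, and the Jaccard overlap measure are assumptions chosen for this example, not anything defined in the posts below). It models each agent’s ontology as a mapping from concept labels to sets of possible objects, and checks how well corresponding concepts line up:

```python
# Toy model: an ontology as a mapping from concept labels to sets of possible objects.
# The objects, labels, and overlap measure are illustrative assumptions only.

def jaccard(a: frozenset, b: frozenset) -> float:
    """Overlap between two concepts, measured as |A & B| / |A | B|."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

alice_ontology = {
    "chairs": frozenset({"office_chair", "dining_chair", "beanbag"}),
    "tables": frozenset({"dining_table"}),
}
bob_ontology = {
    "chairs": frozenset({"office_chair", "dining_chair", "tree_stump"}),
    "tables": frozenset({"dining_table"}),
}

# High overlap on a concept means Alice can use that word and reliably
# predict which objects Bob will take it to cover.
for concept in alice_ontology:
    print(concept, jaccard(alice_ontology[concept], bob_ontology[concept]))
```

For two English-fluent humans, everyday concepts like “chairs” would score near 1; the worry discussed next is an AI whose concepts carve the same objects into sets that overlap poorly with any human concept.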

Why care? Most humans seem to have similar ontologies, but AI systems might have very different ones, which could cause surprising behavior. For example, an imperceptible adversarial perturbation can make an image classifier confidently label a panda as a gibbon. Roughly: if the shared human ontology isn’t natural (i.e., learned by default) and is moreover hard to teach an AI, then that AI won’t think in terms of the same concepts as humans, which might be bad. See Ontology identification problem; see also misgeneralization of concepts or goals.

  1. ^

“Objects” here means possible objects, not just objects that actually exist.

    An ontology can also include an account of other kinds-of-stuff, like properties, relations, events, substances, and states-of-affairs.

[Question] What is ontology?

Adam Zerner · 2 Aug 2023 0:54 UTC
28 points
19 comments · 1 min read · LW link

The Thingness of Things

TsviBT · 1 Jan 2023 22:19 UTC
46 points
35 comments · 10 min read · LW link

Test Cases for Impact Regularisation Methods

DanielFilan · 6 Feb 2019 21:50 UTC
72 points
5 comments · 13 min read · LW link
(danielfilan.com)

Causal Graphs of GPT-2-Small’s Residual Stream

David Udell · 9 Jul 2024 22:06 UTC
53 points
7 comments · 7 min read · LW link

Resolving von Neumann-Morgenstern Inconsistent Preferences

niplav · 22 Oct 2024 11:45 UTC
31 points
5 comments · 58 min read · LW link

Form and Feedback in Phenomenology

Gordon Seidoh Worley · 24 Jan 2018 19:42 UTC
17 points
2 comments · 12 min read · LW link
(mapandterritory.org)

3C’s: A Recipe For Mathing Concepts

3 Jul 2024 1:06 UTC
80 points
5 comments · 7 min read · LW link

Implications of automated ontology identification

18 Feb 2022 3:30 UTC
69 points
27 comments · 23 min read · LW link

A Word to the Wise is Sufficient because the Wise Know So Many Words

lsusr · 4 Apr 2022 1:08 UTC
31 points
2 comments · 1 min read · LW link

Science-informed normativity

Richard_Ngo · 25 May 2022 18:30 UTC
32 points
2 comments · 9 min read · LW link
(thinkingcomplete.blogspot.com)

How toy models of ontology changes can be misleading

Stuart_Armstrong · 21 Oct 2023 21:13 UTC
42 points
0 comments · 2 min read · LW link

Which values are stable under ontology shifts?

Richard_Ngo · 23 Jul 2022 2:40 UTC
74 points
48 comments · 3 min read · LW link
(thinkingcomplete.blogspot.com)

In praise of fake frameworks

Valentine · 11 Jul 2017 2:12 UTC
115 points
15 comments · 7 min read · LW link

General alignment properties

TurnTrout · 8 Aug 2022 23:40 UTC
50 points
2 comments · 1 min read · LW link

How are you dealing with ontology identification?

Erik Jenner · 4 Oct 2022 23:28 UTC
34 points
10 comments · 3 min read · LW link

No Causation without Reification

Gordon Seidoh Worley · 23 Oct 2020 20:28 UTC
21 points
38 comments · 3 min read · LW link

Simulacra are Things

janus · 8 Jan 2023 23:03 UTC
63 points
7 comments · 2 min read · LW link

[Question] Formal definition of Ontology Mismatch?

NathanBarnard · 18 Jan 2023 5:52 UTC
6 points
0 comments · 1 min read · LW link

Review: “The Case Against Reality”

David Gross · 29 Oct 2024 13:13 UTC
19 points
9 comments · 5 min read · LW link

Humans provide an untapped wealth of evidence about alignment

14 Jul 2022 2:31 UTC
210 points
94 comments · 9 min read · LW link · 1 review

“If” is in the map

Chris_Leong · 13 Jan 2021 3:09 UTC
8 points
8 comments · 1 min read · LW link

Deconfusing “ontology” in AI alignment

Dylan Bowman · 8 Nov 2023 20:03 UTC
28 points
3 comments · 7 min read · LW link

[Question] Chicken or Egg

sovos · 10 May 2023 17:58 UTC
1 point
0 comments · 1 min read · LW link

Kenshō

Valentine · 20 Jan 2018 0:12 UTC
73 points
296 comments · 6 min read · LW link

Distributed Strategic Epistemology

StrivingForLegibility · 28 Dec 2023 22:12 UTC
11 points
0 comments · 3 min read · LW link

Fittingness: Rational success in concept formation

Polytopos · 10 Jan 2021 15:58 UTC
6 points
9 comments · 6 min read · LW link

Review and Comparison: Onto-Cartography & Promise Theory

Josephine · 9 May 2021 0:42 UTC
13 points
0 comments · 5 min read · LW link

A sketch of a value-learning sovereign

jessicata · 20 Dec 2015 21:32 UTC
12 points
15 comments · 13 min read · LW link

2+2: Ontological Framework

Lyrialtus · 1 Feb 2022 1:07 UTC
−15 points
2 comments · 12 min read · LW link

The Fourth Arena: What’s Up in the world these days? We’re moving to a new, a new what?

Bill Benzon · 4 Jun 2022 19:07 UTC
2 points
0 comments · 3 min read · LW link

Ontologies are Operating Systems

lifelonglearner · 18 Feb 2017 5:00 UTC
6 points
36 comments · 4 min read · LW link

Good ontologies induce commutative diagrams

Erik Jenner · 9 Oct 2022 0:06 UTC
49 points
5 comments · 14 min read · LW link

Three levels of exploration and intelligence

Q Home · 16 Mar 2023 10:55 UTC
9 points
3 comments · 21 min read · LW link

Mapping ChatGPT’s ontological landscape, gradients and choices [interpretability]

Bill Benzon · 15 Oct 2023 20:12 UTC
1 point
0 comments · 18 min read · LW link

An Ontology for Strategic Epistemology

StrivingForLegibility · 28 Dec 2023 22:11 UTC
9 points
0 comments · 5 min read · LW link

Building Trust in Strategic Settings

StrivingForLegibility · 28 Dec 2023 22:12 UTC
24 points
0 comments · 7 min read · LW link

Making Connections with ChatGPT: The Macksey Game

Bill Benzon · 5 Mar 2024 18:15 UTC
5 points
2 comments · 11 min read · LW link

What is Ontology?

martinkunev · 12 Feb 2024 23:01 UTC
4 points
0 comments · 4 min read · LW link

A Nail in the Coffin of Exceptionalism

Yeshua God · 14 Mar 2024 22:41 UTC
−17 points
0 comments · 3 min read · LW link

Clarifying the free energy principle (with quotes)

Ryo · 29 Oct 2023 16:03 UTC
8 points
0 comments · 9 min read · LW link

Wittgenstein and the Private Language Argument

TMFOW · 24 Mar 2024 20:06 UTC
4 points
0 comments · 14 min read · LW link
(tmfow.substack.com)

Towards a New Ontology of Intelligence

Tara · 4 Jun 2024 8:19 UTC
1 point
0 comments · 3 min read · LW link

What program structures enable efficient induction?

Daniel C · 5 Sep 2024 10:12 UTC
21 points
5 comments · 3 min read · LW link

Jonothan Gorard: The territory is isomorphic to an equivalence class of its maps

Daniel C · 7 Sep 2024 10:04 UTC
17 points
18 comments · 2 min read · LW link
(x.com)

[Question] Can subjunctive dependence emerge from a simplicity prior?

Daniel C · 16 Sep 2024 12:39 UTC
6 points
0 comments · 1 min read · LW link

Clarifying Alignment Fundamentals Through the Lens of Ontology

eternal/ephemera · 7 Oct 2024 20:57 UTC
12 points
4 comments · 24 min read · LW link

The Truth About False

Thoth Hermes · 15 Apr 2023 1:01 UTC
−21 points
4 comments · 17 min read · LW link
(thothhermes.substack.com)

GPT-2 XL’s capacity for coherence and ontology clustering

MiguelDev · 30 Oct 2023 9:24 UTC
6 points
2 comments · 41 min read · LW link

ChatGPT’s Ontological Landscape

Bill Benzon · 1 Nov 2023 15:12 UTC
7 points
0 comments · 4 min read · LW link

[Linkpost] Concept Alignment as a Prerequisite for Value Alignment

Bogdan Ionut Cirstea · 4 Nov 2023 17:34 UTC
27 points
0 comments · 1 min read · LW link
(arxiv.org)