Gears-Level

Last edit: Feb 8, 2025, 12:32 AM by lesswrong-internal

A gears-level model is ‘well-constrained’ in the sense that there are strong connections among the things you observe—it would be hard to imagine one of the variables being different while all of the others remained the same.

Related Tags: Anticipated Experiences, Double-Crux, Empiricism, Falsifiability, Map and Territory


The term gears-level was introduced on LW in the post “Gears in Understanding”:

This property is how deterministically interconnected the variables of the model are. There are a few tests I know of to see to what extent a model has this property, though I don’t know if this list is exhaustive and would be a little surprised if it were:
1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?
3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?

An example of a gears-level model from Gears in Understanding is (surprise) a box of gears. If you can see a series of interlocked gears, alternately turning clockwise, then counterclockwise, and so on, then you can anticipate the direction of any given gear, even if you cannot see it. It would be very difficult to imagine all of the gears turning as they do but with only one of them reversing direction whilst remaining interlocked. And finally, you would be able to rederive the direction of any given gear if you forgot it.
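The gear-box example is deterministic enough to capture in a few lines of code. As a toy sketch (the function name and direction labels are illustrative, not from the post), the direction of any gear in an interlocked chain can be rederived from one known gear, because adjacent gears must turn in opposite directions:

```python
def gear_direction(known_index, known_direction, target_index):
    """Rederive a gear's direction in a chain of interlocked gears.

    Adjacent gears turn in opposite directions, so only the parity of
    the distance between the two gears matters.
    """
    if (target_index - known_index) % 2 == 0:
        return known_direction
    return "counterclockwise" if known_direction == "clockwise" else "clockwise"

# If gear 0 turns clockwise, gear 3 must turn counterclockwise:
print(gear_direction(0, "clockwise", 3))
```

This mirrors the three tests above: the model pays rent (it predicts every gear's direction), it is incoherent to flip one gear while keeping the rest, and any forgotten value can be rederived from the others.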


Note that the author of Gears in Understanding, Valentine, was careful to point out that these tests do not fully define the property ‘gears-level’, and that “Gears-ness is not the same as goodness”—there are other valuable qualities in a model, and many things cannot practically be modelled in this fashion. If you intend to use the term, it is highly recommended that you read the post beforehand, as the concept is not easily defined.

Gears in Understanding
Valentine, May 12, 2017 (195 points, 38 comments, 10 min read)

Gears-Level Models are Capital Investments
johnswentworth, Nov 22, 2019 (175 points, 29 comments, 7 min read, 1 review)

Gears vs Behavior
johnswentworth, Sep 19, 2019 (117 points, 14 comments, 7 min read, 1 review)

The map has gears. They don’t always turn.
abramdemski, Feb 22, 2018 (24 points, 0 comments, 1 min read)

Gears Level & Policy Level
abramdemski, Nov 24, 2017 (63 points, 8 comments, 7 min read)

Toward a New Technical Explanation of Technical Explanation
abramdemski, Feb 16, 2018 (92 points, 36 comments, 18 min read, 1 review)

The Lens That Sees Its Flaws
Eliezer Yudkowsky, Sep 23, 2007 (373 points, 48 comments, 3 min read)

When Gears Go Wrong
Matt Goldenberg, Aug 2, 2020 (28 points, 6 comments, 6 min read)

Paper-Reading for Gears
johnswentworth, Dec 4, 2019 (165 points, 6 comments, 4 min read, 1 review)

In praise of fake frameworks
Valentine, Jul 11, 2017 (117 points, 15 comments, 7 min read)

Technology Changes Constraints
johnswentworth, Jan 25, 2020 (116 points, 6 comments, 4 min read)

Science in a High-Dimensional World
johnswentworth, Jan 8, 2021 (295 points, 53 comments, 7 min read, 1 review)

Everyday Lessons from High-Dimensional Optimization
johnswentworth, Jun 6, 2020 (165 points, 44 comments, 6 min read)

Evolution of Modularity
johnswentworth, Nov 14, 2019 (185 points, 12 comments, 2 min read, 1 review)

Book Review: Design Principles of Biological Circuits
johnswentworth, Nov 5, 2019 (222 points, 24 comments, 12 min read, 1 review)

Local Validity as a Key to Sanity and Civilization
Eliezer Yudkowsky, Apr 7, 2018 (221 points, 68 comments, 13 min read, 5 reviews)

A Crisper Explanation of Simulacrum Levels
Thane Ruthenis, Dec 23, 2023 (92 points, 13 comments, 13 min read)

[Question] What are the best resources for building gears-level models of how governments actually work?
adamShimi, Aug 19, 2024 (19 points, 6 comments, 1 min read)

[Question] Will LLM agents become the first takeover-capable AGIs?
Seth Herd, Mar 2, 2025 (36 points, 10 comments, 1 min read)

Constraints & Slackness as a Worldview Generator
johnswentworth, Jan 25, 2020 (55 points, 4 comments, 4 min read)

Material Goods as an Abundant Resource
johnswentworth, Jan 25, 2020 (81 points, 10 comments, 5 min read)

Wrinkles
johnswentworth, Nov 19, 2019 (81 points, 14 comments, 4 min read)

Theory and Data as Constraints
johnswentworth, Feb 21, 2020 (65 points, 7 comments, 4 min read)

Homeostasis and “Root Causes” in Aging
johnswentworth, Jan 5, 2020 (86 points, 25 comments, 3 min read)

The Lens, Progerias and Polycausality
johnswentworth, Mar 8, 2020 (71 points, 8 comments, 3 min read)

Adaptive Immune System Aging
johnswentworth, Mar 13, 2020 (75 points, 9 comments, 3 min read)

Abstraction, Evolution and Gears
johnswentworth, Jun 24, 2020 (29 points, 11 comments, 4 min read)

A Case for the Least Forgiving Take On Alignment
Thane Ruthenis, May 2, 2023 (100 points, 85 comments, 22 min read)

Explanation vs Rationalization
abramdemski, Feb 22, 2018 (16 points, 11 comments, 4 min read)

Timeless Modesty?
abramdemski, Nov 24, 2017 (17 points, 2 comments, 3 min read)

[Question] Theory of Causal Models with Dynamic Structure?
johnswentworth, Jan 23, 2020 (24 points, 6 comments, 1 min read)

[Question] What is the right phrase for “theoretical evidence”?
Adam Zerner, Nov 1, 2020 (23 points, 41 comments, 2 min read)

Inside Views, Impostor Syndrome, and the Great LARP
johnswentworth, Sep 25, 2023 (335 points, 53 comments, 5 min read)

Debugging the student
Adam Zerner, Dec 16, 2020 (46 points, 7 comments, 4 min read)

A Good Explanation of Differential Gears
Johannes C. Mayer, Oct 19, 2023 (48 points, 4 comments, 1 min read) (youtu.be)

Believing vs understanding
Adam Zerner, Jul 24, 2021 (15 points, 2 comments, 6 min read)

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
Zvi, Nov 15, 2021 (197 points, 49 comments, 16 min read) (thezvi.wordpress.com)

rough draft on what happens in the brain when you have an insight
Emrik, May 21, 2024 (11 points, 2 comments, 1 min read)

The Gears of Argmax
StrivingForLegibility, Jan 4, 2024 (11 points, 0 comments, 3 min read)

Reality has a surprising amount of detail
jsalvatier, May 13, 2017 (90 points, 31 comments, 1 min read) (johnsalvatier.org)

Admiring the Guts of Things.
Melkor, Jun 11, 2018 (22 points, 1 comment, 3 min read)

interpreting GPT: the logit lens
nostalgebraist, Aug 31, 2020 (230 points, 38 comments, 10 min read)

Towards Gears-Level Understanding of Agency
Thane Ruthenis, Jun 16, 2022 (25 points, 4 comments, 18 min read)

A Sketch of Good Communication
Ben Pace, Mar 31, 2018 (208 points, 36 comments, 3 min read, 1 review)

Rethinking Batch Normalization
Matthew Barnett, Aug 2, 2019 (20 points, 5 comments, 8 min read)

Value Formation: An Overarching Model
Thane Ruthenis, Nov 15, 2022 (34 points, 20 comments, 34 min read)

Current themes in mechanistic interpretability research
Nov 16, 2022 (89 points, 2 comments, 12 min read)

Legibility Makes Logical Line-Of-Sight Transitive
StrivingForLegibility, Jan 19, 2024 (13 points, 0 comments, 5 min read)

Anatomy of a Gear
johnswentworth, Nov 16, 2020 (79 points, 12 comments, 7 min read)

[Question] By which mechanism does immunity favor new Covid variants?
anorangicc, Apr 4, 2021 (2 points, 5 comments, 1 min read)

Beware of black boxes in AI alignment research
cousin_it, Jan 18, 2018 (39 points, 10 comments, 1 min read)

What is Life in an Immoral Maze?
Zvi, Jan 5, 2020 (72 points, 56 comments, 5 min read) (thezvi.wordpress.com)

Don’t want Goodhart? — Specify the variables more
YanLyutnev, Nov 21, 2024 (2 points, 2 comments, 5 min read)

The Futility of Emergence
Eliezer Yudkowsky, Aug 26, 2007 (120 points, 142 comments, 3 min read)

Decision Transformer Interpretability
Feb 6, 2023 (84 points, 13 comments, 24 min read)

Don’t want Goodhart? — Specify the damn variables
Yan Lyutnev, Nov 21, 2024 (−3 points, 2 comments, 5 min read)

Generalizing Experimental Results by Leveraging Knowledge of Mechanisms
Carlos_Cinelli, Dec 11, 2019 (50 points, 5 comments, 1 min read)

Artificial Addition
Eliezer Yudkowsky, Nov 20, 2007 (90 points, 128 comments, 6 min read)

Dreams of AI Design
Eliezer Yudkowsky, Aug 27, 2008 (41 points, 61 comments, 5 min read)

Why Artists Study Anatomy
Sisi Cheng, May 18, 2020 (98 points, 10 comments, 2 min read, 1 review)