adamShimi

Karma: 6,656

Epistemologist specializing in the difficulties of alignment and in solving AI X-risks. Currently at Conjecture.

Blogging at For Methods.

Twitter.

Redwood's Technique-Focused Epistemic Strategy

adamShimi · Dec 12, 2021, 4:36 PM
48 points
1 comment · 7 min read · LW link

Interpreting Yudkowsky on Deep vs Shallow Knowledge

adamShimi · Dec 5, 2021, 5:32 PM
100 points
32 comments · 24 min read · LW link

Applications for AI Safety Camp 2022 Now Open!

adamShimi · Nov 17, 2021, 9:42 PM
47 points
3 comments · 1 min read · LW link

Epistemic Strategies of Safety-Capabilities Tradeoffs

adamShimi · Oct 22, 2021, 8:22 AM
5 points
0 comments · 6 min read · LW link

Epistemic Strategies of Selection Theorems

adamShimi · Oct 18, 2021, 8:57 AM
33 points
1 comment · 12 min read · LW link

On Solving Problems Before They Appear: The Weird Epistemologies of Alignment

adamShimi · Oct 11, 2021, 8:20 AM
110 points
10 comments · 15 min read · LW link

Alignment Research = Conceptual Alignment Research + Applied Alignment Research

adamShimi · Aug 30, 2021, 9:13 PM
37 points
14 comments · 5 min read · LW link

[Question] What are good alignment conference papers?

adamShimi · Aug 28, 2021, 1:35 PM
12 points
2 comments · 1 min read · LW link

Approaches to gradient hacking

adamShimi · Aug 14, 2021, 3:16 PM
16 points
8 comments · 8 min read · LW link

A review of "Agents and Devices"

adamShimi · Aug 13, 2021, 8:42 AM
21 points
0 comments · 4 min read · LW link

Power-seeking for successive choices

adamShimi · Aug 12, 2021, 8:37 PM
11 points
9 comments · 4 min read · LW link

Goal-Directedness and Behavior, Redux

adamShimi · Aug 9, 2021, 2:26 PM
16 points
4 comments · 2 min read · LW link

Applications for Deconfusing Goal-Directedness

adamShimi · Aug 8, 2021, 1:05 PM
38 points
3 comments · 5 min read · LW link · 1 review

Traps of Formalization in Deconfusion

adamShimi · Aug 5, 2021, 10:40 PM
28 points
7 comments · 6 min read · LW link

LCDT, A Myopic Decision Theory

Aug 3, 2021, 10:41 PM
57 points
50 comments · 15 min read · LW link

Alex Turner's Research, Comprehensive Information Gathering

adamShimi · Jun 23, 2021, 9:44 AM
15 points
3 comments · 3 min read · LW link

Looking Deeper at Deconfusion

adamShimi · Jun 13, 2021, 9:29 PM
62 points
13 comments · 15 min read · LW link

Review of "Learning Normativity: A Research Agenda"

Jun 6, 2021, 1:33 PM
37 points
0 comments · 6 min read · LW link

[Event] Weekly Alignment Research Coffee Time

adamShimi · May 29, 2021, 1:26 PM
12 points
5 comments · 1 min read · LW link

[Event] Weekly Alignment Research Coffee Time (05/24)

adamShimi · May 21, 2021, 5:45 PM
9 points
0 comments · 1 min read · LW link