
Research Taste

Last edit: Jan 14, 2025, 6:54 PM by patrickdward

Research Taste is the set of intuitions that guides researchers toward productive lines of inquiry.

Tips for Empirical Alignment Research

Ethan Perez, Feb 29, 2024, 6:04 AM
162 points
4 comments, 23 min read, LW link

Touch reality as soon as possible (when doing machine learning research)

LawrenceC, Jan 3, 2023, 7:11 PM
117 points
9 comments, 8 min read, LW link, 1 review

Thomas Kwa’s research journal

Nov 23, 2023, 5:11 AM
79 points
1 comment, 6 min read, LW link

How I select alignment research projects

Apr 10, 2024, 4:33 AM
36 points
4 comments, 24 min read, LW link

How to do conceptual research: Case study interview with Caspar Oesterheld

Chi Nguyen, May 14, 2024, 3:09 PM
48 points
5 comments, 9 min read, LW link

Some (problematic) aesthetics of what constitutes good work in academia

Steven Byrnes, Mar 11, 2024, 5:47 PM
147 points
12 comments, 12 min read, LW link

Difficulty classes for alignment properties

Jozdien, Feb 20, 2024, 9:08 AM
34 points
5 comments, 2 min read, LW link

Nuclear Espionage and AI Governance

Guive, Oct 4, 2021, 11:04 PM
32 points
5 comments, 24 min read, LW link

My research methodology

paulfchristiano, Mar 22, 2021, 9:20 PM
159 points
38 comments, 16 min read, LW link, 1 review
(ai-alignment.com)

11 heuristics for choosing (alignment) research projects

Jan 27, 2023, 12:36 AM
50 points
5 comments, 1 min read, LW link

Advice I found helpful in 2022

Orpheus16, Jan 28, 2023, 7:48 PM
36 points
5 comments, 2 min read, LW link

Which ML skills are useful for finding a new AIS research agenda?

Yonatan Cale, Feb 9, 2023, 1:09 PM
16 points
1 comment, 1 min read, LW link

Qualities that alignment mentors value in junior researchers

Orpheus16, Feb 14, 2023, 11:27 PM
88 points
14 comments, 3 min read, LW link

A model of research skill

L Rudolf L, Jan 8, 2024, 12:13 AM
60 points
6 comments, 12 min read, LW link
(www.strataoftheworld.com)

[Question] Build knowledge base first, or backchain?

Nicholas / Heather Kross, Jul 17, 2023, 3:44 AM
11 points
5 comments, 1 min read, LW link

How to do theoretical research, a personal perspective

Mark Xu, Aug 19, 2022, 7:41 PM
91 points
6 comments, 15 min read, LW link

How to become an AI safety researcher

peterbarnett, Apr 15, 2022, 11:41 AM
25 points
0 comments, 14 min read, LW link

How I Formed My Own Views About AI Safety

Neel Nanda, Feb 27, 2022, 6:50 PM
66 points
6 comments, 13 min read, LW link
(www.neelnanda.io)

Tips On Empirical Research Slides

Jan 8, 2025, 5:06 AM
90 points
4 comments, 6 min read, LW link

The Road to Evil Is Paved with Good Objectives: Framework to Classify and Fix Misalignments

Shivam, Jan 30, 2025, 2:44 AM
1 point
0 comments, 11 min read, LW link

The Alignment Mapping Program: Forging Independent Thinkers in AI Safety—A Pilot Retrospective

Jan 10, 2025, 4:22 PM
21 points
0 comments, 4 min read, LW link

Research Principles for 6 Months of AI Alignment Studies

Shoshannah Tekofsky, Dec 2, 2022, 10:55 PM
23 points
3 comments, 6 min read, LW link

ML Safety Research Advice—GabeM

Gabe M, Jul 23, 2024, 1:45 AM
29 points
2 comments, 14 min read, LW link
(open.substack.com)

Lessons After a Couple Months of Trying to Do ML Research

KevinRoWang, Mar 22, 2022, 11:45 PM
70 points
8 comments, 6 min read, LW link

Questions about Value Lock-in, Paternalism, and Empowerment

Sam F. Brown, Nov 16, 2022, 3:33 PM
13 points
2 comments, 12 min read, LW link
(sambrown.eu)