In Defense of Attempting Hard Things, and my story of the Leverage ecosystem

Cathleen · Dec 17, 2021, 11:08 PM
115 points · 43 comments · 1 min read · 2 reviews
(cathleensdiscoveries.com)

[Question] Getting diagnosed for ADHD if I don’t plan on taking meds?

vroomerify · Dec 17, 2021, 7:27 PM
6 points · 6 comments · 1 min read

Venture Granters, The VCs of public goods, incentivizing good dreams

mako yass · Dec 17, 2021, 8:57 AM
12 points · 9 comments · 12 min read

Understand the exponential function: R0 of the COVID

Yandong Zhang · Dec 17, 2021, 6:44 AM
−6 points · 17 comments · 1 min read

Some motivations to gradient hack

peterbarnett · Dec 17, 2021, 3:06 AM
8 points · 0 comments · 6 min read

Blog Respectably

lsusr · Dec 17, 2021, 1:23 AM
14 points · 4 comments · 1 min read

The Case for Radical Optimism about Interpretability

Quintin Pope · Dec 16, 2021, 11:38 PM
66 points · 16 comments · 8 min read · 1 review

-

Alice K · Dec 16, 2021, 11:03 PM
2 points · 2 comments · 1 min read

Evidence Sets: Towards Inductive-Biases based Analysis of Prosaic AGI

bayesian_kitten · Dec 16, 2021, 10:41 PM
22 points · 10 comments · 21 min read

Housing Markets, Satisficers, and One-Track Goodhart

J Bostock · Dec 16, 2021, 9:38 PM
2 points · 2 comments · 2 min read

Covid 12/16: On Your Marks

Zvi · Dec 16, 2021, 9:00 PM
53 points · 36 comments · 9 min read
(thezvi.wordpress.com)

Reviews of “Is power-seeking AI an existential risk?”

Joe Carlsmith · Dec 16, 2021, 8:48 PM
80 points · 20 comments · 1 min read

The “Other” Option

jsteinhardt · Dec 16, 2021, 8:20 PM
24 points · 1 comment · 7 min read
(bounded-regret.ghost.io)

What Caplan’s “Missing Mood” Heuristic Is Really For

DirectedEvolution · Dec 16, 2021, 7:47 PM
32 points · 7 comments · 4 min read

Subway Slides

jefftk · Dec 16, 2021, 7:30 PM
11 points · 2 comments · 1 min read
(www.jefftk.com)

Virulence Management

harsimony · Dec 16, 2021, 7:25 PM
4 points · 0 comments · 3 min read
(harsimony.wordpress.com)

Omicron Post #7

Zvi · Dec 16, 2021, 5:30 PM
155 points · 41 comments · 12 min read
(thezvi.wordpress.com)

[Question] Where can one learn deep intuitions about information theory?

Valentine · Dec 16, 2021, 3:47 PM
72 points · 27 comments · 2 min read

Elicitation for Modeling Transformative AI Risks

Davidmanheim · Dec 16, 2021, 3:24 PM
30 points · 2 comments · 9 min read

An Open Letter to the Monastic Academy and community members

HS2021 · Dec 16, 2021, 9:04 AM
44 points · 46 comments · 1 min read

Five Missing Moods

mike_hawke · Dec 16, 2021, 1:25 AM
14 points · 3 comments · 3 min read

Motivations, Natural Selection, and Curriculum Engineering

Oliver Sourbut · Dec 16, 2021, 1:07 AM
16 points · 0 comments · 42 min read

Universality and the “Filter”

maggiehayes · Dec 16, 2021, 12:47 AM
10 points · 2 comments · 11 min read

More power to you

jasoncrawford · Dec 15, 2021, 11:50 PM
16 points · 14 comments · 1 min read
(rootsofprogress.org)

My Overview of the AI Alignment Landscape: A Bird’s Eye View

Neel Nanda · Dec 15, 2021, 11:44 PM
127 points · 9 comments · 15 min read

SmartPoop 1.0: An AI Safety Science-Fiction

Lê Nguyên Hoang · Dec 15, 2021, 10:28 PM
7 points · 1 comment · 1 min read

Bay Area Rationalist Field Day

Raj Thimmiah · Dec 15, 2021, 7:57 PM
7 points · 1 comment · 1 min read

Framing approaches to alignment and the hard problem of AI cognition

ryan_greenblatt · Dec 15, 2021, 7:06 PM
16 points · 15 comments · 27 min read

South Bay ACX/LW Pre-Holiday Get-Together

IS · Dec 15, 2021, 4:58 PM
5 points · 0 comments · 1 min read

Leverage

lsusr · Dec 15, 2021, 5:20 AM
23 points · 2 comments · 1 min read

We’ll Always Have Crazy

Duncan Sabien (Deactivated) · Dec 15, 2021, 2:55 AM
36 points · 22 comments · 13 min read

2020 Review: The Discussion Phase

Vaniver · Dec 15, 2021, 1:12 AM
55 points · 14 comments · 2 min read

The Natural Abstraction Hypothesis: Implications and Evidence

CallumMcDougall · Dec 14, 2021, 11:14 PM
39 points · 9 comments · 19 min read

Robin Hanson’s “Humans are Early”

Raemon · Dec 14, 2021, 10:07 PM
11 points · 0 comments · 2 min read
(www.overcomingbias.com)

Ngo’s view on alignment difficulty

Dec 14, 2021, 9:34 PM
63 points · 7 comments · 17 min read

A proposed system for ideas jumpstart

Valentin2026 · Dec 14, 2021, 9:01 PM
4 points · 2 comments · 3 min read

Should we rely on the speed prior for safety?

Marc Carauleanu · Dec 14, 2021, 8:45 PM
14 points · 5 comments · 5 min read

ARC’s first technical report: Eliciting Latent Knowledge

Dec 14, 2021, 8:09 PM
228 points · 90 comments · 1 min read · 3 reviews
(docs.google.com)

ARC is hiring!

Dec 14, 2021, 8:09 PM
64 points · 2 comments · 1 min read

Interlude: Agents as Automobiles

Daniel Kokotajlo · Dec 14, 2021, 6:49 PM
26 points · 6 comments · 5 min read

Zvi’s Thoughts on the Survival and Flourishing Fund (SFF)

Zvi · Dec 14, 2021, 2:30 PM
193 points · 65 comments · 64 min read · 1 review
(thezvi.wordpress.com)

Consequentialism & corrigibility

Steven Byrnes · Dec 14, 2021, 1:23 PM
70 points · 29 comments · 7 min read

Mystery Hunt 2022

Scott Garrabrant · Dec 13, 2021, 9:57 PM
30 points · 5 comments · 1 min read

Enabling More Feedback for AI Safety Researchers

frances_lorenz · Dec 13, 2021, 8:10 PM
17 points · 0 comments · 3 min read

Language Model Alignment Research Internships

Ethan Perez · Dec 13, 2021, 7:53 PM
74 points · 1 comment · 1 min read

Omicron Post #6

Zvi · Dec 13, 2021, 6:00 PM
89 points · 30 comments · 8 min read
(thezvi.wordpress.com)

Analysis of Bird Box (2018)

TekhneMakre · Dec 13, 2021, 5:30 PM
11 points · 3 comments · 5 min read

Solving Interpretability Week

Logan Riggs · Dec 13, 2021, 5:09 PM
11 points · 5 comments · 1 min read

Understanding and controlling auto-induced distributional shift

L Rudolf L · Dec 13, 2021, 2:59 PM
33 points · 4 comments · 16 min read

A fate worse than death?

RomanS · Dec 13, 2021, 11:05 AM
−25 points · 26 comments · 2 min read