Kelsey Piper’s recent interview of SBF

agucova · Nov 16, 2022, 8:30 PM
51 points
29 comments · LW link

The Echo Principle

Jonathan Moregård · Nov 16, 2022, 8:09 PM
4 points
0 comments · 3 min read · LW link
(honestliving.substack.com)

[Question] Is there some reason LLMs haven’t seen broader use?

tailcalled · Nov 16, 2022, 8:04 PM
25 points
27 comments · 1 min read · LW link

When should we be surprised that an invention took “so long”?

jasoncrawford · Nov 16, 2022, 8:04 PM
32 points
11 comments · 4 min read · LW link
(rootsofprogress.org)

Questions about Value Lock-in, Paternalism, and Empowerment

Sam F. Brown · Nov 16, 2022, 3:33 PM
13 points
2 comments · 12 min read · LW link
(sambrown.eu)

If Professional Investors Missed This...

jefftk · Nov 16, 2022, 3:00 PM
37 points
18 comments · 3 min read · LW link
(www.jefftk.com)

Disagreement with bio anchors that lead to shorter timelines

Marius Hobbhahn · Nov 16, 2022, 2:40 PM
75 points
17 comments · 7 min read · LW link · 1 review

Current themes in mechanistic interpretability research

Nov 16, 2022, 2:14 PM
89 points
2 comments · 12 min read · LW link

Unpacking “Shard Theory” as Hunch, Question, Theory, and Insight

Jacy Reese Anthis · Nov 16, 2022, 1:54 PM
31 points
9 comments · 2 min read · LW link

Miracles and why not to believe them

mruwnik · Nov 16, 2022, 12:07 PM
4 points
0 comments · 2 min read · LW link

[Question] How do people do remote research collaborations effectively?

Krieger · Nov 16, 2022, 11:51 AM
8 points
0 comments · 1 min read · LW link

Method of statements: an alternative to taboo

Q Home · Nov 16, 2022, 10:57 AM
7 points
0 comments · 41 min read · LW link

The two conceptions of Active Inference: an intelligence architecture and a theory of agency

Roman Leventov · Nov 16, 2022, 9:30 AM
17 points
0 comments · 4 min read · LW link

Developer experience for the motivation

Adam Zerner · Nov 16, 2022, 7:12 AM
49 points
7 comments · 4 min read · LW link

Progress links and tweets, 2022-11-15

jasoncrawford · Nov 16, 2022, 3:21 AM
9 points
0 comments · 2 min read · LW link
(rootsofprogress.org)

EA & LW Forums Weekly Summary (7th Nov − 13th Nov 22′)

Zoe Williams · Nov 16, 2022, 3:04 AM
19 points
0 comments · LW link

The FTX Saga—Simplified

Annapurna · Nov 16, 2022, 2:42 AM
44 points
10 comments · 7 min read · LW link
(jorgevelez.substack.com)

Utilitarianism and the idea of a “rational agent” are fundamentally inconsistent with reality

banev · Nov 16, 2022, 12:19 AM
−4 points
1 comment · 1 min read · LW link

[Question] Is the speed of training large models going to increase significantly in the near future due to Cerebras Andromeda?

Amal · Nov 15, 2022, 10:50 PM
13 points
11 comments · 1 min read · LW link

[Question] What is our current best infohazard policy for AGI (safety) research?

Roman Leventov · Nov 15, 2022, 10:33 PM
12 points
2 comments · 1 min read · LW link

ACX/SSC Meetup 1 pm Sunday Nov 20

svfritz · Nov 15, 2022, 8:39 PM
2 points
0 comments · 1 min read · LW link

SBF x LoL

Nicholas / Heather Kross · Nov 15, 2022, 8:24 PM
17 points
6 comments · LW link

Some research ideas in forecasting

Jsevillamol · Nov 15, 2022, 7:47 PM
35 points
2 comments · LW link

Strategy of Inner Conflict

Jonathan Moregård · Nov 15, 2022, 7:38 PM
9 points
4 comments · 6 min read · LW link
(honestliving.substack.com)

The limited upside of interpretability

Peter S. Park · Nov 15, 2022, 6:46 PM
13 points
11 comments · LW link

Why bet Kelly?

AlexMennen · Nov 15, 2022, 6:12 PM
32 points
14 comments · 5 min read · LW link

Entropy Scaling And Intrinsic Memory

Nov 15, 2022, 6:11 PM
20 points
5 comments · 5 min read · LW link

[Question] Will nanotech/biotech be what leads to AI doom?

tailcalled · Nov 15, 2022, 5:38 PM
4 points
9 comments · 2 min read · LW link

Value Formation: An Overarching Model

Thane Ruthenis · Nov 15, 2022, 5:16 PM
34 points
20 comments · 34 min read · LW link

Internal communication framework

Nov 15, 2022, 12:41 PM
38 points
13 comments · 12 min read · LW link

Better Mastodon Aliases

jefftk · Nov 15, 2022, 12:10 PM
14 points
3 comments · 1 min read · LW link
(www.jefftk.com)

The economy as an analogy for advanced AI systems

Nov 15, 2022, 11:16 AM
28 points
0 comments · 5 min read · LW link

We need better prediction markets

eigen · Nov 15, 2022, 4:54 AM
9 points
8 comments · 1 min read · LW link

Preventing, reversing, and addressing data leakage: some thoughts

VipulNaik · Nov 15, 2022, 2:09 AM
14 points
4 comments · 25 min read · LW link

Winners of the AI Safety Nudge Competition

Marc Carauleanu · Nov 15, 2022, 1:06 AM
4 points
0 comments · LW link

Lying to Save Humanity

cebsuvx · Nov 14, 2022, 11:04 PM
−1 points
4 comments · 1 min read · LW link

Moral contagion heuristic

Mvolz · Nov 14, 2022, 9:17 PM
14 points
3 comments · 2 min read · LW link

Will we run out of ML data? Evidence from projecting dataset size trends

Pablo Villalobos · Nov 14, 2022, 4:42 PM
75 points
12 comments · 2 min read · LW link
(epochai.org)

I (with the help of a few more people) am planning to create an introduction to AI Safety that a smart teenager can understand. What am I missing?

Tapatakt · Nov 14, 2022, 4:12 PM
3 points
5 comments · 1 min read · LW link

Two New Newcomb Variants

eva_ · Nov 14, 2022, 2:01 PM
26 points
24 comments · 3 min read · LW link

Improving Emergency Vehicle Utilization

jefftk · Nov 14, 2022, 2:00 PM
15 points
10 comments · 1 min read · LW link
(www.jefftk.com)

X-risk Mitigation Does Actually Require Longtermism

DragonGod · Nov 14, 2022, 12:54 PM
6 points
1 comment · LW link

[Question] Why don’t we have self driving cars yet?

Linda Linsefors · Nov 14, 2022, 12:19 PM
22 points
16 comments · 1 min read · LW link

Eigenvalues for Distance from The Buddhist Precepts And The Ten Commandments

benjamin.j.campbell · Nov 14, 2022, 5:50 AM
−3 points
2 comments · 1 min read · LW link

AI Safety Microgrant Round

Chris_Leong · Nov 14, 2022, 4:25 AM
22 points
1 comment · LW link

Estimating the probability that FTX Future Fund grant money gets clawed back

spencerg · Nov 14, 2022, 3:33 AM
28 points
6 comments · LW link

Rational overconfidence in the tens of billions: recent example

banev · Nov 13, 2022, 10:48 PM
−20 points
3 comments · 2 min read · LW link

In Defence of Temporal Discounting in Longtermist Ethics

DragonGod · Nov 13, 2022, 9:54 PM
25 points
4 comments · LW link

Announcing Nonlinear Emergency Funding

KatWoods · Nov 13, 2022, 7:02 PM
54 points
0 comments · LW link

The Alignment Community Is Culturally Broken

sudo · Nov 13, 2022, 6:53 PM
136 points
68 comments · 2 min read · LW link