Why I take short timelines seriously

NicholasKees · 28 Jan 2024 22:27 UTC
121 points
29 comments · 4 min read · LW link

Win Friends and Influence People Ch. 2: The Bombshell

gull · 28 Jan 2024 21:40 UTC
38 points
13 comments · 17 min read · LW link
(www.google.com)

Riga ACX February 2024 Meetup: 2023 in Review

Anastasia · 28 Jan 2024 21:36 UTC
4 points
0 comments · 1 min read · LW link

San Francisco ACX Meetup “First Saturday”

28 Jan 2024 18:39 UTC
8 points
1 comment · 1 min read · LW link

Things You’re Allowed to Do: At the Dentist

rbinnn · 28 Jan 2024 18:39 UTC
38 points
16 comments · 1 min read · LW link
(metavee.github.io)

[Question] What exactly did that great AI future involve again?

lemonhope · 28 Jan 2024 10:10 UTC
10 points
27 comments · 1 min read · LW link

Palworld development blog post

bhauth · 28 Jan 2024 5:56 UTC
81 points
12 comments · 1 min read · LW link
(note.com)

Virtually Rational - VRChat Meetup

28 Jan 2024 5:52 UTC
25 points
3 comments · 1 min read · LW link

[Stanford Daily] Table Talk

sudo · 28 Jan 2024 3:15 UTC
6 points
1 comment · 9 min read · LW link
(stanforddaily.com)

AI Law-a-Thon

Iknownothing · 28 Jan 2024 2:30 UTC
5 points
3 comments · 1 min read · LW link

Chapter 1 of How to Win Friends and Influence People

gull · 28 Jan 2024 0:32 UTC
49 points
5 comments · 17 min read · LW link
(www.google.com)

Don’t sleep on Coordination Takeoffs

trevor · 27 Jan 2024 19:55 UTC
62 points
24 comments · 5 min read · LW link

Epistemic Hell

rogersbacon · 27 Jan 2024 17:13 UTC
70 points
20 comments · 14 min read · LW link

David Burns Thinks Psychotherapy Is a Learnable Skill. Git Gud.

Morpheus · 27 Jan 2024 13:21 UTC
27 points
20 comments · 11 min read · LW link
(podcast.clearerthinking.org)

Aligned AI is dual use technology

lc · 27 Jan 2024 6:50 UTC
58 points
31 comments · 2 min read · LW link

Questions I’d Want to Ask an AGI+ to Test Its Understanding of Ethics

sweenesm · 26 Jan 2024 23:40 UTC
14 points
6 comments · 4 min read · LW link

An Invitation to Refrain from Downvoting Posts into Net-Negative Karma

MikkW · 26 Jan 2024 20:13 UTC
2 points
12 comments · 1 min read · LW link

The Good Balsamic Vinegar

jenn · 26 Jan 2024 19:30 UTC
51 points
4 comments · 2 min read · LW link
(jenn.site)

The Perspective-based Explanation to the Reflective Inconsistency Paradox

dadadarren · 26 Jan 2024 19:00 UTC
10 points
16 comments · 8 min read · LW link

To Boldly Code

StrivingForLegibility · 26 Jan 2024 18:25 UTC
25 points
4 comments · 3 min read · LW link

Incorporating Mechanism Design Into Decision Theory

StrivingForLegibility · 26 Jan 2024 18:25 UTC
17 points
4 comments · 4 min read · LW link

Making every researcher seek grants is a broken model

jasoncrawford · 26 Jan 2024 16:06 UTC
159 points
41 comments · 4 min read · LW link
(rootsofprogress.org)

Notes on Innocence

David Gross · 26 Jan 2024 14:45 UTC
13 points
21 comments · 19 min read · LW link

Stacked Laptop Monitor

jefftk · 26 Jan 2024 14:10 UTC
22 points
5 comments · 1 min read · LW link
(www.jefftk.com)

Surgery Works Well Without The FDA

Maxwell Tabarrok · 26 Jan 2024 13:31 UTC
42 points
28 comments · 4 min read · LW link
(maximumprogress.substack.com)

[Question] Workshop (hackathon, residence program, etc.) about for-profit AI Safety projects?

Roman Leventov · 26 Jan 2024 9:49 UTC
21 points
5 comments · 1 min read · LW link

Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI

26 Jan 2024 7:22 UTC
161 points
60 comments · 57 min read · LW link

Approximately Bayesian Reasoning: Knightian Uncertainty, Goodhart, and the Look-Elsewhere Effect

RogerDearnaley · 26 Jan 2024 3:58 UTC
16 points
2 comments · 11 min read · LW link

Musings on Cargo Cult Consciousness

Gareth Davidson · 25 Jan 2024 23:00 UTC
−14 points
11 comments · 17 min read · LW link

RAND report finds no effect of current LLMs on viability of bioterrorism attacks

StellaAthena · 25 Jan 2024 19:17 UTC
94 points
14 comments · 1 min read · LW link
(www.rand.org)

[Question] Bayesian Reflection Principles and Ignorance of the Future

crickets · 25 Jan 2024 19:00 UTC
5 points
3 comments · 1 min read · LW link

“Does your paradigm beget new, good, paradigms?”

Raemon · 25 Jan 2024 18:23 UTC
40 points
6 comments · 2 min read · LW link

AI #48: The Talk of Davos

Zvi · 25 Jan 2024 16:20 UTC
38 points
9 comments · 36 min read · LW link
(thezvi.wordpress.com)

Importing a Python File by Name

jefftk · 25 Jan 2024 16:00 UTC
12 points
7 comments · 1 min read · LW link
(www.jefftk.com)

[Repost] The Copenhagen Interpretation of Ethics

mesaoptimizer · 25 Jan 2024 15:20 UTC
72 points
4 comments · 5 min read · LW link
(web.archive.org)

Nash Bargaining between Subagents doesn’t solve the Shutdown Problem

A.H. · 25 Jan 2024 10:47 UTC
22 points
1 comment · 9 min read · LW link

Status-oriented spending

Adam Zerner · 25 Jan 2024 6:46 UTC
14 points
19 comments · 4 min read · LW link

Protecting agent boundaries

Chipmonk · 25 Jan 2024 4:13 UTC
11 points
6 comments · 2 min read · LW link

[Question] Is a random box of gas predictable after 20 seconds?

24 Jan 2024 23:00 UTC
37 points
35 comments · 1 min read · LW link

[Question] Will quantum randomness affect the 2028 election?

24 Jan 2024 22:54 UTC
66 points
52 comments · 1 min read · LW link

AISN #30: Investments in Compute and Military AI Plus, Japan and Singapore’s National AI Safety Institutes

24 Jan 2024 19:38 UTC
27 points
1 comment · 6 min read · LW link
(newsletter.safe.ai)

Krueger Lab AI Safety Internship 2024

Joey Bream · 24 Jan 2024 19:17 UTC
3 points
0 comments · 1 min read · LW link

Agents that act for reasons: a thought experiment

Michele Campolo · 24 Jan 2024 16:47 UTC
3 points
0 comments · 3 min read · LW link

Impact Assessment of AI Safety Camp (Arb Research)

Samuel Holton · 24 Jan 2024 16:19 UTC
10 points
0 comments · 11 min read · LW link
(forum.effectivealtruism.org)

The case for ensuring that powerful AIs are controlled

24 Jan 2024 16:11 UTC
264 points
66 comments · 28 min read · LW link

LLMs can strategically deceive while doing gain-of-function research

Igor Ivanov · 24 Jan 2024 15:45 UTC
33 points
4 comments · 11 min read · LW link

Monthly Roundup #14: January 2024

Zvi · 24 Jan 2024 12:50 UTC
38 points
22 comments · 44 min read · LW link
(thezvi.wordpress.com)

This might be the last AI Safety Camp

24 Jan 2024 9:33 UTC
195 points
34 comments · 1 min read · LW link

Global LessWrong/AC10 Meetup on VRChat

24 Jan 2024 5:44 UTC
15 points
2 comments · 1 min read · LW link

Humans aren’t fleeb.

Charlie Steiner · 24 Jan 2024 5:31 UTC
35 points
5 comments · 2 min read · LW link