Opportunistic Time-Management

Richard Henage · 13 Mar 2024 21:38 UTC
12 points
2 comments · 1 min read · LW link

AI governance and strategy: a list of research agendas and work that could be done.

13 Mar 2024 21:23 UTC
7 points
0 comments · 17 min read · LW link

Highlights from Lex Fridman’s interview of Yann LeCun

Joel Burget · 13 Mar 2024 20:58 UTC
48 points
15 comments · 41 min read · LW link

On the Latest TikTok Bill

Zvi · 13 Mar 2024 18:50 UTC
58 points
7 comments · 29 min read · LW link
(thezvi.wordpress.com)

[Question] Recommended book for a balanced take and lessons learned from covid pandemic response

Martin Hare Robertson · 13 Mar 2024 18:14 UTC
4 points
0 comments · 1 min read · LW link

ACX/LW Seattle spring meetup 2024

Nikita Sokolsky · 13 Mar 2024 17:24 UTC
12 points
3 comments · 1 min read · LW link

Laying the Foundations for Vision and Multimodal Mechanistic Interpretability & Open Problems

13 Mar 2024 17:09 UTC
44 points
13 comments · 14 min read · LW link

I was raised by devout Mormons, AMA [&|] Soliciting Advice

ErioirE · 13 Mar 2024 16:52 UTC
31 points
41 comments · 2 min read · LW link

Relational Agency: Consistently Reaching Out

Jonathan Moregård · 13 Mar 2024 14:34 UTC
16 points
0 comments · 5 min read · LW link
(open.substack.com)

[Question] What could a policy banning AGI look like?

TsviBT · 13 Mar 2024 14:19 UTC
76 points
23 comments · 3 min read · LW link

Clickbait Soapboxing

DaystarEld · 13 Mar 2024 14:09 UTC
24 points
15 comments · 3 min read · LW link
(daystareld.com)

Virtual AI Safety Unconference 2024

13 Mar 2024 13:54 UTC
14 points
0 comments · 1 min read · LW link

Jobs, Relationships, and Other Cults

13 Mar 2024 5:58 UTC
40 points
9 comments · 35 min read · LW link

How do you improve the quality of your drinking water?

Alex K. Chen (parrot) · 13 Mar 2024 0:37 UTC
11 points
2 comments · 1 min read · LW link

The Parable Of The Fallen Pendulum—Part 2

johnswentworth · 12 Mar 2024 21:41 UTC
77 points
8 comments · 4 min read · LW link

Open consultancy: Letting untrusted AIs choose what answer to argue for

Fabien Roger · 12 Mar 2024 20:38 UTC
35 points
5 comments · 5 min read · LW link

[Question] Is anyone working on formally verified AI toolchains?

metachirality · 12 Mar 2024 19:36 UTC
17 points
4 comments · 1 min read · LW link

Transformer Debugger

Henk Tillman · 12 Mar 2024 19:08 UTC
25 points
0 comments · 1 min read · LW link
(github.com)

Superforecasting the Origins of the Covid-19 Pandemic

DanielFilan · 12 Mar 2024 19:01 UTC
62 points
0 comments · 1 min read · LW link
(goodjudgment.substack.com)

minimum viable action

Sindhu Prasad · 12 Mar 2024 16:06 UTC
1 point
0 comments · 3 min read · LW link

Hardball questions for the Gemini Congressional Hearing

Michael Thiessen · 12 Mar 2024 15:27 UTC
−11 points
2 comments · 1 min read · LW link

OpenAI: The Board Expands

Zvi · 12 Mar 2024 14:00 UTC
92 points
1 comment · 30 min read · LW link
(thezvi.wordpress.com)

Update on Developing an Ethics Calculator to Align an AGI to

sweenesm · 12 Mar 2024 12:33 UTC
4 points
2 comments · 8 min read · LW link

[Question] How do you identify and counteract your biases in decision-making?

warrenjordan · 12 Mar 2024 5:01 UTC
2 points
1 comment · 1 min read · LW link

How Much Have I Been Playing?

jefftk · 12 Mar 2024 2:10 UTC
9 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought

Miles Turpin · 11 Mar 2024 23:46 UTC
16 points
0 comments · 1 min read · LW link
(arxiv.org)

AI Safety Action Plan—A report commissioned by the US State Department

agucova · 11 Mar 2024 22:14 UTC
22 points
1 comment · 1 min read · LW link
(www.gladstone.ai)

A discussion of AI risk and the cost/benefit calculation of stopping or pausing AI development

DuncanFowler · 11 Mar 2024 21:41 UTC
1 point
0 comments · 1 min read · LW link

Among the A.I. Doomsayers—The New Yorker

agucova · 11 Mar 2024 21:35 UTC
12 points
1 comment · 1 min read · LW link
(www.newyorker.com)

Be More Katja

Nathan Young · 11 Mar 2024 21:12 UTC
53 points
0 comments · 3 min read · LW link

AI Incident Reporting: A Regulatory Review

11 Mar 2024 21:03 UTC
16 points
0 comments · 6 min read · LW link

Results from an Adversarial Collaboration on AI Risk (FRI)

11 Mar 2024 20:00 UTC
60 points
3 comments · 9 min read · LW link
(forecastingresearch.org)

The Astronomical Sacrifice Dilemma

Matthew McRedmond · 11 Mar 2024 19:58 UTC
15 points
3 comments · 4 min read · LW link

Epiphenomenalism leads to eliminativism about qualia

Clément L · 11 Mar 2024 19:53 UTC
4 points
0 comments · 7 min read · LW link

Is analyzing LLM behavior a valid means for assessing potential consciousness, as described by global workspace theory and higher order theories?

amelia · 11 Mar 2024 19:37 UTC
1 point
1 comment · 12 min read · LW link

The Best Essay (Paul Graham)

Chris_Leong · 11 Mar 2024 19:25 UTC
25 points
2 comments · 1 min read · LW link
(paulgraham.com)

Open Thread Spring 2024

habryka · 11 Mar 2024 19:17 UTC
22 points
160 comments · 1 min read · LW link

New social credit formalizations

KatjaGrace · 11 Mar 2024 19:00 UTC
23 points
3 comments · 2 min read · LW link
(worldspiritsockpuppet.com)

How disagreements about Evidential Correlations could be settled

Martín Soto · 11 Mar 2024 18:28 UTC
11 points
3 comments · 4 min read · LW link

“Artificial General Intelligence”: an extremely brief FAQ

Steven Byrnes · 11 Mar 2024 17:49 UTC
70 points
6 comments · 2 min read · LW link

Some (problematic) aesthetics of what constitutes good work in academia

Steven Byrnes · 11 Mar 2024 17:47 UTC
147 points
12 comments · 12 min read · LW link

Storable Votes with a Pay as you win mechanism: a contribution for institutional design

Arturo Macias · 11 Mar 2024 15:58 UTC
17 points
19 comments · 2 min read · LW link

Tend to your clarity, not your confusion

Severin T. Seehrich · 11 Mar 2024 15:09 UTC
23 points
1 comment · 6 min read · LW link

[Question] What do we know about the AI knowledge and views, especially about existential risk, of the new OpenAI board members?

Zvi · 11 Mar 2024 14:55 UTC
60 points
2 comments · 2 min read · LW link

“How could I have thought that faster?”

mesaoptimizer · 11 Mar 2024 10:56 UTC
213 points
32 comments · 2 min read · LW link
(twitter.com)

Simple versus Short: Higher-order degeneracy and error-correction

Daniel Murfet · 11 Mar 2024 7:52 UTC
111 points
6 comments · 13 min read · LW link

Deconstructing Bostrom’s Classic Argument for AI Doom

Nora Belrose · 11 Mar 2024 5:58 UTC
16 points
14 comments · 1 min read · LW link
(www.youtube.com)

Advice Needed: Does Using an LLM Compromise My Personal Epistemic Security?

Naomi · 11 Mar 2024 5:57 UTC
17 points
7 comments · 2 min read · LW link

Some Thoughts on Concept Formation and Use in Agents

CatGoddess · 11 Mar 2024 5:03 UTC
12 points
0 comments · 8 min read · LW link

Steelmanning as an especially insidious form of strawmanning

Cornelius Dybdahl · 11 Mar 2024 2:25 UTC
10 points
13 comments · 5 min read · LW link