Gradations of moral weight

MichaelStJules · 29 Feb 2024 23:08 UTC
1 point
0 comments · 1 min read · LW link

Approaching Human-Level Forecasting with Language Models

29 Feb 2024 22:36 UTC
60 points
6 comments · 3 min read · LW link

Paper review: “The Unreasonable Effectiveness of Easy Training Data for Hard Tasks”

Vassil Tashev · 29 Feb 2024 18:44 UTC
11 points
0 comments · 4 min read · LW link

What’s in the box?! – Towards interpretability by distinguishing niches of value within neural networks.

Joshua Clancy · 29 Feb 2024 18:33 UTC
3 points
4 comments · 128 min read · LW link

Short Post: Discerning Truth from Trash

FinalFormal2 · 29 Feb 2024 18:09 UTC
−2 points
0 comments · 1 min read · LW link

AI #53: One More Leap

Zvi · 29 Feb 2024 16:10 UTC
45 points
0 comments · 38 min read · LW link
(thezvi.wordpress.com)

Cryonics p(success) estimates are only weakly associated with interest in pursuing cryonics in the LW 2023 Survey

Andy_McKenzie · 29 Feb 2024 14:47 UTC
28 points
6 comments · 1 min read · LW link

Bengio’s Alignment Proposal: “Towards a Cautious Scientist AI with Convergent Safety Bounds”

mattmacdermott · 29 Feb 2024 13:59 UTC
76 points
19 comments · 14 min read · LW link
(yoshuabengio.org)

Tips for Empirical Alignment Research

Ethan Perez · 29 Feb 2024 6:04 UTC
154 points
4 comments · 23 min read · LW link

[Question] Supposing the 1bit LLM paper pans out

O O · 29 Feb 2024 5:31 UTC
27 points
11 comments · 1 min read · LW link

Can RLLMv3’s ability to defend against jailbreaks be attributed to datasets containing stories about Jung’s shadow integration theory?

MiguelDev · 29 Feb 2024 5:13 UTC
7 points
2 comments · 11 min read · LW link

Post series on “Liability Law for reducing Existential Risk from AI”

Nora_Ammann · 29 Feb 2024 4:39 UTC
42 points
1 comment · 1 min read · LW link
(forum.effectivealtruism.org)

Tour Retrospective February 2024

jefftk · 29 Feb 2024 3:50 UTC
10 points
0 comments · 4 min read · LW link
(www.jefftk.com)

Locating My Eyes (Part 3 of “The Sense of Physical Necessity”)

LoganStrohl · 29 Feb 2024 3:09 UTC
43 points
4 comments · 22 min read · LW link

Conspiracy Theorists Aren’t Ignorant. They’re Bad At Epistemology.

omnizoid · 28 Feb 2024 23:39 UTC
18 points
10 comments · 5 min read · LW link

Discovering alignment windfalls reduces AI risk

28 Feb 2024 21:23 UTC
15 points
1 comment · 8 min read · LW link
(blog.elicit.com)

my theory of the industrial revolution

bhauth · 28 Feb 2024 21:07 UTC
23 points
7 comments · 3 min read · LW link
(www.bhauth.com)

Wholesomeness and Effective Altruism

owencb · 28 Feb 2024 20:28 UTC
42 points
3 comments · 1 min read · LW link

timestamping through the Singularity

throwaway918119127 · 28 Feb 2024 19:09 UTC
−2 points
4 comments · 8 min read · LW link

Evidential Cooperation in Large Worlds: Potential Objections & FAQ

28 Feb 2024 18:58 UTC
42 points
5 comments · 1 min read · LW link

Timaeus’s First Four Months

28 Feb 2024 17:01 UTC
172 points
6 comments · 6 min read · LW link

Notes on control evaluations for safety cases

28 Feb 2024 16:15 UTC
48 points
0 comments · 32 min read · LW link

Corporate Governance for Frontier AI Labs: A Research Agenda

Matthew Wearden · 28 Feb 2024 11:29 UTC
4 points
0 comments · 16 min read · LW link
(matthewwearden.co.uk)

How AI Will Change Education

robotelvis · 28 Feb 2024 5:30 UTC
6 points
3 comments · 5 min read · LW link
(messyprogress.substack.com)

Band Lessons?

jefftk · 28 Feb 2024 3:00 UTC
13 points
3 comments · 1 min read · LW link
(www.jefftk.com)

New LessWrong review winner UI (“The LeastWrong” section and full-art post pages)

kave · 28 Feb 2024 2:42 UTC
105 points
64 comments · 1 min read · LW link

Counting arguments provide no evidence for AI doom

27 Feb 2024 23:03 UTC
95 points
188 comments · 14 min read · LW link

Which animals realize which types of subjective welfare?

MichaelStJules · 27 Feb 2024 19:31 UTC
4 points
0 comments · 1 min read · LW link

Biosecurity and AI: Risks and Opportunities

Steve Newman · 27 Feb 2024 18:45 UTC
11 points
1 comment · 7 min read · LW link
(www.safe.ai)

The Gemini Incident Continues

Zvi · 27 Feb 2024 16:00 UTC
45 points
6 comments · 48 min read · LW link
(thezvi.wordpress.com)

How I internalized my achievements to better deal with negative feelings

Raymond Koopmanschap · 27 Feb 2024 15:10 UTC
42 points
7 comments · 6 min read · LW link

On Frustration and Regret

silentbob · 27 Feb 2024 12:19 UTC
8 points
0 comments · 4 min read · LW link

Facts vs Interpretations—An Exercise in Cognitive Reframing

Declan Molony · 27 Feb 2024 7:57 UTC
15 points
0 comments · 3 min read · LW link

San Francisco ACX Meetup “Third Saturday”

27 Feb 2024 7:07 UTC
7 points
0 comments · 1 min read · LW link

Examining Language Model Performance with Reconstructed Activations using Sparse Autoencoders

27 Feb 2024 2:43 UTC
42 points
16 comments · 15 min read · LW link

Project idea: an iterated prisoner’s dilemma competition/game

Adam Zerner · 26 Feb 2024 23:06 UTC
8 points
0 comments · 5 min read · LW link

Acting Wholesomely

owencb · 26 Feb 2024 21:49 UTC
58 points
64 comments · 1 min read · LW link

Getting rational now or later: navigating procrastination and time-inconsistent preferences for new rationalists

milo_thoughts · 26 Feb 2024 19:38 UTC
1 point
0 comments · 8 min read · LW link

[Question] Whom Do You Trust?

JackOfAllTrades · 26 Feb 2024 19:38 UTC
1 point
0 comments · 1 min read · LW link

Boundary Violations vs Boundary Dissolution

Chipmonk · 26 Feb 2024 18:59 UTC
8 points
4 comments · 1 min read · LW link

[Question] Can we get an AI to “do our alignment homework for us”?

Chris_Leong · 26 Feb 2024 7:56 UTC
53 points
33 comments · 1 min read · LW link

How I build and run behavioral interviews

benkuhn · 26 Feb 2024 5:50 UTC
32 points
6 comments · 4 min read · LW link
(www.benkuhn.net)

Hidden Cognition Detection Methods and Benchmarks

Paul Colognese · 26 Feb 2024 5:31 UTC
22 points
11 comments · 4 min read · LW link

Cellular respiration as a steam engine

dkl9 · 25 Feb 2024 20:17 UTC
24 points
1 comment · 1 min read · LW link
(dkl9.net)

[Question] Rationalism and Dependent Origination?

Baometrus · 25 Feb 2024 18:16 UTC
2 points
3 comments · 1 min read · LW link

China-AI forecasts

NathanBarnard · 25 Feb 2024 16:49 UTC
39 points
29 comments · 6 min read · LW link

Ideological Bayesians

Kevin Dorst · 25 Feb 2024 14:17 UTC
95 points
4 comments · 10 min read · LW link
(kevindorst.substack.com)

Deconfusing In-Context Learning

Arjun Panickssery · 25 Feb 2024 9:48 UTC
37 points
1 comment · 2 min read · LW link

Everett branches, inter-light cone trade and other alien matters: Appendix to “An ECL explainer”

24 Feb 2024 23:09 UTC
17 points
0 comments · 1 min read · LW link

Cooperating with aliens and AGIs: An ECL explainer

24 Feb 2024 22:58 UTC
51 points
8 comments · 1 min read · LW link