AI Rights for Human Safety

Simon Goldstein · 1 Aug 2024 23:01 UTC
45 points
6 comments · 1 min read · LW link
(papers.ssrn.com)

Case Study: Interpreting, Manipulating, and Controlling CLIP With Sparse Autoencoders

Gytis Daujotas · 1 Aug 2024 21:08 UTC
44 points
6 comments · 7 min read · LW link

Optimizing Repeated Correlations

SatvikBeri · 1 Aug 2024 17:33 UTC
26 points
1 comment · 1 min read · LW link

The need for multi-agent experiments

Martín Soto · 1 Aug 2024 17:14 UTC
43 points
3 comments · 9 min read · LW link

Dragon Agnosticism

jefftk · 1 Aug 2024 17:00 UTC
92 points
75 comments · 2 min read · LW link
(www.jefftk.com)

Morristown ACX Meetup

mbrooks · 1 Aug 2024 16:29 UTC
2 points
1 comment · 1 min read · LW link

Some comments on intelligence

Viliam · 1 Aug 2024 15:17 UTC
30 points
5 comments · 3 min read · LW link

[Question] [Thought Experiment] Given a button to terminate all humanity, would you press it?

lorepieri · 1 Aug 2024 15:10 UTC
−2 points
9 comments · 1 min read · LW link

Are unpaid UN internships a good idea?

Cipolla · 1 Aug 2024 15:06 UTC
1 point
7 comments · 4 min read · LW link

AI #75: Math is Easier

Zvi · 1 Aug 2024 13:40 UTC
46 points
25 comments · 72 min read · LW link
(thezvi.wordpress.com)

Temporary Cognitive Hyperparameter Alteration

Jonathan Moregård · 1 Aug 2024 10:27 UTC
9 points
0 comments · 3 min read · LW link
(honestliving.substack.com)

Technology and Progress

Zero Contradictions · 1 Aug 2024 4:49 UTC
1 point
0 comments · 1 min read · LW link
(thewaywardaxolotl.blogspot.com)

Do Prediction Markets Work?

Benjamin_Sturisky · 1 Aug 2024 2:31 UTC
7 points
0 comments · 4 min read · LW link

2/3 Aussie & NZ AI Safety folk often or sometimes feel lonely or disconnected (and 16 other barriers to impact)

yanni kyriacos · 1 Aug 2024 1:15 UTC
12 points
0 comments · 8 min read · LW link

[Question] Can UBI overcome inflation and rent seeking?

Gordon Seidoh Worley · 1 Aug 2024 0:13 UTC
5 points
34 comments · 1 min read · LW link

Recommendation: reports on the search for missing hiker Bill Ewasko

eukaryote · 31 Jul 2024 22:15 UTC
169 points
28 comments · 14 min read · LW link
(eukaryotewritesblog.com)

Economics101 predicted the failure of special card payments for refugees, 3 months later whole of Germany wants to adopt it

Yanling Guo · 31 Jul 2024 21:09 UTC
3 points
3 comments · 2 min read · LW link

Ambiguity in Prediction Market Resolution is Still Harmful

aphyer · 31 Jul 2024 20:32 UTC
43 points
17 comments · 3 min read · LW link

AI labs can boost external safety research

Zach Stein-Perlman · 31 Jul 2024 19:30 UTC
31 points
1 comment · 1 min read · LW link

Women in AI Safety London Meetup

njg · 31 Jul 2024 18:13 UTC
1 point
0 comments · 1 min read · LW link

Constructing Neural Network Parameters with Downstream Trainability

ch271828n · 31 Jul 2024 18:13 UTC
1 point
0 comments · 1 min read · LW link
(github.com)

Want to work on US emerging tech policy? Consider the Horizon Fellowship.

Elika · 31 Jul 2024 18:12 UTC
4 points
0 comments · 1 min read · LW link

[Question] What are your cruxes for imprecise probabilities / decision rules?

Anthony DiGiovanni · 31 Jul 2024 15:42 UTC
36 points
32 comments · 1 min read · LW link

The new UK government’s stance on AI safety

Elliot Mckernon · 31 Jul 2024 15:23 UTC
17 points
0 comments · 4 min read · LW link

Solutions to problems with Bayesianism

B Jacobs · 31 Jul 2024 14:18 UTC
6 points
0 comments · 21 min read · LW link
(bobjacobs.substack.com)

Cat Sustenance Fortification

jefftk · 31 Jul 2024 2:30 UTC
14 points
7 comments · 1 min read · LW link
(www.jefftk.com)

Twitter thread on open-source AI

Richard_Ngo · 31 Jul 2024 0:26 UTC
33 points
6 comments · 2 min read · LW link
(x.com)

Twitter thread on AI takeover scenarios

Richard_Ngo · 31 Jul 2024 0:24 UTC
37 points
0 comments · 2 min read · LW link
(x.com)

Twitter thread on AI safety evals

Richard_Ngo · 31 Jul 2024 0:18 UTC
62 points
3 comments · 2 min read · LW link
(x.com)

Twitter thread on politics of AI safety

Richard_Ngo · 31 Jul 2024 0:00 UTC
35 points
2 comments · 1 min read · LW link
(x.com)

An ML paper on data stealing provides a construction for “gradient hacking”

David Scott Krueger (formerly: capybaralet) · 30 Jul 2024 21:44 UTC
21 points
1 comment · 1 min read · LW link
(arxiv.org)

Open Source Automated Interpretability for Sparse Autoencoder Features

30 Jul 2024 21:11 UTC
67 points
1 comment · 13 min read · LW link
(blog.eleuther.ai)

Caterpillars and Philosophy

Zero Contradictions · 30 Jul 2024 20:54 UTC
2 points
0 comments · 1 min read · LW link
(thewaywardaxolotl.blogspot.com)

François Chollet on the limitations of LLMs in reasoning

2PuNCheeZ · 30 Jul 2024 20:04 UTC
1 point
1 comment · 2 min read · LW link
(x.com)

Against AI As An Existential Risk

Noah Birnbaum · 30 Jul 2024 19:10 UTC
6 points
13 comments · 1 min read · LW link
(irrationalitycommunity.substack.com)

[Question] Is objective morality self-defeating?

dialectica · 30 Jul 2024 18:23 UTC
−4 points
3 comments · 2 min read · LW link

Limitations on the Interpretability of Learned Features from Sparse Dictionary Learning

Tom Angsten · 30 Jul 2024 16:36 UTC
6 points
0 comments · 9 min read · LW link

Self-Other Overlap: A Neglected Approach to AI Alignment

30 Jul 2024 16:22 UTC
193 points
43 comments · 12 min read · LW link

Investigating the Ability of LLMs to Recognize Their Own Writing

30 Jul 2024 15:41 UTC
32 points
0 comments · 15 min read · LW link

Can Generalized Adversarial Testing Enable More Rigorous LLM Safety Evals?

scasper · 30 Jul 2024 14:57 UTC
25 points
0 comments · 4 min read · LW link

RTFB: California’s AB 3211

Zvi · 30 Jul 2024 13:10 UTC
62 points
2 comments · 11 min read · LW link
(thezvi.wordpress.com)

If You Can Climb Up, You Can Climb Down

jefftk · 30 Jul 2024 0:00 UTC
34 points
9 comments · 1 min read · LW link
(www.jefftk.com)

What is Morality?

Zero Contradictions · 29 Jul 2024 19:19 UTC
−1 points
0 comments · 1 min read · LW link
(thewaywardaxolotl.blogspot.com)

Arch-anarchism and immortality

Peter lawless · 29 Jul 2024 18:10 UTC
−5 points
1 comment · 2 min read · LW link

AI Safety Newsletter #39: Implications of a Trump Administration for AI Policy Plus, Safety Engineering

29 Jul 2024 17:50 UTC
17 points
1 comment · 6 min read · LW link
(newsletter.safe.ai)

New Blog Post Against AI Doom

Noah Birnbaum · 29 Jul 2024 17:21 UTC
1 point
5 comments · 1 min read · LW link
(substack.com)

An Interpretability Illusion from Population Statistics in Causal Analysis

Daniel Tan · 29 Jul 2024 14:50 UTC
9 points
3 comments · 1 min read · LW link

[Question] How tokenization influences prompting?

Boris Kashirin · 29 Jul 2024 10:28 UTC
9 points
4 comments · 1 min read · LW link

Understanding Positional Features in Layer 0 SAEs

29 Jul 2024 9:36 UTC
43 points
0 comments · 5 min read · LW link

Prediction Markets Explained

Benjamin_Sturisky · 29 Jul 2024 8:02 UTC
8 points
0 comments · 9 min read · LW link