LessWrong/ACX meetup Transilvanya tour—Alba Iulia

Marius Adrian Nicoară · 19 Jun 2024 19:56 UTC
1 point · 1 comment · 1 min read · LW link

Chronic perfectionism through the eyes of school reports

Stuart Johnson · 19 Jun 2024 17:46 UTC
13 points · 3 comments · 1 min read · LW link

Ilya Sutskever created a new AGI startup

harfe · 19 Jun 2024 17:17 UTC
95 points · 35 comments · 1 min read · LW link
(ssi.inc)

Beyond the Board: Exploring AI Robustness Through Go

AdamGleave · 19 Jun 2024 16:40 UTC
41 points · 2 comments · 1 min read · LW link
(far.ai)

A study on cults and non-cults—answer questions about a group and get a cult score

spencerg · 19 Jun 2024 14:30 UTC
1 point · 8 comments · 1 min read · LW link
(www.guidedtrack.com)

Workshop: data analysis for software engineers

Derek M. Jones · 19 Jun 2024 14:20 UTC
2 points · 0 comments · 1 min read · LW link

FLEXIBLE AND ADAPTABLE LLM’s WITH CONTINUOUS SELF TRAINING

Escaque 66 · 19 Jun 2024 14:17 UTC
−11 points · 0 comments · 3 min read · LW link

Surviving Seveneves

Yair Halberstadt · 19 Jun 2024 13:11 UTC
41 points · 4 comments · 11 min read · LW link

Self responsibility

Elo · 19 Jun 2024 10:17 UTC
17 points · 3 comments · 2 min read · LW link

Gizmo Watch Review

jefftk · 18 Jun 2024 20:00 UTC
22 points · 3 comments · 6 min read · LW link
(www.jefftk.com)

Boycott OpenAI

PeterMcCluskey · 18 Jun 2024 19:52 UTC
163 points · 26 comments · 1 min read · LW link
(bayesianinvestor.com)

Loving a world you don’t trust

Joe Carlsmith · 18 Jun 2024 19:31 UTC
134 points · 13 comments · 33 min read · LW link

Book review: the Iliad

philh · 18 Jun 2024 18:50 UTC
31 points · 2 comments · 14 min read · LW link
(reasonableapproximation.net)

AI Safety Newsletter #37: US Launches Antitrust Investigations. Plus, recent criticisms of OpenAI and Anthropic, and a summary of Situational Awareness

18 Jun 2024 18:07 UTC
8 points · 0 comments · 5 min read · LW link
(newsletter.safe.ai)

Suffering Is Not Pain

jbkjr · 18 Jun 2024 18:04 UTC
34 points · 45 comments · 5 min read · LW link
(jbkjr.me)

Lamini’s Targeted Hallucination Reduction May Be a Big Deal for Job Automation

sweenesm · 18 Jun 2024 15:29 UTC
3 points · 0 comments · 1 min read · LW link

On DeepMind’s Frontier Safety Framework

Zvi · 18 Jun 2024 13:30 UTC
37 points · 4 comments · 8 min read · LW link
(thezvi.wordpress.com)

[Linkpost] Transcendence: Generative Models Can Outperform The Experts That Train Them

Bogdan Ionut Cirstea · 18 Jun 2024 11:00 UTC
19 points · 3 comments · 1 min read · LW link
(arxiv.org)

I would have shit in that alley, too

Declan Molony · 18 Jun 2024 4:41 UTC
437 points · 134 comments · 4 min read · LW link

[Question] The thing I don’t understand about AGI

Jeremy Kalfus · 18 Jun 2024 4:25 UTC
7 points · 12 comments · 1 min read · LW link

Calling My Second Family Dance

jefftk · 18 Jun 2024 2:20 UTC
11 points · 0 comments · 1 min read · LW link
(www.jefftk.com)

LLM-Secured Systems: A General-Purpose Tool For Structured Transparency

ozziegooen · 18 Jun 2024 0:21 UTC
10 points · 1 comment · 1 min read · LW link

D&D.Sci Alchemy: Archmage Anachronos and the Supply Chain Issues Evaluation & Ruleset

aphyer · 17 Jun 2024 21:29 UTC
51 points · 11 comments · 6 min read · LW link

Questionable Narratives of “Situational Awareness”

fergusq · 17 Jun 2024 21:01 UTC
0 points · 1 comment · 1 min read · LW link
(forum.effectivealtruism.org)

ZuVillage Georgia – Mission Statement

Burns · 17 Jun 2024 19:53 UTC
3 points · 3 comments · 9 min read · LW link

Getting 50% (SoTA) on ARC-AGI with GPT-4o

ryan_greenblatt · 17 Jun 2024 18:44 UTC
262 points · 50 comments · 13 min read · LW link

Sycophancy to subterfuge: Investigating reward tampering in large language models

17 Jun 2024 18:41 UTC
161 points · 22 comments · 8 min read · LW link
(arxiv.org)

Labor Participation is a High-Priority AI Alignment Risk

alex · 17 Jun 2024 18:09 UTC
4 points · 0 comments · 17 min read · LW link

Towards a Less Bullshit Model of Semantics

17 Jun 2024 15:51 UTC
94 points · 44 comments · 21 min read · LW link

Analysing Adversarial Attacks with Linear Probing

17 Jun 2024 14:16 UTC
9 points · 0 comments · 8 min read · LW link

What’s the future of AI hardware?

Itay Dreyfus · 17 Jun 2024 13:05 UTC
2 points · 0 comments · 8 min read · LW link
(productidentity.co)

OpenAI #8: The Right to Warn

Zvi · 17 Jun 2024 12:00 UTC
97 points · 8 comments · 34 min read · LW link
(thezvi.wordpress.com)

Logit Prisms: Decomposing Transformer Outputs for Mechanistic Interpretability

ntt123 · 17 Jun 2024 11:46 UTC
5 points · 4 comments · 6 min read · LW link
(neuralblog.github.io)

Weak AGIs Kill Us First

yrimon · 17 Jun 2024 11:13 UTC
15 points · 4 comments · 9 min read · LW link

[Linkpost] Guardian article covering Lightcone Infrastructure, Manifest and CFAR ties to FTX

ROM · 17 Jun 2024 10:05 UTC
8 points · 9 comments · 1 min read · LW link
(www.theguardian.com)

Fat Tails Discourage Compromise

niplav · 17 Jun 2024 9:39 UTC
53 points · 5 comments · 1 min read · LW link

Our Intuitions About The Criminal Justice System Are Screwed Up

omnizoid · 17 Jun 2024 6:22 UTC
14 points · 14 comments · 4 min read · LW link

A Case for Cooperation: Dependence in the Prisoner’s Dilemma

grantstenger · 17 Jun 2024 1:10 UTC
9 points · 2 comments · 23 min read · LW link

Degeneracies are sticky for SGD

16 Jun 2024 21:19 UTC
56 points · 1 comment · 16 min read · LW link

YM’s Shortform

YM · 16 Jun 2024 20:57 UTC
3 points · 1 comment · 1 min read · LW link

“Is-Ought” is Fraught

MiSteR Kittty · 16 Jun 2024 17:27 UTC
−5 points · 2 comments · 1 min read · LW link

The type of AI humanity has chosen to create so far is unsafe, for soft social reasons and not technical ones.

l8c · 16 Jun 2024 13:31 UTC
−6 points · 2 comments · 1 min read · LW link

Self-Control of LLM Behaviors by Compressing Suffix Gradient into Prefix Controller

Henry Cai · 16 Jun 2024 13:01 UTC
7 points · 0 comments · 7 min read · LW link
(arxiv.org)

CIV: a story

Richard_Ngo · 15 Jun 2024 22:36 UTC
98 points · 6 comments · 9 min read · LW link
(www.narrativeark.xyz)

Yann LeCun: We only design machines that minimize costs [therefore they are safe]

tailcalled · 15 Jun 2024 17:25 UTC
19 points · 8 comments · 1 min read · LW link
(twitter.com)

(Appetitive, Consummatory) ≈ (RL, reflex)

Steven Byrnes · 15 Jun 2024 15:57 UTC
38 points · 1 comment · 3 min read · LW link

Two LessWrong speed friending experiments

15 Jun 2024 10:52 UTC
52 points · 3 comments · 4 min read · LW link

Claude’s dark spiritual AI futurism

jessicata · 15 Jun 2024 0:57 UTC
22 points · 7 comments · 43 min read · LW link
(unstableontology.com)

[Question] When is “unfalsifiable implies false” incorrect?

VojtaKovarik · 15 Jun 2024 0:28 UTC
3 points · 11 comments · 1 min read · LW link

MIRI’s June 2024 Newsletter

Harlan · 14 Jun 2024 23:02 UTC
74 points · 20 comments · 2 min read · LW link
(intelligence.org)