Hyperpolation

Gunnar_Zarncke · 15 Sep 2024 21:37 UTC
22 points
6 comments · 1 min read · LW link
(arxiv.org)

[Question] If I wanted to spend WAY more on AI, what would I spend it on?

Logan Zoellner · 15 Sep 2024 21:24 UTC
52 points
16 comments · 1 min read · LW link

Superintelligence Can’t Solve the Problem of Deciding What You’ll Do

Vladimir_Nesov · 15 Sep 2024 21:03 UTC
27 points
11 comments · 1 min read · LW link

For Limited Superintelligences, Epistemic Exclusion is Harder than Robustness to Logical Exploitation

Lorec · 15 Sep 2024 20:49 UTC
3 points
9 comments · 3 min read · LW link

Why I funded PIBBSS

Ryan Kidd · 15 Sep 2024 19:56 UTC
115 points
21 comments · 3 min read · LW link

My disagreements with “AGI ruin: A List of Lethalities”

Noosphere89 · 15 Sep 2024 17:22 UTC
36 points
46 comments · 18 min read · LW link

Thirty random thoughts about AI alignment

Lysandre Terrisse · 15 Sep 2024 16:24 UTC
6 points
1 comment · 29 min read · LW link

Proveably Safe Self Driving Cars [Modulo Assumptions]

Davidmanheim · 15 Sep 2024 13:58 UTC
33 points
26 comments · 8 min read · LW link

SCP Foundation - Anti memetic Division Hub

landscape_kiwi · 15 Sep 2024 13:40 UTC
6 points
1 comment · 1 min read · LW link
(scp-wiki.wikidot.com)

Did Christopher Hitchens change his mind about waterboarding?

Isaac King · 15 Sep 2024 8:28 UTC
171 points
22 comments · 7 min read · LW link

Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more

Michael Cohn · 15 Sep 2024 5:27 UTC
80 points
39 comments · 10 min read · LW link
(perplexedguide.net)

AlignedCut: Visual Concepts Discovery on Brain-Guided Universal Feature Space

Bogdan Ionut Cirstea · 14 Sep 2024 23:23 UTC
17 points
1 comment · 1 min read · LW link
(arxiv.org)

How you can help pass important AI legislation with 10 minutes of effort

ThomasW · 14 Sep 2024 22:10 UTC
59 points
2 comments · 2 min read · LW link

[Question] Calibration training for ‘percentile rankings’?

david reinstein · 14 Sep 2024 21:51 UTC
3 points
0 comments · 2 min read · LW link

OpenAI o1, Llama 4, and AlphaZero of LLMs

Vladimir_Nesov · 14 Sep 2024 21:27 UTC
83 points
25 comments · 1 min read · LW link

Forever Leaders

Justice Howard · 14 Sep 2024 20:55 UTC
6 points
9 comments · 1 min read · LW link

Emergent Authorship: Creativity à la Communing

gswonk · 14 Sep 2024 19:02 UTC
1 point
0 comments · 3 min read · LW link

Compression Moves for Prediction

adamShimi · 14 Sep 2024 17:51 UTC
20 points
0 comments · 7 min read · LW link
(epistemologicalfascinations.substack.com)

Pay-on-results personal growth: first success

Chipmonk · 14 Sep 2024 3:39 UTC
63 points
7 comments · 3 min read · LW link
(chrislakin.blog)

Avoiding the Bog of Moral Hazard for AI

Nathan Helm-Burger · 13 Sep 2024 21:24 UTC
17 points
13 comments · 2 min read · LW link

[Question] If I ask an LLM to think step by step, how big are the steps?

ryan_b · 13 Sep 2024 20:30 UTC
7 points
1 comment · 1 min read · LW link

Estimating Tail Risk in Neural Networks

Mark Xu · 13 Sep 2024 20:00 UTC
68 points
9 comments · 23 min read · LW link
(www.alignment.org)

If-Then Commitments for AI Risk Reduction [by Holden Karnofsky]

habryka · 13 Sep 2024 19:38 UTC
28 points
0 comments · 20 min read · LW link
(carnegieendowment.org)

Can startups be impactful in AI safety?

13 Sep 2024 19:00 UTC
15 points
0 comments · 6 min read · LW link

I just can’t agree with AI safety. Why am I wrong?

Ya Polkovnik · 13 Sep 2024 17:48 UTC
0 points
5 comments · 2 min read · LW link

Keeping it (less than) real: Against ℶ₂ possible people or worlds

quiet_NaN · 13 Sep 2024 17:29 UTC
9 points
0 comments · 9 min read · LW link

Why I’m bearish on mechanistic interpretability: the shards are not in the network

tailcalled · 13 Sep 2024 17:09 UTC
22 points
40 comments · 1 min read · LW link

Increasing the Span of the Set of Ideas

Jeffrey Heninger · 13 Sep 2024 15:52 UTC
6 points
1 comment · 9 min read · LW link

How difficult is AI Alignment?

Sammy Martin · 13 Sep 2024 15:47 UTC
44 points
6 comments · 23 min read · LW link

The Great Data Integration Schlep

sarahconstantin · 13 Sep 2024 15:40 UTC
263 points
16 comments · 9 min read · LW link
(sarahconstantin.substack.com)

“Real AGI”

Seth Herd · 13 Sep 2024 14:13 UTC
18 points
20 comments · 3 min read · LW link

AI, centralization, and the One Ring

owencb · 13 Sep 2024 14:00 UTC
64 points
11 comments · 8 min read · LW link
(strangecities.substack.com)

Evidence against Learned Search in a Chess-Playing Neural Network

p.b. · 13 Sep 2024 11:59 UTC
56 points
3 comments · 6 min read · LW link

My career exploration: Tools for building confidence

lynettebye · 13 Sep 2024 11:37 UTC
17 points
0 comments · 20 min read · LW link

Contra papers claiming superhuman AI forecasting

12 Sep 2024 18:10 UTC
181 points
16 comments · 7 min read · LW link

OpenAI o1

Zach Stein-Perlman · 12 Sep 2024 17:30 UTC
147 points
41 comments · 1 min read · LW link

How to Give in to Threats (without incentivizing them)

Mikhail Samin · 12 Sep 2024 15:55 UTC
52 points
26 comments · 5 min read · LW link

Open Problems in AIXI Agent Foundations

Cole Wyeth · 12 Sep 2024 15:38 UTC
41 points
2 comments · 10 min read · LW link

On the destruction of America’s best high school

Chris_Leong · 12 Sep 2024 15:30 UTC
−6 points
7 comments · 1 min read · LW link
(scottaaronson.blog)

Optimising under arbitrarily many constraint equations

dkl9 · 12 Sep 2024 14:59 UTC
6 points
0 comments · 3 min read · LW link
(dkl9.net)

AI #81: Alpha Proteo

Zvi · 12 Sep 2024 13:00 UTC
59 points
3 comments · 35 min read · LW link
(thezvi.wordpress.com)

[Question] When can I be numerate?

FinalFormal2 · 12 Sep 2024 4:05 UTC
25 points
3 comments · 1 min read · LW link

A Nonconstructive Existence Proof of Aligned Superintelligence

Roko · 12 Sep 2024 3:20 UTC
0 points
78 comments · 1 min read · LW link
(transhumanaxiology.substack.com)

Collapsing the Belief/Knowledge Distinction

Jeremias · 11 Sep 2024 21:24 UTC
−7 points
8 comments · 1 min read · LW link

Programming Refusal with Conditional Activation Steering

Bruce W. Lee · 11 Sep 2024 20:57 UTC
41 points
0 comments · 11 min read · LW link
(arxiv.org)

Checking public figures on whether they “answered the question”: quick analysis from Harris/Trump debate, and a proposal

david reinstein · 11 Sep 2024 20:25 UTC
7 points
4 comments · 1 min read · LW link
(open.substack.com)

AI Safety Newsletter #41: The Next Generation of Compute Scale. Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics

11 Sep 2024 19:14 UTC
5 points
1 comment · 5 min read · LW link
(newsletter.safe.ai)

Refactoring cryonics as structural brain preservation

Andy_McKenzie · 11 Sep 2024 18:36 UTC
102 points
14 comments · 3 min read · LW link

[Question] Is this a Pivotal Weak Act? Creating bacteria that decompose metal

doomyeser · 11 Sep 2024 18:07 UTC
9 points
9 comments · 3 min read · LW link

How to discover the nature of sentience, and ethics

Gustavo Ramires · 11 Sep 2024 17:22 UTC
−2 points
4 comments · 5 min read · LW link