[Question] How does someone prove that their general intelligence is above average?

M. Y. Zuo · 16 Sep 2024 21:01 UTC
−3 points
12 comments · 1 min read · LW link

[Question] Does life actually locally *increase* entropy?

tailcalled · 16 Sep 2024 20:30 UTC
10 points
27 comments · 1 min read · LW link

Book review: Xenosystems

jessicata · 16 Sep 2024 20:17 UTC
50 points
18 comments · 37 min read · LW link
(unstableontology.com)

MIRI’s September 2024 newsletter

Harlan · 16 Sep 2024 18:15 UTC
46 points
0 comments · 1 min read · LW link

Generative ML in chemistry is bottlenecked by synthesis

Abhishaike Mahajan · 16 Sep 2024 16:31 UTC
38 points
2 comments · 14 min read · LW link
(www.owlposting.com)

Secret Collusion: Will We Know When to Unplug AI?

16 Sep 2024 16:07 UTC
56 points
7 comments · 31 min read · LW link

GPT-o1

Zvi · 16 Sep 2024 13:40 UTC
86 points
34 comments · 46 min read · LW link
(thezvi.wordpress.com)

[Question] Can subjunctive dependence emerge from a simplicity prior?

Daniel C · 16 Sep 2024 12:39 UTC
8 points
0 comments · 1 min read · LW link

Longevity and the Mind

George3d6 · 16 Sep 2024 9:43 UTC
5 points
2 comments · 10 min read · LW link

[Question] What’s the Deal with Logical Uncertainty?

Ape in the coat · 16 Sep 2024 8:11 UTC
32 points
28 comments · 2 min read · LW link

Reinforcement Learning from Information Bazaar Feedback, and other uses of information markets

Abhimanyu Pallavi Sudhir · 16 Sep 2024 1:04 UTC
5 points
1 comment · 5 min read · LW link

Hyperpolation

Gunnar_Zarncke · 15 Sep 2024 21:37 UTC
22 points
6 comments · 1 min read · LW link
(arxiv.org)

[Question] If I wanted to spend WAY more on AI, what would I spend it on?

Logan Zoellner · 15 Sep 2024 21:24 UTC
52 points
16 comments · 1 min read · LW link

Superintelligence Can’t Solve the Problem of Deciding What You’ll Do

Vladimir_Nesov · 15 Sep 2024 21:03 UTC
27 points
11 comments · 1 min read · LW link

For Limited Superintelligences, Epistemic Exclusion is Harder than Robustness to Logical Exploitation

Lorec · 15 Sep 2024 20:49 UTC
3 points
9 comments · 3 min read · LW link

Why I funded PIBBSS

Ryan Kidd · 15 Sep 2024 19:56 UTC
115 points
21 comments · 3 min read · LW link

My disagreements with “AGI ruin: A List of Lethalities”

Noosphere89 · 15 Sep 2024 17:22 UTC
36 points
46 comments · 18 min read · LW link

Thirty random thoughts about AI alignment

Lysandre Terrisse · 15 Sep 2024 16:24 UTC
6 points
1 comment · 29 min read · LW link

Proveably Safe Self Driving Cars [Modulo Assumptions]

Davidmanheim · 15 Sep 2024 13:58 UTC
27 points
29 comments · 8 min read · LW link

SCP Foundation—Anti memetic Division Hub

landscape_kiwi · 15 Sep 2024 13:40 UTC
6 points
1 comment · 1 min read · LW link
(scp-wiki.wikidot.com)

Did Christopher Hitchens change his mind about waterboarding?

Isaac King · 15 Sep 2024 8:28 UTC
171 points
22 comments · 7 min read · LW link

Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more

Michael Cohn · 15 Sep 2024 5:27 UTC
80 points
39 comments · 10 min read · LW link
(perplexedguide.net)

AlignedCut: Visual Concepts Discovery on Brain-Guided Universal Feature Space

Bogdan Ionut Cirstea · 14 Sep 2024 23:23 UTC
17 points
1 comment · 1 min read · LW link
(arxiv.org)

How you can help pass important AI legislation with 10 minutes of effort

ThomasW · 14 Sep 2024 22:10 UTC
59 points
2 comments · 2 min read · LW link

[Question] Calibration training for ‘percentile rankings’?

david reinstein · 14 Sep 2024 21:51 UTC
3 points
0 comments · 2 min read · LW link

OpenAI o1, Llama 4, and AlphaZero of LLMs

Vladimir_Nesov · 14 Sep 2024 21:27 UTC
83 points
25 comments · 1 min read · LW link

Forever Leaders

Justice Howard · 14 Sep 2024 20:55 UTC
6 points
9 comments · 1 min read · LW link

Emergent Authorship: Creativity à la Communing

gswonk · 14 Sep 2024 19:02 UTC
1 point
0 comments · 3 min read · LW link

Compression Moves for Prediction

adamShimi · 14 Sep 2024 17:51 UTC
20 points
0 comments · 7 min read · LW link
(epistemologicalfascinations.substack.com)

Pay-on-results personal growth: first success

Chipmonk · 14 Sep 2024 3:39 UTC
63 points
8 comments · 4 min read · LW link
(chrislakin.blog)

Avoiding the Bog of Moral Hazard for AI

Nathan Helm-Burger · 13 Sep 2024 21:24 UTC
19 points
13 comments · 2 min read · LW link

[Question] If I ask an LLM to think step by step, how big are the steps?

ryan_b · 13 Sep 2024 20:30 UTC
7 points
1 comment · 1 min read · LW link

Estimating Tail Risk in Neural Networks

Mark Xu · 13 Sep 2024 20:00 UTC
68 points
9 comments · 23 min read · LW link
(www.alignment.org)

If-Then Commitments for AI Risk Reduction [by Holden Karnofsky]

habryka · 13 Sep 2024 19:38 UTC
28 points
0 comments · 20 min read · LW link
(carnegieendowment.org)

Can startups be impactful in AI safety?

13 Sep 2024 19:00 UTC
15 points
0 comments · 6 min read · LW link

I just can’t agree with AI safety. Why am I wrong?

Ya Polkovnik · 13 Sep 2024 17:48 UTC
0 points
5 comments · 2 min read · LW link

Keeping it (less than) real: Against ℶ₂ possible people or worlds

quiet_NaN · 13 Sep 2024 17:29 UTC
17 points
3 comments · 9 min read · LW link

Why I’m bearish on mechanistic interpretability: the shards are not in the network

tailcalled · 13 Sep 2024 17:09 UTC
22 points
40 comments · 1 min read · LW link

Increasing the Span of the Set of Ideas

Jeffrey Heninger · 13 Sep 2024 15:52 UTC
6 points
1 comment · 9 min read · LW link

How difficult is AI Alignment?

Sammy Martin · 13 Sep 2024 15:47 UTC
44 points
6 comments · 23 min read · LW link

The Great Data Integration Schlep

sarahconstantin · 13 Sep 2024 15:40 UTC
267 points
16 comments · 9 min read · LW link
(sarahconstantin.substack.com)

“Real AGI”

Seth Herd · 13 Sep 2024 14:13 UTC
18 points
20 comments · 3 min read · LW link

AI, centralization, and the One Ring

owencb · 13 Sep 2024 14:00 UTC
80 points
12 comments · 8 min read · LW link
(strangecities.substack.com)

Evidence against Learned Search in a Chess-Playing Neural Network

p.b. · 13 Sep 2024 11:59 UTC
56 points
3 comments · 6 min read · LW link

My career exploration: Tools for building confidence

lynettebye · 13 Sep 2024 11:37 UTC
18 points
0 comments · 20 min read · LW link

Contra papers claiming superhuman AI forecasting

12 Sep 2024 18:10 UTC
182 points
16 comments · 7 min read · LW link

OpenAI o1

Zach Stein-Perlman · 12 Sep 2024 17:30 UTC
147 points
41 comments · 1 min read · LW link

How to Give in to Threats (without incentivizing them)

Mikhail Samin · 12 Sep 2024 15:55 UTC
54 points
26 comments · 5 min read · LW link

Open Problems in AIXI Agent Foundations

Cole Wyeth · 12 Sep 2024 15:38 UTC
42 points
2 comments · 10 min read · LW link

On the destruction of America’s best high school

Chris_Leong · 12 Sep 2024 15:30 UTC
−6 points
7 comments · 1 min read · LW link
(scottaaronson.blog)