The Core Values of Life—A proposal for a universal theory of ethics

Thomas Gjøstøl · 10 Feb 2024 21:48 UTC
2 points
4 comments · 18 min read · LW link

And All the Shoggoths Merely Players

Zack_M_Davis · 10 Feb 2024 19:56 UTC
160 points
57 comments · 12 min read · LW link

Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

garrison · 10 Feb 2024 19:52 UTC
198 points
52 comments · 1 min read · LW link
(garrisonlovely.substack.com)

The lattice of partial updatelessness

Martín Soto · 10 Feb 2024 17:34 UTC
21 points
5 comments · 5 min read · LW link

A Strange ACH Corner Case

jefftk · 10 Feb 2024 3:00 UTC
27 points
2 comments · 2 min read · LW link
(www.jefftk.com)

Dreams of AI alignment: The danger of suggestive names

TurnTrout · 10 Feb 2024 1:22 UTC
103 points
59 comments · 4 min read · LW link

Scenario planning for AI x-risk

Corin Katzke · 10 Feb 2024 0:14 UTC
24 points
12 comments · 14 min read · LW link
(forum.effectivealtruism.org)

Close the Gates to an Inhuman Future: How and why we should choose to not develop superhuman general-purpose artificial intelligence

aaguirre · 9 Feb 2024 20:25 UTC
13 points
0 comments · 1 min read · LW link
(arxiv.org)

[Crosspost] Deep Dive: The Coming Technological Singularity—How to survive in a Post-human Era

Suzie. EXE · 9 Feb 2024 18:49 UTC
2 points
2 comments · 9 min read · LW link

The Ideal Speech Situation as a Tool for AI Ethical Reflection: A Framework for Alignment

kenneth myers · 9 Feb 2024 18:40 UTC
6 points
12 comments · 3 min read · LW link

What’s ChatGPT’s Favorite Ice Cream Flavor? An Investigation Into Synthetic Respondents

Greg Robison · 9 Feb 2024 18:38 UTC
19 points
4 comments · 15 min read · LW link

OpenAI wants to raise 5-7 trillion

O O · 9 Feb 2024 16:15 UTC
13 points
29 comments · 1 min read · LW link
(decrypt.co)

[Question] Constituency-sized AI congress?

Nathan Helm-Burger · 9 Feb 2024 16:01 UTC
11 points
5 comments · 1 min read · LW link

One True Love

Zvi · 9 Feb 2024 15:10 UTC
33 points
7 comments · 10 min read · LW link
(thezvi.wordpress.com)

[Question] Executive function advice from people who are good at it?

TeaTieAndHat · 9 Feb 2024 10:11 UTC
7 points
1 comment · 1 min read · LW link

[Question] Do you want to make an AI Alignment song?

Kabir Kumar · 9 Feb 2024 8:22 UTC
4 points
0 comments · 1 min read · LW link

Skills I’d like my collaborators to have

Raemon · 9 Feb 2024 8:20 UTC
106 points
9 comments · 8 min read · LW link

Transfer learning and generalization-qua-capability in Babbage and Davinci (or, why division is better than Spanish)

RP and agg · 9 Feb 2024 7:00 UTC
50 points
6 comments · 3 min read · LW link

Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety

Ben Smith · 9 Feb 2024 6:40 UTC
22 points
0 comments · 1 min read · LW link
(www.nist.gov)

Running the Numbers on a Heat Pump

jefftk · 9 Feb 2024 3:00 UTC
30 points
12 comments · 4 min read · LW link
(www.jefftk.com)

[Question] How do high-trust societies form?

Shankar Sivarajan · 9 Feb 2024 1:11 UTC
22 points
17 comments · 1 min read · LW link

[Question] How do health systems work in adequate worlds?

mukashi · 9 Feb 2024 0:54 UTC
10 points
2 comments · 1 min read · LW link

Twin Cities ACX Meetup—February 2024

Timothy M. · 8 Feb 2024 23:26 UTC
1 point
2 comments · 1 min read · LW link

A review of “Don’t forget the boundary problem...”

jessicata · 8 Feb 2024 23:19 UTC
12 points
1 comment · 12 min read · LW link
(unstablerontology.substack.com)

aintelope project update

Gunnar_Zarncke · 8 Feb 2024 18:32 UTC
24 points
2 comments · 3 min read · LW link

Updatelessness doesn’t solve most problems

Martín Soto · 8 Feb 2024 17:30 UTC
130 points
44 comments · 12 min read · LW link

Predicting Alignment Award Winners Using ChatGPT 4

Shoshannah Tekofsky · 8 Feb 2024 14:38 UTC
16 points
2 comments · 11 min read · LW link

AI #50: The Most Dangerous Thing

Zvi · 8 Feb 2024 14:30 UTC
53 points
4 comments · 24 min read · LW link
(thezvi.wordpress.com)

How to develop a photographic memory 3/3

PhilosophicalSoul · 8 Feb 2024 9:22 UTC
6 points
2 comments · 18 min read · LW link

Believing In

AnnaSalamon · 8 Feb 2024 7:06 UTC
230 points
51 comments · 13 min read · LW link

Measuring pre-peer-review epistemic status

Jakub Smékal · 8 Feb 2024 5:09 UTC
1 point
0 comments · 2 min read · LW link

A Chess-GPT Linear Emergent World Representation

Adam Karvonen · 8 Feb 2024 4:25 UTC
105 points
14 comments · 7 min read · LW link
(adamkarvonen.github.io)

Domestic Production vs International Wealth Creation

100YearPants · 8 Feb 2024 4:25 UTC
1 point
0 comments · 1 min read · LW link

Conditional prediction markets are evidential, not causal

philh · 7 Feb 2024 21:52 UTC
55 points
10 comments · 2 min read · LW link

A Back-Of-The-Envelope Calculation On How Unlikely The Circumstantial Evidence Around Covid-19 Is

Roko · 7 Feb 2024 21:49 UTC
5 points
36 comments · 5 min read · LW link

Nitric oxide for covid and other viral infections

Elizabeth · 7 Feb 2024 21:30 UTC
39 points
6 comments · 6 min read · LW link
(acesounderglass.com)

Debating with More Persuasive LLMs Leads to More Truthful Answers

7 Feb 2024 21:28 UTC
88 points
14 comments · 9 min read · LW link
(arxiv.org)

[Question] Choosing a book on causality

martinkunev · 7 Feb 2024 21:16 UTC
4 points
3 comments · 1 min read · LW link

More Hyphenation

Arjun Panickssery · 7 Feb 2024 19:43 UTC
87 points
19 comments · 1 min read · LW link
(arjunpanickssery.substack.com)

Reading writing advice doesn’t make writing easier

Henry Sleight · 7 Feb 2024 19:14 UTC
17 points
0 comments · 5 min read · LW link
(open.substack.com)

[Question] What’s this 3rd secret directive of evolution called? (survive & spread & ___)

lemonhope · 7 Feb 2024 14:11 UTC
10 points
11 comments · 1 min read · LW link

Training of superintelligence is secretly adversarial

quetzal_rainbow · 7 Feb 2024 13:38 UTC
15 points
2 comments · 5 min read · LW link

The Math of Suspicious Coincidences

Roko · 7 Feb 2024 13:32 UTC
30 points
3 comments · 4 min read · LW link

[Question] How to deal with the sense of demotivation that comes from thinking about determinism?

SpectrumDT · 7 Feb 2024 10:53 UTC
13 points
71 comments · 1 min read · LW link

Quantum Darwinism, social constructs, and the scientific method

pchvykov · 7 Feb 2024 7:04 UTC
6 points
12 comments · 9 min read · LW link

Why I think it’s net harmful to do technical safety research at AGI labs

Remmelt · 7 Feb 2024 4:17 UTC
26 points
24 comments · 1 min read · LW link

story-based decision-making

bhauth · 7 Feb 2024 2:35 UTC
89 points
11 comments · 4 min read · LW link

Full Driving Engagement Optional

jefftk · 7 Feb 2024 2:30 UTC
14 points
0 comments · 1 min read · LW link
(www.jefftk.com)

How to train your own “Sleeper Agents”

evhub · 7 Feb 2024 0:31 UTC
91 points
11 comments · 2 min read · LW link

My guess at Conjecture’s vision: triggering a narrative bifurcation

Alexandre Variengien · 6 Feb 2024 19:10 UTC
75 points
12 comments · 16 min read · LW link