The national security dimension of OpenAI’s leadership struggle

Mitchell_Porter · 20 Nov 2023 23:57 UTC
3 points
3 comments · 2 min read · LW link

[Question] What will you think about the Current Thing in a year?

mike_hawke · 20 Nov 2023 22:39 UTC
21 points
0 comments · 2 min read · LW link

Metaculus Introduces New Forecast Scores, New Leaderboard & Medals

ChristianWilliams · 20 Nov 2023 20:33 UTC
15 points
2 comments · 1 min read · LW link
(www.metaculus.com)

[Question] “Useless Box” AGI

Cago · 20 Nov 2023 19:07 UTC
1 point
2 comments · 1 min read · LW link

[Question] Advice on choosing an alcohol rehab center?

Slingshot9271 · 20 Nov 2023 18:46 UTC
2 points
1 comment · 1 min read · LW link

Agent Boundaries Aren’t Markov Blankets. [Unless they’re non-causal; see comments.]

abramdemski · 20 Nov 2023 18:23 UTC
82 points
11 comments · 2 min read · LW link

Navigating emotions in an uncertain & confusing world

Akash · 20 Nov 2023 18:16 UTC
42 points
1 comment · 4 min read · LW link

OpenAI: Facts from a Weekend

Zvi · 20 Nov 2023 15:30 UTC
271 points
165 comments · 9 min read · LW link
(thezvi.wordpress.com)

OpenAI Staff (including Sutskever) Threaten to Quit Unless Board Resigns

Seth Herd · 20 Nov 2023 14:20 UTC
52 points
28 comments · 1 min read · LW link
(www.wired.com)

Ilya: The AI scientist shaping the world

David Varga · 20 Nov 2023 13:09 UTC
11 points
0 comments · 4 min read · LW link

[Linkpost] OpenAI’s Interim CEO’s views on AI x-risk

Bogdan Ionut Cirstea · 20 Nov 2023 13:00 UTC
9 points
0 comments · 1 min read · LW link

A Girardian interpretation of the Altman affair, it’s on my to-do list

Bill Benzon · 20 Nov 2023 12:21 UTC
3 points
0 comments · 1 min read · LW link

[Question] How did you integrate voice-to-text AI into your workflow?

ChristianKl · 20 Nov 2023 12:01 UTC
28 points
12 comments · 1 min read · LW link

Short film adaptation of the essay “The Simple Truth” [eng sub]

bayesyatina · 20 Nov 2023 11:42 UTC
15 points
4 comments · 1 min read · LW link

For Civilization and Against Niceness

Gabriel Alfour · 20 Nov 2023 10:56 UTC
46 points
14 comments · 8 min read · LW link
(cognition.cafe)

“Optimists always win!” is the biggest survivorship bias

Yunfan Ye · 20 Nov 2023 8:53 UTC
8 points
0 comments · 2 min read · LW link

Sam Altman, Greg Brockman and others from OpenAI join Microsoft

Ozyrus · 20 Nov 2023 8:23 UTC
58 points
15 comments · 1 min read · LW link
(twitter.com)

Emmett Shear to be interim CEO of OpenAI

Max H · 20 Nov 2023 5:40 UTC
21 points
5 comments · 1 min read · LW link
(www.theverge.com)

[Question] Where can I learn about algorithmic transformation of AI prompts?

denyeverywhere · 20 Nov 2023 4:35 UTC
0 points
1 comment · 1 min read · LW link

Extreme website and app blocking

tbenthompson · 20 Nov 2023 3:53 UTC
7 points
0 comments · 4 min read · LW link
(tbenthompson.com)

Am I going insane or is the quality of education at top universities shockingly low?

ChrisRumanov · 20 Nov 2023 3:53 UTC
26 points
30 comments · 2 min read · LW link

Residential Demolition Tooling

jefftk · 20 Nov 2023 3:20 UTC
16 points
1 comment · 3 min read · LW link
(www.jefftk.com)

Aaron Silverbook on anti-cavity bacteria

DanielFilan · 20 Nov 2023 3:06 UTC
31 points
3 comments · 1 min read · LW link
(youtu.be)

Cheap Model → Big Model design

Maxwell Peterson · 19 Nov 2023 22:50 UTC
15 points
2 comments · 7 min read · LW link

Human-like systematic generalization through a meta-learning neural network

Burny · 19 Nov 2023 21:41 UTC
7 points
0 comments · 2 min read · LW link
(twitter.com)

“Benevolent [ie, Ruler] AI is a bad idea” and a suggested alternative

the gears to ascension · 19 Nov 2023 20:22 UTC
22 points
11 comments · 1 min read · LW link
(www.palladiummag.com)

Alignment is Hard: An Uncomputable Alignment Problem

Alexander Bistagne · 19 Nov 2023 19:38 UTC
−5 points
4 comments · 1 min read · LW link
(github.com)

New paper shows truthfulness & instruction-following don’t generalize by default

joshc · 19 Nov 2023 19:27 UTC
60 points
0 comments · 4 min read · LW link

In favour of a sovereign state of Gaza

Yair Halberstadt · 19 Nov 2023 16:08 UTC
8 points
3 comments · 4 min read · LW link

My Criticism of Singular Learning Theory

Joar Skalse · 19 Nov 2023 15:19 UTC
83 points
56 comments · 12 min read · LW link

“Why can’t you just turn it off?”

Roko · 19 Nov 2023 14:46 UTC
48 points
25 comments · 1 min read · LW link

Spaciousness In Partner Dance: A Naturalism Demo

LoganStrohl · 19 Nov 2023 7:00 UTC
78 points
6 comments · 19 min read · LW link · 1 review

Altman firing retaliation incoming?

trevor · 19 Nov 2023 0:10 UTC
50 points
23 comments · 5 min read · LW link

When Will AIs Develop Long-Term Planning?

PeterMcCluskey · 19 Nov 2023 0:08 UTC
18 points
5 comments · 4 min read · LW link
(bayesianinvestor.com)

Killswitch

Junio · 18 Nov 2023 22:53 UTC
2 points
0 comments · 3 min read · LW link

Superalignment

Douglas_Reay · 18 Nov 2023 22:37 UTC
−4 points
4 comments · 1 min read · LW link
(openai.com)

Predictable Defect-Cooperate?

quetzal_rainbow · 18 Nov 2023 15:38 UTC
7 points
1 comment · 2 min read · LW link

I think I’m just confused. Once a model exists, how do you “red-team” it to see whether it’s safe. Isn’t it already dangerous?

FTPickle · 18 Nov 2023 14:16 UTC
21 points
13 comments · 1 min read · LW link

AI Safety Camp 2024

Linda Linsefors · 18 Nov 2023 10:37 UTC
15 points
1 comment · 4 min read · LW link
(aisafety.camp)

Post-EAG Music Party

jefftk · 18 Nov 2023 3:00 UTC
14 points
2 comments · 2 min read · LW link
(www.jefftk.com)

Letter to a Sonoma County Jail Cell

MadHatter · 18 Nov 2023 2:24 UTC
11 points
1 comment · 1 min read · LW link
(open.substack.com)

1. A Sense of Fairness: Deconfusing Ethics

RogerDearnaley · 17 Nov 2023 20:55 UTC
16 points
8 comments · 15 min read · LW link

Sam Altman fired from OpenAI

LawrenceC · 17 Nov 2023 20:42 UTC
192 points
75 comments · 1 min read · LW link
(openai.com)

On the lethality of biased human reward ratings

17 Nov 2023 18:59 UTC
48 points
10 comments · 37 min read · LW link

Coup probes: Catching catastrophes with probes trained off-policy

Fabien Roger · 17 Nov 2023 17:58 UTC
85 points
9 comments · 11 min read · LW link · 1 review

On Lies and Liars

Gabriel Alfour · 17 Nov 2023 17:13 UTC
33 points
4 comments · 14 min read · LW link
(cognition.cafe)

Classifying representations of sparse autoencoders (SAEs)

Annah · 17 Nov 2023 13:54 UTC
15 points
6 comments · 2 min read · LW link

R&D is a Huge Externality, So Why Do Markets Do So Much of it?

Maxwell Tabarrok · 17 Nov 2023 13:14 UTC
15 points
14 comments · 3 min read · LW link
(maximumprogress.substack.com)

On excluding dangerous information from training

ShayBenMoshe · 17 Nov 2023 11:14 UTC
23 points
5 comments · 3 min read · LW link

The dangers of reproducing while old

garymm · 17 Nov 2023 5:55 UTC
23 points
6 comments · 1 min read · LW link
(www.garymm.org)