Against Almost Every Theory of Impact of Interpretability

Charbel-Raphaël · 17 Aug 2023 18:44 UTC
322 points
86 comments · 26 min read · LW link

Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research

8 Aug 2023 1:30 UTC
312 points
28 comments · 18 min read · LW link

Dear Self; we need to talk about ambition

Elizabeth · 27 Aug 2023 23:10 UTC
257 points
25 comments · 8 min read · LW link
(acesounderglass.com)

My current LK99 questions

Eliezer Yudkowsky · 1 Aug 2023 22:48 UTC
205 points
38 comments · 5 min read · LW link

Feedbackloop-first Rationality

Raemon · 7 Aug 2023 17:58 UTC
192 points
65 comments · 8 min read · LW link

Large Language Models will be Great for Censorship

Ethan Edwards · 21 Aug 2023 19:03 UTC
183 points
14 comments · 8 min read · LW link
(ethanedwards.substack.com)

OpenAI API base models are not sycophantic, at any size

nostalgebraist · 29 Aug 2023 0:58 UTC
182 points
20 comments · 2 min read · LW link
(colab.research.google.com)

A list of core AI safety problems and how I hope to solve them

davidad · 26 Aug 2023 15:12 UTC
165 points
29 comments · 5 min read · LW link

Password-locked models: a stress case for capabilities evaluation

Fabien Roger · 3 Aug 2023 14:53 UTC
156 points
14 comments · 6 min read · LW link

ARC Evals new report: Evaluating Language-Model Agents on Realistic Autonomous Tasks

Beth Barnes · 1 Aug 2023 18:30 UTC
153 points
12 comments · 5 min read · LW link
(evals.alignment.org)

The “public debate” about AI is confusing for the general public and for policymakers because it is a three-sided debate

Adam David Long · 1 Aug 2023 0:08 UTC
146 points
30 comments · 4 min read · LW link

The U.S. is becoming less stable

lc · 18 Aug 2023 21:13 UTC
146 points
68 comments · 2 min read · LW link

6 non-obvious mental health issues specific to AI safety

Igor Ivanov · 18 Aug 2023 15:46 UTC
145 points
24 comments · 4 min read · LW link

Responses to apparent rationalist confusions about game / decision theory

Anthony DiGiovanni · 30 Aug 2023 22:02 UTC
142 points
14 comments · 12 min read · LW link

Inflection.ai is a major AGI lab

nikola · 9 Aug 2023 1:05 UTC
137 points
13 comments · 2 min read · LW link

Ten Thousand Years of Solitude

agp · 15 Aug 2023 17:45 UTC
136 points
18 comments · 4 min read · LW link
(www.discovermagazine.com)

Book Launch: “The Carving of Reality,” Best of LessWrong vol. III

Raemon · 16 Aug 2023 23:52 UTC
131 points
22 comments · 5 min read · LW link

Invulnerable Incomplete Preferences: A Formal Statement

SCP · 30 Aug 2023 21:59 UTC
131 points
38 comments · 35 min read · LW link

Introducing the Center for AI Policy (& we’re hiring!)

Thomas Larsen · 28 Aug 2023 21:17 UTC
123 points
50 comments · 2 min read · LW link
(www.aipolicy.us)

Report on Frontier Model Training

YafahEdelman · 30 Aug 2023 20:02 UTC
122 points
21 comments · 21 min read · LW link
(docs.google.com)

When discussing AI risks, talk about capabilities, not intelligence

Vika · 11 Aug 2023 13:38 UTC
116 points
7 comments · 3 min read · LW link
(vkrakovna.wordpress.com)

Assume Bad Faith

Zack_M_Davis · 25 Aug 2023 17:36 UTC
112 points
51 comments · 7 min read · LW link

Summary of and Thoughts on the Hotz/Yudkowsky Debate

Zvi · 16 Aug 2023 16:50 UTC
105 points
47 comments · 9 min read · LW link
(thezvi.wordpress.com)

Biosecurity Culture, Computer Security Culture

jefftk · 30 Aug 2023 16:40 UTC
103 points
10 comments · 2 min read · LW link
(www.jefftk.com)

A Theory of Laughter

Steven Byrnes · 23 Aug 2023 15:05 UTC
102 points
14 comments · 22 min read · LW link

[Question] Exercise: Solve “Thinking Physics”

Raemon · 1 Aug 2023 0:44 UTC
97 points
25 comments · 5 min read · LW link

What’s A “Market”?

johnswentworth · 8 Aug 2023 23:29 UTC
94 points
16 comments · 10 min read · LW link

Biological Anchors: The Trick that Might or Might Not Work

Scott Alexander · 12 Aug 2023 0:53 UTC
91 points
3 comments · 33 min read · LW link
(astralcodexten.substack.com)

We Should Prepare for a Larger Representation of Academia in AI Safety

Leon Lang · 13 Aug 2023 18:03 UTC
90 points
13 comments · 5 min read · LW link

LTFF and EAIF are unusually funding-constrained right now

30 Aug 2023 1:03 UTC
90 points
24 comments · 15 min read · LW link
(forum.effectivealtruism.org)

Problems with Robin Hanson’s Quillette Article On AI

DaemonicSigil · 6 Aug 2023 22:13 UTC
89 points
33 comments · 8 min read · LW link

Dating Roundup #1: This is Why You’re Single

Zvi · 29 Aug 2023 12:50 UTC
86 points
28 comments · 38 min read · LW link
(thezvi.wordpress.com)

My checklist for publishing a blog post

Steven Byrnes · 15 Aug 2023 15:04 UTC
84 points
6 comments · 3 min read · LW link

Decomposing independent generalizations in neural networks via Hessian analysis

14 Aug 2023 17:04 UTC
83 points
4 comments · 1 min read · LW link

AI pause/governance advocacy might be net-negative, especially without a focus on explaining x-risk

Mikhail Samin · 27 Aug 2023 23:05 UTC
82 points
9 comments · 6 min read · LW link

Stepping down as moderator on LW

Kaj_Sotala · 14 Aug 2023 10:46 UTC
82 points
1 comment · 1 min read · LW link

The Low-Hanging Fruit Prior and sloped valleys in the loss landscape

23 Aug 2023 21:12 UTC
82 points
1 comment · 13 min read · LW link

Long-Term Future Fund: April 2023 grant recommendations

2 Aug 2023 7:54 UTC
81 points
3 comments · 50 min read · LW link

The Economics of the Asteroid Deflection Problem (Dominant Assurance Contracts)

moyamo · 29 Aug 2023 18:28 UTC
78 points
71 comments · 15 min read · LW link

The God of Humanity, and the God of the Robot Utilitarians

Raemon · 24 Aug 2023 8:27 UTC
77 points
12 comments · 2 min read · LW link

Digital brains beat biological ones because diffusion is too slow

GeneSmith · 26 Aug 2023 2:22 UTC
77 points
21 comments · 5 min read · LW link

An Interpretability Illusion for Activation Patching of Arbitrary Subspaces

29 Aug 2023 1:04 UTC
77 points
4 comments · 1 min read · LW link

Computational Thread Art

CallumMcDougall · 6 Aug 2023 21:42 UTC
75 points
2 comments · 6 min read · LW link

A plea for more funding shortfall transparency

porby · 7 Aug 2023 21:33 UTC
73 points
4 comments · 2 min read · LW link

AI Forecasting: Two Years In

jsteinhardt · 19 Aug 2023 23:40 UTC
72 points
15 comments · 11 min read · LW link
(bounded-regret.ghost.io)

A Proof of Löb’s Theorem using Computability Theory

jessicata · 16 Aug 2023 18:57 UTC
71 points
0 comments · 17 min read · LW link
(unstableontology.com)

3 levels of threat obfuscation

HoldenKarnofsky · 2 Aug 2023 14:58 UTC
69 points
14 comments · 7 min read · LW link

When Omnipotence is Not Enough

lsusr · 25 Aug 2023 19:50 UTC
69 points
3 comments · 2 min read · LW link

Modulating sycophancy in an RLHF model via activation steering

Nina Panickssery · 9 Aug 2023 7:06 UTC
69 points
20 comments · 12 min read · LW link

Red-teaming language models via activation engineering

Nina Panickssery · 26 Aug 2023 5:52 UTC
69 points
6 comments · 9 min read · LW link