[Question] What’s up with the font size in the Markdown text editor?

Ege Erdil · May 14, 2022, 9:12 PM
7 points
1 comment · 1 min read · LW link

[Link post] Promising Paths to Alignment—Connor Leahy | Talk

frances_lorenz · May 14, 2022, 4:01 PM
34 points
0 comments · 1 min read · LW link

Inequality is inseparable from markets

NathanBarnard · May 14, 2022, 1:39 PM
22 points
7 comments · 3 min read · LW link

Predicting the Elections with Deep Learning—Part 1 - Results

Quentin Chenevier · May 14, 2022, 12:54 PM
0 points
0 comments · 1 min read · LW link

Clarifying the confusion around inner alignment

Rauno Arike · May 13, 2022, 11:05 PM
31 points
0 comments · 11 min read · LW link

Costs and benefits of amniocentesis for normal pregnancies

braces · May 13, 2022, 10:47 PM
13 points
4 comments · 3 min read · LW link

Frame for Take-Off Speeds to inform compute governance & scaling alignment

Logan Riggs · May 13, 2022, 10:23 PM
15 points
2 comments · 2 min read · LW link

Alignment as Constraints

Logan Riggs · May 13, 2022, 10:07 PM
10 points
0 comments · 2 min read · LW link

How close to nuclear war did we get over Cuba?

NathanBarnard · May 13, 2022, 7:58 PM
13 points
0 comments · 10 min read · LW link

Against Time in Agent Models

johnswentworth · May 13, 2022, 7:55 PM
62 points
13 comments · 3 min read · LW link

Agency As a Natural Abstraction

Thane Ruthenis · May 13, 2022, 6:02 PM
55 points
9 comments · 13 min read · LW link

Fermi estimation of the impact you might have working on AI safety

Fabien Roger · May 13, 2022, 5:49 PM
10 points
0 comments · 1 min read · LW link

“Tech company singularities”, and steering them to reduce x-risk

Andrew_Critch · May 13, 2022, 5:24 PM
75 points
11 comments · 4 min read · LW link

An observation about Hubinger et al.’s framework for learned optimization

carboniferous_umbraculum · May 13, 2022, 4:20 PM
34 points
9 comments · 8 min read · LW link

[Question] The Economics of a New Energy Source

casualphysicsenjoyer · May 13, 2022, 2:08 PM
2 points
13 comments · 1 min read · LW link

[Question] Still possible to change username?

gabrielrecc · May 13, 2022, 1:41 PM
7 points
4 comments · 1 min read · LW link

[Rough notes, BAIS] Human values and cyclical preferences

May 13, 2022, 1:28 PM
5 points
0 comments · 4 min read · LW link

[Question] Can moderators fix old sequences posts?

EniScien · May 13, 2022, 12:30 PM
10 points
1 comment · 1 min read · LW link

DeepMind is hiring for the Scalable Alignment and Alignment Teams

May 13, 2022, 12:17 PM
150 points
34 comments · 9 min read · LW link

Thoughts on AI Safety Camp

Charlie Steiner · May 13, 2022, 7:16 AM
33 points
8 comments · 7 min read · LW link

Deferring

owencb · May 12, 2022, 11:56 PM
18 points
2 comments · 11 min read · LW link

RLHF

Ansh Radhakrishnan · May 12, 2022, 9:18 PM
18 points
5 comments · 5 min read · LW link

[Question] What to do when starting a business in an imminent-AGI world?

ryan_b · May 12, 2022, 9:07 PM
25 points
7 comments · 1 min read · LW link

Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios

Evan R. Murphy · May 12, 2022, 8:01 PM
58 points
0 comments · 59 min read · LW link

Introduction to the sequence: Interpretability Research for the Most Important Century

Evan R. Murphy · May 12, 2022, 7:59 PM
16 points
0 comments · 8 min read · LW link

A tentative dialogue with a Friendly-boxed-super-AGI on brain uploads

Ramiro P. · May 12, 2022, 7:40 PM
1 point
12 comments · 4 min read · LW link

The Last Paperclip

Logan Zoellner · May 12, 2022, 7:25 PM
63 points
15 comments · 18 min read · LW link

Deepmind’s Gato: Generalist Agent

Daniel Kokotajlo · May 12, 2022, 4:01 PM
165 points
62 comments · 1 min read · LW link

“A Generalist Agent”: New DeepMind Publication

1a3orn · May 12, 2022, 3:30 PM
79 points
43 comments · 1 min read · LW link

Covid 5/12/22: Other Priorities

Zvi · May 12, 2022, 1:30 PM
31 points
4 comments · 15 min read · LW link
(thezvi.wordpress.com)

[Question] How would public media outlets need to be governed to cover all political views?

ChristianKl · May 12, 2022, 12:55 PM
13 points
14 comments · 1 min read · LW link

[Question] What’s keeping concerned capabilities gain researchers from leaving the field?

sovran · May 12, 2022, 12:16 PM
19 points
4 comments · 1 min read · LW link

Positive outcomes under an unaligned AGI takeover

Yitz · May 12, 2022, 7:45 AM
19 points
10 comments · 3 min read · LW link

[Question] What are your recommendations for technical AI alignment podcasts?

Evan_Gaensbauer · May 11, 2022, 9:52 PM
5 points
4 comments · 1 min read · LW link

Gracefully correcting uncalibrated shame

AF2022 · May 11, 2022, 7:51 PM
−31 points
34 comments · 4 min read · LW link

[Intro to brain-like-AGI safety] 14. Controlled AGI

Steven Byrnes · May 11, 2022, 1:17 PM
45 points
25 comments · 20 min read · LW link

ProjectLawful.com: Eliezer’s latest story, past 1M words

Eliezer Yudkowsky · May 11, 2022, 6:18 AM
234 points
112 comments · 1 min read · LW link · 4 reviews

An Inside View of AI Alignment

Ansh Radhakrishnan · May 11, 2022, 2:16 AM
32 points
2 comments · 2 min read · LW link

Fighting in various places for a really long time

KatjaGrace · May 11, 2022, 1:50 AM
36 points
12 comments · 4 min read · LW link
(worldspiritsockpuppet.com)

Stuff I might do if I had covid

KatjaGrace · May 11, 2022, 12:00 AM
39 points
9 comments · 1 min read · LW link
(worldspiritsockpuppet.com)

Crises Don’t Need Your Software

GabrielExists · May 10, 2022, 9:06 PM
59 points
18 comments · 6 min read · LW link

Ceiling Fan Air Filter

jefftk · May 10, 2022, 2:20 PM
18 points
9 comments · 1 min read · LW link
(www.jefftk.com)

The limits of AI safety via debate

Marius Hobbhahn · May 10, 2022, 1:33 PM
35 points
8 comments · 10 min read · LW link

Examining Armstrong’s category of generalized models

Morgan_Rogers · May 10, 2022, 9:07 AM
14 points
0 comments · 7 min read · LW link

Dath Ilani Rule of Law

David Udell · May 10, 2022, 6:17 AM
24 points
25 comments · 4 min read · LW link

AI safety should be made more accessible using non text-based media

Massimog · May 10, 2022, 3:14 AM
2 points
4 comments · 4 min read · LW link

LessWrong Now Has Dark Mode

jimrandomh · May 10, 2022, 1:21 AM
135 points
31 comments · 1 min read · LW link

Conditions for mathematical equivalence of Stochastic Gradient Descent and Natural Selection

Oliver Sourbut · May 9, 2022, 9:38 PM
70 points
19 comments · 8 min read · LW link · 1 review
(www.oliversourbut.net)

AI Alignment YouTube Playlists

May 9, 2022, 9:33 PM
30 points
4 comments · 1 min read · LW link

When is AI safety research harmful?

NathanBarnard · May 9, 2022, 6:19 PM
2 points
0 comments · 8 min read · LW link