Notes from the Qatar Center for Global Banking and Finance 3rd Annual Conference

PixelatedPenguin · 7 Jul 2023 23:48 UTC
2 points
0 comments · 1 min read · LW link

Introducing bayescalc.io

Adele Lopez · 7 Jul 2023 16:11 UTC
114 points
29 comments · 1 min read · LW link
(bayescalc.io)

Meetup Tip: Ask Attendees To Explain It

Screwtape · 7 Jul 2023 16:08 UTC
10 points
0 comments · 4 min read · LW link

Interpreting Modular Addition in MLPs

Bart Bussmann · 7 Jul 2023 9:22 UTC
19 points
0 comments · 6 min read · LW link

Internal independent review for language model agent alignment

Seth Herd · 7 Jul 2023 6:54 UTC
55 points
30 comments · 11 min read · LW link

[Question] Can LessWrong provide me with something I find obviously highly useful to my own practical life?

agrippa · 7 Jul 2023 3:08 UTC
32 points
4 comments · 1 min read · LW link

ask me about technology

bhauth · 7 Jul 2023 2:03 UTC
23 points
42 comments · 1 min read · LW link

Apparently, of the 195 Million the DoD allocated in University Research Funding Awards in 2022, more than half of them concerned AI or compute hardware research

mako yass · 7 Jul 2023 1:20 UTC
41 points
5 comments · 2 min read · LW link
(www.defense.gov)

What are the best non-LW places to read on alignment progress?

Raemon · 7 Jul 2023 0:57 UTC
50 points
14 comments · 1 min read · LW link

Two paths to win the AGI transition

Nathan Helm-Burger · 6 Jul 2023 21:59 UTC
11 points
8 comments · 4 min read · LW link

Empirical Evidence Against “The Longest Training Run”

NickGabs · 6 Jul 2023 18:32 UTC
24 points
0 comments · 14 min read · LW link

Progress Studies Fellowship looking for members

jay ram · 6 Jul 2023 17:41 UTC
3 points
0 comments · 1 min read · LW link

BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism?

Peter Berggren · 6 Jul 2023 17:32 UTC
17 points
6 comments · 2 min read · LW link

Layering and Technical Debt in the Global Wayfinding Model

herschel · 6 Jul 2023 17:30 UTC
14 points
0 comments · 3 min read · LW link

Localizing goal misgeneralization in a maze-solving policy network

jan betley · 6 Jul 2023 16:21 UTC
37 points
2 comments · 7 min read · LW link

Jesse Hoogland on Developmental Interpretability and Singular Learning Theory

Michaël Trazzi · 6 Jul 2023 15:46 UTC
42 points
2 comments · 4 min read · LW link
(theinsideview.ai)

Progress links and tweets, 2023-07-06: Terraformer Mark One, Israeli water management, & more

jasoncrawford · 6 Jul 2023 15:35 UTC
18 points
4 comments · 2 min read · LW link
(rootsofprogress.org)

Towards Non-Panopticon AI Alignment

Logan Zoellner · 6 Jul 2023 15:29 UTC
7 points
0 comments · 3 min read · LW link

A Defense of Work on Mathematical AI Safety

Davidmanheim · 6 Jul 2023 14:15 UTC
28 points
13 comments · 3 min read · LW link
(forum.effectivealtruism.org)

Understanding the two most common mental health problems in the world

spencerg · 6 Jul 2023 14:06 UTC
17 points
0 comments · 1 min read · LW link

Announcing the EA Archive

Aaron Bergman · 6 Jul 2023 13:49 UTC
13 points
2 comments · 1 min read · LW link

Agency begets agency

Richard_Ngo · 6 Jul 2023 13:08 UTC
57 points
1 comment · 4 min read · LW link

AI #19: Hofstadter, Sutskever, Leike

Zvi · 6 Jul 2023 12:50 UTC
60 points
16 comments · 40 min read · LW link
(thezvi.wordpress.com)

Do you feel that AGI Alignment could be achieved in a Type 0 civilization?

Super AGI · 6 Jul 2023 4:52 UTC
−2 points
1 comment · 1 min read · LW link

Open Thread—July 2023

Ruby · 6 Jul 2023 4:50 UTC
11 points
35 comments · 1 min read · LW link

AI Intermediation

jefftk · 6 Jul 2023 1:50 UTC
12 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Announcing Manifund Regrants

Austin Chen · 5 Jul 2023 19:42 UTC
74 points
8 comments · 1 min read · LW link

Infra-Bayesian Logic

5 Jul 2023 19:16 UTC
15 points
2 comments · 1 min read · LW link

[Linkpost] Introducing Superalignment

beren · 5 Jul 2023 18:23 UTC
175 points
69 comments · 1 min read · LW link
(openai.com)

If you wish to make an apple pie, you must first become dictator of the universe

jasoncrawford · 5 Jul 2023 18:14 UTC
27 points
9 comments · 13 min read · LW link
(rootsofprogress.org)

An AGI kill switch with defined security properties

Peterpiper · 5 Jul 2023 17:40 UTC
−5 points
6 comments · 1 min read · LW link

The risk-reward tradeoff of interpretability research

5 Jul 2023 17:05 UTC
15 points
1 comment · 6 min read · LW link

(tentatively) Found 600+ Monosemantic Features in a Small LM Using Sparse Autoencoders

Logan Riggs · 5 Jul 2023 16:49 UTC
60 points
1 comment · 7 min read · LW link

[Question] What did AI Safety’s specific funding of AGI R&D labs lead to?

Remmelt · 5 Jul 2023 15:51 UTC
3 points
0 comments · 1 min read · LW link

AISN #13: An interdisciplinary perspective on AI proxy failures, new competitors to ChatGPT, and prompting language models to misbehave

Dan H · 5 Jul 2023 15:33 UTC
13 points
0 comments · 1 min read · LW link

Exploring Functional Decision Theory (FDT) and a modified version (ModFDT)

MiguelDev · 5 Jul 2023 14:06 UTC
11 points
11 comments · 15 min read · LW link

Optimized for Something other than Winning or: How Cricket Resists Moloch and Goodhart’s Law

A.H. · 5 Jul 2023 12:33 UTC
53 points
26 comments · 4 min read · LW link

Puffer-pope reality check

Neil · 5 Jul 2023 9:27 UTC
20 points
2 comments · 1 min read · LW link

Final Lightspeed Grants coworking/office hours before the application deadline

habryka · 5 Jul 2023 6:03 UTC
13 points
2 comments · 1 min read · LW link

MXR Talkbox Cap?

jefftk · 5 Jul 2023 1:50 UTC
9 points
0 comments · 1 min read · LW link
(www.jefftk.com)

“Reification”

herschel · 5 Jul 2023 0:53 UTC
11 points
4 comments · 2 min read · LW link

Dominant Assurance Contract Experiment #2: Berkeley House Dinners

Arjun Panickssery · 5 Jul 2023 0:13 UTC
51 points
8 comments · 1 min read · LW link
(arjunpanickssery.substack.com)

Three camps in AI x-risk discussions: My personal very oversimplified overview

Aryeh Englander · 4 Jul 2023 20:42 UTC
21 points
0 comments · 1 min read · LW link

Six (and a half) intuitions for SVD

CallumMcDougall · 4 Jul 2023 19:23 UTC
70 points
1 comment · 1 min read · LW link

Animal Weapons: Lessons for Humans in the Age of X-Risk

Damin Curtis · 4 Jul 2023 18:14 UTC
3 points
0 comments · 10 min read · LW link

Apocalypse Prepping—Concise SHTF guide to prepare for AGI doomsday

prepper · 4 Jul 2023 17:41 UTC
−7 points
9 comments · 1 min read · LW link
(prepper.i2phides.me)

Ways I Expect AI Regulation To Increase Extinction Risk

1a3orn · 4 Jul 2023 17:32 UTC
227 points
32 comments · 7 min read · LW link

AI labs’ statements on governance

Zach Stein-Perlman · 4 Jul 2023 16:30 UTC
30 points
0 comments · 36 min read · LW link

AIs teams will probably be more superintelligent than individual AIs

Robert_AIZI · 4 Jul 2023 14:06 UTC
3 points
1 comment · 2 min read · LW link
(aizi.substack.com)

What I Think About When I Think About History

Jacob G-W · 4 Jul 2023 14:02 UTC
2 points
4 comments · 3 min read · LW link
(g-w1.github.io)