linkpost: neuro-symbolic hybrid ai

Nathan Helm-Burger · Oct 6, 2022, 9:52 PM
17 points
0 comments · 1 min read · LW link
(youtu.be)

So, geez there’s a lot of AI content these days

Raemon · Oct 6, 2022, 9:32 PM
258 points
140 comments · 6 min read · LW link

Analysing a 2036 Takeover Scenario

ukc10014 · Oct 6, 2022, 8:48 PM
9 points
2 comments · 27 min read · LW link

Covid 10/6/22: Overreactions Aplenty

Zvi · Oct 6, 2022, 7:10 PM
37 points
18 comments · 31 min read · LW link
(thezvi.wordpress.com)

A shot at the diamond-alignment problem

TurnTrout · Oct 6, 2022, 6:29 PM
95 points
67 comments · 15 min read · LW link

The probability that Artificial General Intelligence will be developed by 2043 is extremely low.

cveres · Oct 6, 2022, 6:05 PM
−13 points
8 comments · 1 min read · LW link

American invention from the “heroic age” to the system-building era

jasoncrawford · Oct 6, 2022, 5:19 PM
13 points
1 comment · 10 min read · LW link
(rootsofprogress.org)

More Recent Progress in the Theory of Neural Networks

jylin04 · Oct 6, 2022, 4:57 PM
82 points
6 comments · 4 min read · LW link

Research Deprioritizing External Communication

jefftk · Oct 6, 2022, 12:20 PM
34 points
3 comments · 8 min read · LW link
(www.jefftk.com)

Cape Town ACX/Rationality Meetup

Jordan Pieters · Oct 6, 2022, 10:39 AM
1 point
0 comments · 1 min read · LW link

Warning Shots Probably Wouldn’t Change The Picture Much

So8res · Oct 6, 2022, 5:15 AM
126 points
42 comments · 2 min read · LW link

Notes on Notes on the Synthesis of Form

Vaniver · Oct 6, 2022, 2:36 AM
24 points
0 comments · 6 min read · LW link

Against Arguments For Exploitation

blackstampede · Oct 6, 2022, 1:58 AM
18 points
8 comments · 7 min read · LW link

AI Timelines via Cumulative Optimization Power: Less Long, More Short

jacob_cannell · Oct 6, 2022, 12:21 AM
139 points
33 comments · 6 min read · LW link

[Question] Do uncertainty/planning costs make convex hulls unrealistic?

eapi · Oct 6, 2022, 12:10 AM
16 points
6 comments · 1 min read · LW link

Generative, Episodic Objectives for Safe AI

Michael Glass · Oct 5, 2022, 11:18 PM
11 points
3 comments · 8 min read · LW link

Dependency Tree For The Development Of Plate Tectonics

Elizabeth · Oct 5, 2022, 10:40 PM
41 points
3 comments · 4 min read · LW link
(acesounderglass.com)

[Question] How does anthropic reasoning and illusionism/eliminativism interact?

Shiroe · Oct 5, 2022, 10:31 PM
5 points
18 comments · 1 min read · LW link

[Question] Finding Great Tutors

Ulisse Mini · Oct 5, 2022, 10:08 PM
27 points
5 comments · 1 min read · LW link

Progress links and tweets, 2022-10-05

jasoncrawford · Oct 5, 2022, 7:24 PM
9 points
1 comment · 1 min read · LW link
(rootsofprogress.org)

A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox

Henrik Karlsson · Oct 5, 2022, 7:07 PM
89 points
12 comments · 11 min read · LW link
(escapingflatland.substack.com)

Neural Tangent Kernel Distillation

Oct 5, 2022, 6:11 PM
76 points
20 comments · 8 min read · LW link

Tracking Compute Stocks and Flows: Case Studies?

Cullen · Oct 5, 2022, 5:57 PM
11 points
5 comments · 1 min read · LW link

[Linkpost] “Blueprint for an AI Bill of Rights”—Office of Science and Technology Policy, USA (2022)

Fer32dwt34r3dfsz · Oct 5, 2022, 4:42 PM
9 points
4 comments · 2 min read · LW link
(www.whitehouse.gov)

Paper: Discovering novel algorithms with AlphaTensor [Deepmind]

LawrenceC · Oct 5, 2022, 4:20 PM
82 points
18 comments · 1 min read · LW link
(www.deepmind.com)

Reflection Mechanisms as an Alignment target: A follow-up survey

Oct 5, 2022, 2:03 PM
15 points
2 comments · 7 min read · LW link

Charitable Reads of Anti-AGI-X-Risk Arguments, Part 1

sstich · Oct 5, 2022, 5:03 AM
3 points
4 comments · 3 min read · LW link

Sleep Training

jefftk · Oct 5, 2022, 2:10 AM
36 points
4 comments · 2 min read · LW link
(www.jefftk.com)

Looping

Jarred Filmer · Oct 5, 2022, 1:47 AM
56 points
6 comments · 2 min read · LW link

How are you dealing with ontology identification?

Erik Jenner · Oct 4, 2022, 11:28 PM
34 points
10 comments · 3 min read · LW link

Smoke without fire is scary

Adam Jermyn · Oct 4, 2022, 9:08 PM
52 points
22 comments · 4 min read · LW link

Deprecated: Some humans are fitness maximizers

Shoshannah Tekofsky · Oct 4, 2022, 7:38 PM
6 points
22 comments · 6 min read · LW link

Will you let your kid play football?

5hout · Oct 4, 2022, 5:48 PM
14 points
12 comments · 2 min read · LW link

Quick notes on “mirror neurons”

Steven Byrnes · Oct 4, 2022, 5:39 PM
39 points
2 comments · 2 min read · LW link

Feature request: Filter by read/upvoted

Nathan Young · Oct 4, 2022, 5:17 PM
16 points
5 comments · 1 min read · LW link

Layers Of Mind

PeteG · Oct 4, 2022, 4:52 PM
−8 points
4 comments · 2 min read · LW link

[Question] Does Google still hire people via their foobar challenge?

Algon · Oct 4, 2022, 3:39 PM
11 points
7 comments · 1 min read · LW link

Russia will do a nuclear test

sanxiyn · Oct 4, 2022, 2:59 PM
3 points
7 comments · 1 min read · LW link

Paper+Summary: OMNIGROK: GROKKING BEYOND ALGORITHMIC DATA

Marius Hobbhahn · Oct 4, 2022, 7:22 AM
46 points
11 comments · 1 min read · LW link
(arxiv.org)

Frontline of AGI Alignment

SD Marlow · Oct 4, 2022, 3:47 AM
−10 points
0 comments · 1 min read · LW link
(robothouse.substack.com)

Adversarial vs Collaborative Contexts

jefftk · Oct 4, 2022, 2:40 AM
31 points
4 comments · 2 min read · LW link
(www.jefftk.com)

Humans aren’t fitness maximizers

So8res · Oct 4, 2022, 1:31 AM
50 points
46 comments · 5 min read · LW link

Self-defeating conspiracy theorists and their theories

M. Y. Zuo · Oct 4, 2022, 12:48 AM
5 points
12 comments · 3 min read · LW link

No free lunch theorem is irrelevant

Catnee · Oct 4, 2022, 12:21 AM
18 points
7 comments · 1 min read · LW link

The Village and the River Monsters… Or: Less Fighting, More Brainstorming

ExCeph · Oct 3, 2022, 11:01 PM
7 points
29 comments · 8 min read · LW link
(ginnungagapfoundation.wordpress.com)

Recall and Regurgitation in GPT2

Megan Kinniment · Oct 3, 2022, 7:35 PM
43 points
1 comment · 26 min read · LW link

Ivermectin: Much Less Than You Needed To Know

George3d6 · Oct 3, 2022, 3:02 PM
31 points
10 comments · 1 min read · LW link
(doyourownresearch.substack.com)

If you want to learn technical AI safety, here’s a list of AI safety courses, reading lists, and resources

KatWoods · Oct 3, 2022, 12:43 PM
12 points
3 comments · 1 min read · LW link

October Budapest Less Wrong/ACX meetup

Richard Horvath · Oct 3, 2022, 10:53 AM
2 points
0 comments · 1 min read · LW link

A review of the Bio-Anchors report

jylin04 · Oct 3, 2022, 10:27 AM
45 points
4 comments · 1 min read · LW link
(docs.google.com)