[Question] How much background technical knowledge do LW readers have?

johnswentworth · 11 Jul 2019 17:38 UTC
30 points
22 comments · 1 min read · LW link

New SSC meetup group in Lisbon

tamkin&popkin · 11 Jul 2019 12:19 UTC
1 point
0 comments · 1 min read · LW link

[Question] Are we certain that gpt-2 and similar algorithms are not self-aware?

Ozyrus · 11 Jul 2019 8:37 UTC
0 points
12 comments · 1 min read · LW link

[Question] Modeling AI milestones to adjust AGI arrival estimates?

Ozyrus · 11 Jul 2019 8:17 UTC
10 points
3 comments · 1 min read · LW link

Please give your links speaking names!

rmoehn · 11 Jul 2019 7:47 UTC
44 points
22 comments · 1 min read · LW link

AI Alignment “Scaffolding” Project Ideas (Request for Advice)

DirectedEvolution · 11 Jul 2019 4:39 UTC
9 points
1 comment · 1 min read · LW link

The AI Timelines Scam

jessicata · 11 Jul 2019 2:52 UTC
108 points
105 comments · 7 min read · LW link · 3 reviews
(unstableontology.com)

Magic is Dead, Give me Attention

Hazard · 10 Jul 2019 20:15 UTC
40 points
13 comments · 5 min read · LW link

[Question] How can guesstimates work?

jacobjacob · 10 Jul 2019 19:33 UTC
24 points
9 comments · 1 min read · LW link

Types of Boltzmann Brains

avturchin · 10 Jul 2019 8:22 UTC
8 points
0 comments · 1 min read · LW link
(philpapers.org)

Schism Begets Schism

Davis_Kingsley · 10 Jul 2019 3:09 UTC
24 points
25 comments · 3 min read · LW link

[Question] Do bond yield curve inversions really indicate there is likely to be a recession?

bgold · 10 Jul 2019 1:23 UTC
20 points
8 comments · 1 min read · LW link

[Question] Would you join the Society of the Free & Easy?

David Gross · 10 Jul 2019 1:15 UTC
18 points
1 comment · 3 min read · LW link

Diversify Your Friendship Portfolio

Davis_Kingsley · 9 Jul 2019 23:06 UTC
73 points
13 comments · 2 min read · LW link

The I Ching Series (2/10): How should I prioritize my career-building projects?

DirectedEvolution · 9 Jul 2019 22:55 UTC
14 points
6 comments · 3 min read · LW link

[Question] Are there easy, low cost, ways to freeze personal cell samples for future therapies? And is this a good idea?

Eli Tyre · 9 Jul 2019 21:57 UTC
20 points
4 comments · 1 min read · LW link

Outline of NIST draft plan for AI standards

ryan_b · 9 Jul 2019 17:30 UTC
21 points
1 comment · 7 min read · LW link

[Question] How can I help research Friendly AI?

avichapman · 9 Jul 2019 0:15 UTC
22 points
3 comments · 1 min read · LW link

The Results of My First LessWrong-inspired I Ching Divination

DirectedEvolution · 8 Jul 2019 21:26 UTC
21 points
3 comments · 6 min read · LW link

“Rationalizing” and “Sitting Bolt Upright in Alarm.”

Raemon · 8 Jul 2019 20:34 UTC
45 points
56 comments · 4 min read · LW link

Some Comments on Stuart Armstrong’s “Research Agenda v0.9”

Charlie Steiner · 8 Jul 2019 19:03 UTC
21 points
12 comments · 4 min read · LW link

[AN #59] How arguments for AI risk have changed over time

Rohin Shah · 8 Jul 2019 17:20 UTC
43 points
4 comments · 7 min read · LW link
(mailchi.mp)

NIST: draft plan for AI standards development

ryan_b · 8 Jul 2019 14:13 UTC
16 points
1 comment · 1 min read · LW link
(www.nist.gov)

Indifference: multiple changes, multiple agents

Stuart_Armstrong · 8 Jul 2019 13:36 UTC
15 points
5 comments · 8 min read · LW link

[Question] Can I automatically cross-post to LW via RSS?

lifelonglearner · 8 Jul 2019 5:04 UTC
9 points
5 comments · 1 min read · LW link

[Question] Is the sum individual informativeness of two independent variables no more than their joint informativeness?

Ronny Fernandez · 8 Jul 2019 2:51 UTC
10 points
3 comments · 1 min read · LW link

[Question] How does the organization “EthAGI” fit into the broader AI safety landscape?

Liam Donovan · 8 Jul 2019 0:46 UTC
4 points
2 comments · 1 min read · LW link

Religion as Goodhart

Shmi · 8 Jul 2019 0:38 UTC
21 points
6 comments · 2 min read · LW link

First application round of the EAF Fund

JesseClifton · 8 Jul 2019 0:20 UTC
20 points
0 comments · 3 min read · LW link
(forum.effectivealtruism.org)

[Question] LW authors: How many clusters of norms do you (personally) want?

Raemon · 7 Jul 2019 20:27 UTC
38 points
40 comments · 2 min read · LW link

How to make a giant whiteboard for $14 (plus nails)

eukaryote · 7 Jul 2019 19:23 UTC
29 points
1 comment · 1 min read · LW link
(eukaryotewritesblog.com)

Musings on Cumulative Cultural Evolution and AI

calebo · 7 Jul 2019 16:46 UTC
19 points
5 comments · 7 min read · LW link

Black hole narratives

Alexei · 7 Jul 2019 4:07 UTC
25 points
11 comments · 4 min read · LW link

Thank You, Old Man Jevons, For My Job

Closed Limelike Curves · 6 Jul 2019 18:37 UTC
12 points
4 comments · 2 min read · LW link
(thelimelike.wordpress.com)

87,000 Hours or: Thoughts on Home Ownership

romeostevensit · 6 Jul 2019 8:01 UTC
20 points
50 comments · 8 min read · LW link

Learning biases and rewards simultaneously

Rohin Shah · 6 Jul 2019 1:45 UTC
41 points
3 comments · 4 min read · LW link

Thoughts on The Replacing Guilt Series — pt 1

dragohole · 5 Jul 2019 19:35 UTC
9 points
9 comments · 7 min read · LW link

Here Be Epistemic Dragons

DirectedEvolution · 4 Jul 2019 22:31 UTC
8 points
0 comments · 4 min read · LW link

What product are you building?

Raemon · 4 Jul 2019 19:08 UTC
56 points
6 comments · 2 min read · LW link

How to handle large numbers of questions?

Raemon · 4 Jul 2019 18:22 UTC
12 points
6 comments · 1 min read · LW link

Let’s Read: an essay on AI Theology

Yuxi_Liu · 4 Jul 2019 7:50 UTC
23 points
9 comments · 7 min read · LW link

[Question] What are good resources for learning functional programming?

Matt Goldenberg · 4 Jul 2019 1:22 UTC
22 points
10 comments · 1 min read · LW link

[Question] What’s state-of-the-art in AI understanding of theory of mind?

ioannes · 3 Jul 2019 23:11 UTC
14 points
13 comments · 1 min read · LW link

Blatant lies are the best kind!

Benquo · 3 Jul 2019 20:45 UTC
28 points
17 comments · 5 min read · LW link
(benjaminrosshoffman.com)

[Question] What was the official story for many top physicists congregating in Los Alamos during the Manhattan Project?

moses · 3 Jul 2019 18:05 UTC
13 points
6 comments · 1 min read · LW link

Open Thread July 2019

ryan_b · 3 Jul 2019 15:07 UTC
15 points
91 comments · 1 min read · LW link

[Question] What would be the signs of AI manhattan projects starting? Should a website be made watching for these signs?

Ozyrus · 3 Jul 2019 12:22 UTC
12 points
10 comments · 1 min read · LW link

Self-consciousness wants to make everything about itself

jessicata · 3 Jul 2019 1:44 UTC
44 points
70 comments · 6 min read · LW link
(unstableontology.com)

Opting into Experimental LW Features

Raemon · 3 Jul 2019 0:51 UTC
20 points
26 comments · 1 min read · LW link

Episode 7 of ‘Tsuyoku Narita’: CoZE

Bae's Theorem · 2 Jul 2019 22:38 UTC
6 points
0 comments · 1 min read · LW link