
Public Discourse

Last edit: 15 Jul 2020 4:27 UTC by jacobjacob

Public discourse refers to our ability to hold conversations in large groups, both as a society and in smaller communities, as well as to conversations between a few well-defined participants (such as presidential debates) that take place publicly.

This tag is for understanding the nature of public discourse (How good is it? What makes it succeed or fail?) and for ways of improving it using technology or novel institutions.

See also: Conversation (topic)

[Question] Have epistemic conditions always been this bad?

Wei Dai · 25 Jan 2020 4:42 UTC
210 points
106 comments · 4 min read · LW link · 1 review

Why it’s so hard to talk about Consciousness

Rafael Harth · 2 Jul 2023 15:56 UTC
110 points
158 comments · 9 min read · LW link

Raising the Sanity Waterline

Eliezer Yudkowsky · 12 Mar 2009 4:28 UTC
239 points
232 comments · 3 min read · LW link

Assume Bad Faith

Zack_M_Davis · 25 Aug 2023 17:36 UTC
112 points
51 comments · 7 min read · LW link

You Get About Five Words

Raemon · 12 Mar 2019 20:30 UTC
221 points
80 comments · 1 min read · LW link · 6 reviews

Public-facing Censorship Is Safety Theater, Causing Reputational Damage

Yitz · 23 Sep 2022 5:08 UTC
149 points
42 comments · 6 min read · LW link

Arbital postmortem

alexei · 30 Jan 2018 13:48 UTC
228 points
110 comments · 19 min read · LW link

The Dark Arts

19 Dec 2023 4:41 UTC
135 points
49 comments · 9 min read · LW link

Well-Kept Gardens Die By Pacifism

Eliezer Yudkowsky · 21 Apr 2009 2:44 UTC
242 points
324 comments · 5 min read · LW link

Request to AGI organizations: Share your views on pausing AI progress

11 Apr 2023 17:30 UTC
141 points
11 comments · 1 min read · LW link

Two easy things that maybe Just Work to improve AI discourse

jacobjacob · 8 Jun 2024 15:51 UTC
189 points
35 comments · 2 min read · LW link

Talking publicly about AI risk

Jan_Kulveit · 21 Apr 2023 11:28 UTC
180 points
9 comments · 6 min read · LW link

Why Improving Dialogue Feels So Hard

matto · 20 Jan 2024 21:26 UTC
21 points
8 comments · 3 min read · LW link

The Forces of Blandness and the Disagreeable Majority

sarahconstantin · 28 Apr 2019 19:44 UTC
132 points
27 comments · 3 min read · LW link · 2 reviews
(srconstantin.wordpress.com)

Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?

Karl von Wendt · 25 Jun 2023 16:59 UTC
106 points
53 comments · 7 min read · LW link

There’s No Fire Alarm for Artificial General Intelligence

Eliezer Yudkowsky · 13 Oct 2017 21:38 UTC
148 points
72 comments · 25 min read · LW link

On the importance of Less Wrong, or another single conversational locus

AnnaSalamon · 27 Nov 2016 17:13 UTC
176 points
365 comments · 4 min read · LW link

A Return to Discussion

sarahconstantin · 27 Nov 2016 13:59 UTC
58 points
32 comments · 6 min read · LW link

Let’s build a fire alarm for AGI

chaosmage · 15 May 2023 9:16 UTC
−1 points
0 comments · 2 min read · LW link

Contra Yudkowsky on Epistemic Conduct for Author Criticism

Zack_M_Davis · 13 Sep 2023 15:33 UTC
69 points
38 comments · 7 min read · LW link

Creating better infrastructure for controversial discourse

Rudi C · 16 Jun 2020 15:17 UTC
66 points
11 comments · 2 min read · LW link

Book Review: Consciousness Explained (as the Great Catalyst)

Rafael Harth · 17 Sep 2023 15:30 UTC
18 points
12 comments · 22 min read · LW link

Actually, “personal attacks after object-level arguments” is a pretty good rule of epistemic conduct

Max H · 17 Sep 2023 20:25 UTC
37 points
15 comments · 7 min read · LW link

A reply to Agnes Callard

Vaniver · 28 Jun 2020 3:25 UTC
91 points
36 comments · 3 min read · LW link

In the presence of disinformation, collective epistemology requires local modeling

jessicata · 15 Dec 2017 9:54 UTC
77 points
39 comments · 5 min read · LW link

Local Validity as a Key to Sanity and Civilization

Eliezer Yudkowsky · 7 Apr 2018 4:25 UTC
212 points
68 comments · 13 min read · LW link · 5 reviews

[Question] Has there been a “memetic collapse”?

Eli Tyre · 28 Dec 2019 5:36 UTC
32 points
7 comments · 1 min read · LW link

Expecting Short Inferential Distances

Eliezer Yudkowsky · 22 Oct 2007 23:42 UTC
368 points
106 comments · 3 min read · LW link

Is Clickbait Destroying Our General Intelligence?

Eliezer Yudkowsky · 16 Nov 2018 23:06 UTC
191 points
65 comments · 5 min read · LW link · 2 reviews

Trust Me I’m Lying: A Summary and Review

quanticle · 13 Aug 2018 2:55 UTC
100 points
11 comments · 7 min read · LW link
(quanticle.net)

New York Times, Please Do Not Threaten The Safety of Scott Alexander By Revealing His True Name

Zvi · 23 Jun 2020 12:20 UTC
153 points
2 comments · 2 min read · LW link
(thezvi.wordpress.com)

Robin Hanson AI X-Risk Debate — Highlights and Analysis

Liron · 12 Jul 2024 21:31 UTC
46 points
7 comments · 45 min read · LW link
(www.youtube.com)

Altruism and Vitalism Aren’t Fellow Travelers

Arjun Panickssery · 9 Aug 2024 2:01 UTC
24 points
2 comments · 3 min read · LW link
(arjunpanickssery.substack.com)

Even more curated conversations with brilliant rationalists

spencerg · 21 Mar 2022 23:49 UTC
59 points
0 comments · 15 min read · LW link

Humanity isn’t remotely longtermist, so arguments for AGI x-risk should focus on the near term

Seth Herd · 12 Aug 2024 18:10 UTC
46 points
10 comments · 1 min read · LW link

Partial summary of debate with Benquo and Jessicata [pt 1]

Raemon · 14 Aug 2019 20:02 UTC
89 points
63 comments · 22 min read · LW link · 3 reviews

Proposal: Twitter dislike button

KatjaGrace · 17 May 2022 19:40 UTC
13 points
7 comments · 1 min read · LW link
(worldspiritsockpuppet.com)

Pitching an Alignment Softball

mu_(negative) · 7 Jun 2022 4:10 UTC
47 points
13 comments · 10 min read · LW link

[Question] What’s a better term now that “AGI” is too vague?

Seth Herd · 28 May 2024 18:02 UTC
15 points
9 comments · 2 min read · LW link

Power Law Policy

Ben Turtel · 23 May 2024 5:28 UTC
4 points
7 comments · 6 min read · LW link
(bturtel.substack.com)

Spreading messages to help with the most important century

HoldenKarnofsky · 25 Jan 2023 18:20 UTC
75 points
4 comments · 18 min read · LW link
(www.cold-takes.com)

“Now here’s why I’m punching you...”

philh · 16 Oct 2018 21:30 UTC
28 points
24 comments · 4 min read · LW link
(reasonableapproximation.net)

“Rationalist Discourse” Is Like “Physicist Motors”

Zack_M_Davis · 26 Feb 2023 5:58 UTC
136 points
152 comments · 9 min read · LW link

Current Attitudes Toward AI Provide Little Data Relevant to Attitudes Toward AGI

Seth Herd · 12 Nov 2024 18:23 UTC
15 points
2 comments · 4 min read · LW link

“Publish or Perish” (a quick note on why you should try to make your work legible to existing academic communities)

David Scott Krueger (formerly: capybaralet) · 18 Mar 2023 19:01 UTC
99 points
48 comments · 1 min read · LW link

The Overton Window widens: Examples of AI risk in the media

Akash · 23 Mar 2023 17:10 UTC
107 points
24 comments · 6 min read · LW link

On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche

Zack_M_Davis · 9 Jan 2024 23:12 UTC
44 points
31 comments · 4 min read · LW link

Guidelines for productive discussions

ambigram · 8 Apr 2023 6:00 UTC
37 points
0 comments · 5 min read · LW link

A decade of lurking, a month of posting

Max H · 9 Apr 2023 0:21 UTC
70 points
4 comments · 5 min read · LW link

Stop talking about p(doom)

Isaac King · 1 Jan 2024 10:57 UTC
38 points
22 comments · 3 min read · LW link

[Question] Snapshot of narratives and frames against regulating AI

Jan_Kulveit · 1 Nov 2023 16:30 UTC
36 points
19 comments · 3 min read · LW link

Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk

1a3orn · 2 Nov 2023 18:20 UTC
193 points
79 comments · 23 min read · LW link

Consciousness as a conflationary alliance term for intrinsically valued internal experiences

Andrew_Critch · 10 Jul 2023 8:09 UTC
193 points
47 comments · 11 min read · LW link

Status 451 on Diagnosis: Russell Aphasia

Zack_M_Davis · 6 Aug 2019 4:43 UTC
48 points
1 comment · 1 min read · LW link
(status451.com)

Robin Hanson & Liron Shapira Debate AI X-Risk

Liron · 8 Jul 2024 21:45 UTC
34 points
4 comments · 1 min read · LW link
(www.youtube.com)

Sapience, understanding, and “AGI”

Seth Herd · 24 Nov 2023 15:13 UTC
15 points
3 comments · 6 min read · LW link

Paper Summary: The Effects of Communicating Uncertainty on Public Trust in Facts and Numbers

Jeffrey Heninger · 9 Jul 2024 16:50 UTC
42 points
2 comments · 2 min read · LW link
(blog.aiimpacts.org)

Proposal for improving the global online discourse through personalised comment ordering on all websites

Roman Leventov · 6 Dec 2023 18:51 UTC
35 points
21 comments · 6 min read · LW link

Combining Prediction Technologies to Help Moderate Discussions

Wei Dai · 8 Dec 2016 0:19 UTC
21 points
15 comments · 1 min read · LW link

Crowdsourcing moderation without sacrificing quality

paulfchristiano · 2 Dec 2016 21:47 UTC
18 points
26 comments · 1 min read · LW link
(sideways-view.com)

Avoiding Selection Bias

the gears to ascension · 4 Oct 2017 19:10 UTC
20 points
17 comments · 1 min read · LW link

Updating My LW Commenting Policy

curi · 18 Aug 2020 16:48 UTC
7 points
1 comment · 4 min read · LW link

Comment, Don’t Message

jefftk · 18 Nov 2019 16:00 UTC
30 points
5 comments · 2 min read · LW link
(www.jefftk.com)

Why I’m Staying On Bloggingheads.tv

Eliezer Yudkowsky · 7 Sep 2009 20:15 UTC
31 points
101 comments · 2 min read · LW link

Find yourself a Worthy Opponent: a Chavruta

Raw_Power · 6 Jul 2011 10:59 UTC
48 points
74 comments · 3 min read · LW link

Yes, avoiding extinction from AI *is* an urgent priority: a response to Seth Lazar, Jeremy Howard, and Arvind Narayanan.

Soroush Pour · 1 Jun 2023 13:38 UTC
17 points
0 comments · 5 min read · LW link
(www.soroushjp.com)

[Question] What’s the best approach to curating a newsfeed to maximize useful contrasting POV?

bgold · 26 Apr 2019 17:29 UTC
25 points
3 comments · 1 min read · LW link

What the Haters Hate

Jacob Falkovich · 1 Oct 2018 20:29 UTC
29 points
36 comments · 8 min read · LW link

One Website To Rule Them All?

anna_macdonald · 11 Jan 2019 19:14 UTC
30 points
23 comments · 10 min read · LW link

...And Say No More Of It

Eliezer Yudkowsky · 9 Feb 2009 0:15 UTC
43 points
25 comments · 5 min read · LW link

Collective Apathy and the Internet

Eliezer Yudkowsky · 14 Apr 2009 0:02 UTC
52 points
34 comments · 2 min read · LW link

Clickbait might not be destroying our general Intelligence

Donald Hobson · 19 Nov 2018 0:13 UTC
25 points
13 comments · 2 min read · LW link

[LINK] Why I’m not on the Rationalist Masterlist

Apprentice · 6 Jan 2014 0:16 UTC
40 points
882 comments · 1 min read · LW link

Why Academic Papers Are A Terrible Discussion Forum

alyssavance · 20 Jun 2012 18:15 UTC
44 points
53 comments · 6 min read · LW link

Unpopular ideas attract poor advocates: Be charitable

[deleted] · 15 Sep 2014 19:30 UTC
43 points
61 comments · 2 min read · LW link

The Paucity of Elites Online

JonahS · 31 May 2013 1:35 UTC
40 points
42 comments · 3 min read · LW link

Change Contexts to Improve Arguments

palladias · 8 Jul 2014 15:51 UTC
42 points
19 comments · 2 min read · LW link

Has “politics is the mind-killer” been a mind-killer?

SonnieBailey · 17 Mar 2019 3:05 UTC
31 points
26 comments · 3 min read · LW link

Less Wrong Should Confront Wrongness Wherever it Appears

jimrandomh · 21 Sep 2010 1:40 UTC
32 points
163 comments · 3 min read · LW link

Doing discourse better: Stuff I wish I knew

dynomight · 29 Sep 2020 14:34 UTC
27 points
11 comments · 1 min read · LW link
(dyno-might.github.io)

A response to the Richards et al.’s “The Illusion of AI’s Existential Risk”

Harrison Fell · 26 Jul 2023 17:34 UTC
1 point
0 comments · 10 min read · LW link

Memetic Judo #1: On Doomsday Prophets v.3

Max TK · 18 Aug 2023 0:14 UTC
25 points
17 comments · 3 min read · LW link

Memetic Judo #2: Incorporal Switches and Levers Compendium

Max TK · 14 Aug 2023 16:53 UTC
19 points
6 comments · 17 min read · LW link

In Defense of Tone Arguments

OrphanWilde · 19 Jul 2012 19:48 UTC
32 points
175 comments · 2 min read · LW link

Memetic Judo #3: The Intelligence of Stochastic Parrots v.2

Max TK · 20 Aug 2023 15:18 UTC
8 points
33 comments · 6 min read · LW link

[Question] Which headlines and narratives are mostly clickbait?

Pontor · 25 Oct 2020 1:19 UTC
5 points
5 comments · 2 min read · LW link

Politics is work and work needs breaks

KatjaGrace · 4 Nov 2019 17:10 UTC
19 points
0 comments · 2 min read · LW link
(meteuphoric.com)

When discussing AI doom barriers propose specific plausible scenarios

anithite · 18 Aug 2023 4:06 UTC
5 points
0 comments · 3 min read · LW link

Category Qualifications (w/ exercises)

Logan Riggs · 15 Sep 2019 16:28 UTC
23 points
22 comments · 5 min read · LW link

On Debates with Trolls

prase · 12 Apr 2011 8:46 UTC
31 points
247 comments · 3 min read · LW link

Careless talk on US-China AI competition? (and criticism of CAIS coverage)

Oliver Sourbut · 20 Sep 2023 12:46 UTC
3 points
0 comments · 10 min read · LW link
(www.oliversourbut.net)

A social norm against unjustified opinions?

Kaj_Sotala · 29 May 2009 11:25 UTC
16 points
161 comments · 1 min read · LW link

Reasons for someone to “ignore” you

Wei Dai · 8 Oct 2012 19:50 UTC
37 points
57 comments · 3 min read · LW link

“Model UN Solutions”

Arjun Panickssery · 8 Dec 2023 23:06 UTC
36 points
5 comments · 1 min read · LW link
(open.substack.com)

Idea selection

krbouchard · 1 Mar 2021 14:07 UTC
1 point
0 comments · 2 min read · LW link

Rediscovery, the Mind’s Curare

Erich_Grunewald · 10 Apr 2021 7:42 UTC
3 points
1 comment · 3 min read · LW link
(www.erichgrunewald.com)

Arguing from a Gap of Perspective

ideenrun · 1 May 2021 22:42 UTC
6 points
1 comment · 19 min read · LW link

Curated conversations with brilliant rationalists

spencerg · 28 May 2021 14:23 UTC
153 points
18 comments · 6 min read · LW link

For Better Commenting, Take an Oath of Reply.

DirectedEvolution · 31 May 2021 6:01 UTC
42 points
17 comments · 2 min read · LW link

Request for comment on a novel reference work of understanding

ender · 12 Aug 2021 0:06 UTC
3 points
0 comments · 9 min read · LW link

[Question] Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe

Yitz · 10 Apr 2022 21:02 UTC
92 points
141 comments · 2 min read · LW link

[Question] Convince me that humanity *isn’t* doomed by AGI

Yitz · 15 Apr 2022 17:26 UTC
61 points
50 comments · 1 min read · LW link

Schism Begets Schism

Davis_Kingsley · 10 Jul 2019 3:09 UTC
24 points
25 comments · 3 min read · LW link

90% of anything should be bad (& the precision-recall tradeoff)

cartografie · 8 Sep 2022 1:20 UTC
33 points
22 comments · 6 min read · LW link

Responding to ‘Beyond Hyperanthropomorphism’

ukc10014 · 14 Sep 2022 20:37 UTC
8 points
0 comments · 16 min read · LW link

How Strong is Our Connection to Truth?

anorangicc · 17 Feb 2021 0:10 UTC
1 point
5 comments · 3 min read · LW link

Defense Against The Dark Arts: An Introduction

Lyrongolem · 25 Dec 2023 6:36 UTC
24 points
36 comments · 20 min read · LW link

[Question] Terminology: <something>-ware for ML?

Oliver Sourbut · 3 Jan 2024 11:42 UTC
17 points
27 comments · 1 min read · LW link

[Question] What is the point of 2v2 debates?

Axel Ahlqvist · 20 Aug 2024 21:59 UTC
2 points
1 comment · 1 min read · LW link

Ten Modes of Culture War Discourse

jchan · 31 Jan 2024 13:58 UTC
54 points
15 comments · 15 min read · LW link

Analogy Bank for AI Safety

utilistrutil · 29 Jan 2024 2:35 UTC
23 points
0 comments · 8 min read · LW link

[Question] What’s Your Best AI Safety “Quip”?

False Name · 26 Mar 2024 15:35 UTC
−2 points
0 comments · 1 min read · LW link

The Natural Selection of Bad Vibes (Part 1)

Kevin Dorst · 12 May 2024 8:28 UTC
13 points
3 comments · 7 min read · LW link
(kevindorst.substack.com)

AI x Human Flourishing: Introducing the Cosmos Institute

Brendan McCord · 5 Sep 2024 18:23 UTC
14 points
5 comments · 6 min read · LW link
(cosmosinstitute.substack.com)

Checking public figures on whether they “answered the question”: quick analysis from Harris/Trump debate, and a proposal

david reinstein · 11 Sep 2024 20:25 UTC
7 points
4 comments · 1 min read · LW link
(open.substack.com)

Critique of ‘Many People Fear A.I. They Shouldn’t’ by David Brooks.

Axel Ahlqvist · 15 Aug 2024 18:38 UTC
12 points
8 comments · 3 min read · LW link

When to join a respectability cascade

B Jacobs · 24 Sep 2024 7:54 UTC
10 points
1 comment · 2 min read · LW link
(bobjacobs.substack.com)

Capabilities Denial: The Danger of Underestimating AI

Christopher King · 21 Mar 2023 1:24 UTC
6 points
5 comments · 3 min read · LW link

Missing forecasting tools: from catalogs to a new kind of prediction market

MichaelLatowicki · 29 Mar 2023 9:55 UTC
14 points
1 comment · 5 min read · LW link

AI scares and changing public beliefs

Seth Herd · 6 Apr 2023 18:51 UTC
45 points
21 comments · 6 min read · LW link

[SEE NEW EDITS] No, *You* Need to Write Clearer

Nicholas / Heather Kross · 29 Apr 2023 5:04 UTC
261 points
65 comments · 5 min read · LW link
(www.thinkingmuchbetter.com)

Cis fragility

[deactivated] · 30 Nov 2023 4:14 UTC
−51 points
9 comments · 3 min read · LW link

Defecting by Accident—A Flaw Common to Analytical People

lionhearted (Sebastian Marshall) · 1 Dec 2010 8:25 UTC
125 points
433 comments · 15 min read · LW link

[Question] Why is so much discussion happening in private Google Docs?

Wei Dai · 12 Jan 2019 2:19 UTC
100 points
22 comments · 1 min read · LW link

Disincentives for participating on LW/AF

Wei Dai · 10 May 2019 19:46 UTC
86 points
45 comments · 2 min read · LW link

Models of moderation

habryka · 2 Feb 2018 23:29 UTC
30 points
33 comments · 7 min read · LW link

Modesty and diversity: a concrete suggestion

[deleted] · 8 Nov 2017 20:42 UTC
30 points
6 comments · 1 min read · LW link

LW Update 2018-12-06 – Table of Contents and Q&A

Raemon · 8 Dec 2018 0:47 UTC
55 points
28 comments · 4 min read · LW link

Isolating Content can Create Affordances

Davis_Kingsley · 23 Aug 2018 8:28 UTC
49 points
12 comments · 1 min read · LW link

Moderator’s Dilemma: The Risks of Partial Intervention

Chris_Leong · 29 Sep 2017 1:47 UTC
33 points
17 comments · 4 min read · LW link

You’re Calling *Who* A Cult Leader?

Eliezer Yudkowsky · 22 Mar 2009 6:57 UTC
67 points
121 comments · 5 min read · LW link

“Politics is the mind-killer” is the mind-killer

thomblake · 26 Jan 2012 15:55 UTC
58 points
99 comments · 1 min read · LW link

Politics is hard mode

Rob Bensinger · 21 Jul 2014 22:14 UTC
58 points
109 comments · 6 min read · LW link

When None Dare Urge Restraint, pt. 2

Jay_Schweikert · 30 May 2012 15:28 UTC
84 points
92 comments · 3 min read · LW link

Breaking the vicious cycle

XiXiDu · 23 Nov 2014 18:25 UTC
66 points
131 comments · 2 min read · LW link

[Question] Why Don’t Creators Switch to their Own Platforms?

Jacob Falkovich · 23 Dec 2018 4:46 UTC
42 points
17 comments · 1 min read · LW link

Taking “correlation does not imply causation” back from the internet

sixes_and_sevens · 3 Oct 2012 12:18 UTC
62 points
70 comments · 1 min read · LW link

Political topics attract participants inclined to use the norms of mainstream political debate, risking a tipping point to lower quality discussion

emr · 26 Mar 2015 0:14 UTC
67 points
71 comments · 1 min read · LW link

Don’t Be Afraid of Asking Personally Important Questions of Less Wrong

Evan_Gaensbauer · 17 Mar 2015 6:54 UTC
80 points
47 comments · 3 min read · LW link

Only You Can Prevent Your Mind From Getting Killed By Politics

ChrisHallquist · 26 Oct 2013 13:59 UTC
61 points
144 comments · 5 min read · LW link

Littlewood’s Law and the Global Media

gwern · 12 Jan 2019 17:46 UTC
37 points
3 comments · 1 min read · LW link
(www.gwern.net)

Defense against discourse

Benquo · 17 Oct 2017 9:10 UTC
38 points
15 comments · 6 min read · LW link
(benjaminrosshoffman.com)

Offense versus harm minimization

Scott Alexander · 16 Apr 2011 1:06 UTC
87 points
429 comments · 9 min read · LW link

False Friends and Tone Policing

palladias · 18 Jun 2014 18:20 UTC
71 points
49 comments · 3 min read · LW link

On memetic weapons

ioannes · 1 Sep 2018 3:25 UTC
42 points
28 comments · 5 min read · LW link

Free Speech as Legal Right vs. Ethical Value

ozymandias · 28 Nov 2017 16:49 UTC
14 points
8 comments · 2 min read · LW link

PCAST Working Group on Generative AI Invites Public Input

Christopher King · 13 May 2023 22:49 UTC
7 points
0 comments · 1 min read · LW link
(terrytao.wordpress.com)

[Question] What new technology, for what institutions?

bhauth · 14 May 2023 17:33 UTC
29 points
6 comments · 3 min read · LW link

Speaking up publicly is heroic

jefftk · 2 Nov 2019 12:00 UTC
43 points
2 comments · 1 min read · LW link
(www.jefftk.com)

Niceness Stealth-Bombing

things_which_are_not_on_fire · 8 Jan 2018 22:16 UTC
24 points
4 comments · 3 min read · LW link

The Case for a Bigger Audience

John_Maxwell · 9 Feb 2019 7:22 UTC
68 points
58 comments · 2 min read · LW link

Dialogue on Appeals to Consequences

jessicata · 18 Jul 2019 2:34 UTC
33 points
87 comments · 7 min read · LW link
(unstableontology.com)

Appeal to Consequence, Value Tensions, And Robust Organizations

Matt Goldenberg · 19 Jul 2019 22:09 UTC
45 points
90 comments · 5 min read · LW link

[Question] What projects and efforts are there to promote AI safety research?

Christopher King · 24 May 2023 0:33 UTC
4 points
0 comments · 1 min read · LW link

Easy wins aren’t news

PhilGoetz · 19 Feb 2015 19:38 UTC
60 points
19 comments · 1 min read · LW link

Wikipedia pageviews: still in decline

VipulNaik · 26 Sep 2017 23:03 UTC
24 points
19 comments · 3 min read · LW link

Don’t Apply the Principle of Charity to Yourself

UnclGhost · 19 Nov 2011 19:26 UTC
82 points
23 comments · 2 min read · LW link

Drive-By Low-Effort Criticism

lionhearted (Sebastian Marshall) · 31 Jul 2019 11:51 UTC
32 points
61 comments · 2 min read · LW link

Making Fun of Things is Easy

katydee · 27 Sep 2013 3:10 UTC
47 points
76 comments · 1 min read · LW link

Rationally Ending Discussions

curi · 12 Aug 2020 20:34 UTC
−7 points
27 comments · 14 min read · LW link

Strengthening the foundations under the Overton Window without moving it

KatjaGrace · 14 Mar 2018 2:20 UTC
12 points
7 comments · 3 min read · LW link
(meteuphoric.wordpress.com)