
Public Discourse

Last edit: Jul 15, 2020, 4:27 AM by Bird Concept

Public discourse refers to our ability to hold conversations in large groups, both across society as a whole and within smaller communities, as well as publicly conducted conversations among a few well-defined participants (such as presidential debates).

This tag is for understanding the nature of public discourse (How good is it? What makes it succeed or fail?), and ways of improving it using technology or novel institutions.

See also: Conversation (topic)

[Question] Have epistemic conditions always been this bad?
Wei Dai · Jan 25, 2020, 4:42 AM · 210 points · 106 comments · 4 min read · LW link · 1 review

Why it’s so hard to talk about Consciousness
Rafael Harth · Jul 2, 2023, 3:56 PM · 162 points · 212 comments · 9 min read · LW link · 3 reviews

Raising the Sanity Waterline
Eliezer Yudkowsky · Mar 12, 2009, 4:28 AM · 241 points · 233 comments · 3 min read · LW link

Assume Bad Faith
Zack_M_Davis · Aug 25, 2023, 5:36 PM · 150 points · 63 comments · 7 min read · LW link · 3 reviews

You Get About Five Words
Raemon · Mar 12, 2019, 8:30 PM · 243 points · 81 comments · 1 min read · LW link · 6 reviews

Public-facing Censorship Is Safety Theater, Causing Reputational Damage
Yitz · Sep 23, 2022, 5:08 AM · 149 points · 42 comments · 6 min read · LW link

The Dark Arts
Dec 19, 2023, 4:41 AM · 134 points · 49 comments · 9 min read · LW link

Local Validity as a Key to Sanity and Civilization
Eliezer Yudkowsky · Apr 7, 2018, 4:25 AM · 222 points · 68 comments · 13 min read · LW link · 5 reviews

Arbital postmortem
alexei · Jan 30, 2018, 1:48 PM · 230 points · 110 comments · 19 min read · LW link

Well-Kept Gardens Die By Pacifism
Eliezer Yudkowsky · Apr 21, 2009, 2:44 AM · 250 points · 324 comments · 5 min read · LW link

Request to AGI organizations: Share your views on pausing AI progress
Apr 11, 2023, 5:30 PM · 141 points · 11 comments · 1 min read · LW link

The Forces of Blandness and the Disagreeable Majority
sarahconstantin · Apr 28, 2019, 7:44 PM · 132 points · 27 comments · 3 min read · LW link · 2 reviews · (srconstantin.wordpress.com)

Why Improving Dialogue Feels So Hard
matto · Jan 20, 2024, 9:26 PM · 21 points · 8 comments · 3 min read · LW link

Talking publicly about AI risk
Jan_Kulveit · Apr 21, 2023, 11:28 AM · 180 points · 9 comments · 6 min read · LW link

Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?
Karl von Wendt · Jun 25, 2023, 4:59 PM · 106 points · 53 comments · 7 min read · LW link

There’s No Fire Alarm for Artificial General Intelligence
Eliezer Yudkowsky · Oct 13, 2017, 9:38 PM · 150 points · 72 comments · 25 min read · LW link

Two easy things that maybe Just Work to improve AI discourse
Bird Concept · Jun 8, 2024, 3:51 PM · 190 points · 35 comments · 2 min read · LW link

Humanity isn’t remotely longtermist, so arguments for AGI x-risk should focus on the near term
Seth Herd · Aug 12, 2024, 6:10 PM · 46 points · 10 comments · 1 min read · LW link

Proposal for improving the global online discourse through personalised comment ordering on all websites
Roman Leventov · Dec 6, 2023, 6:51 PM · 35 points · 21 comments · 6 min read · LW link

Combining Prediction Technologies to Help Moderate Discussions
Wei Dai · Dec 8, 2016, 12:19 AM · 21 points · 15 comments · 1 min read · LW link

Crowdsourcing moderation without sacrificing quality
paulfchristiano · Dec 2, 2016, 9:47 PM · 18 points · 26 comments · 1 min read · LW link · (sideways-view.com)

Contra Yudkowsky on Epistemic Conduct for Author Criticism
Zack_M_Davis · Sep 13, 2023, 3:33 PM · 69 points · 38 comments · 7 min read · LW link

Let’s build a fire alarm for AGI
chaosmage · May 15, 2023, 9:16 AM · −1 points · 0 comments · 2 min read · LW link

Book Review: Consciousness Explained (as the Great Catalyst)
Rafael Harth · Sep 17, 2023, 3:30 PM · 23 points · 14 comments · 22 min read · LW link · 1 review

Actually, “personal attacks after object-level arguments” is a pretty good rule of epistemic conduct
Max H · Sep 17, 2023, 8:25 PM · 37 points · 15 comments · 7 min read · LW link

On the importance of Less Wrong, or another single conversational locus
AnnaSalamon · Nov 27, 2016, 5:13 PM · 176 points · 365 comments · 4 min read · LW link

A Return to Discussion
sarahconstantin · Nov 27, 2016, 1:59 PM · 58 points · 32 comments · 6 min read · LW link

Creating better infrastructure for controversial discourse
Rudi C · Jun 16, 2020, 3:17 PM · 66 points · 11 comments · 2 min read · LW link

A reply to Agnes Callard
Vaniver · Jun 28, 2020, 3:25 AM · 91 points · 36 comments · 3 min read · LW link

In the presence of disinformation, collective epistemology requires local modeling
jessicata · Dec 15, 2017, 9:54 AM · 77 points · 39 comments · 5 min read · LW link

[Question] Has there been a “memetic collapse”?
Eli Tyre · Dec 28, 2019, 5:36 AM · 32 points · 7 comments · 1 min read · LW link

Expecting Short Inferential Distances
Eliezer Yudkowsky · Oct 22, 2007, 11:42 PM · 382 points · 106 comments · 3 min read · LW link

Is Clickbait Destroying Our General Intelligence?
Eliezer Yudkowsky · Nov 16, 2018, 11:06 PM · 191 points · 65 comments · 5 min read · LW link · 2 reviews

Trust Me I’m Lying: A Summary and Review
quanticle · Aug 13, 2018, 2:55 AM · 100 points · 11 comments · 7 min read · LW link · (quanticle.net)

New York Times, Please Do Not Threaten The Safety of Scott Alexander By Revealing His True Name
Zvi · Jun 23, 2020, 12:20 PM · 153 points · 2 comments · 2 min read · LW link · (thezvi.wordpress.com)

Even more curated conversations with brilliant rationalists
spencerg · Mar 21, 2022, 11:49 PM · 59 points · 0 comments · 15 min read · LW link

[Question] What’s a better term now that “AGI” is too vague?
Seth Herd · May 28, 2024, 6:02 PM · 15 points · 9 comments · 2 min read · LW link

Power Law Policy
Ben Turtel · May 23, 2024, 5:28 AM · 4 points · 7 comments · 6 min read · LW link · (bturtel.substack.com)

Proposal: Twitter dislike button
KatjaGrace · May 17, 2022, 7:40 PM · 13 points · 7 comments · 1 min read · LW link · (worldspiritsockpuppet.com)

Pitching an Alignment Softball
mu_(negative) · Jun 7, 2022, 4:10 AM · 47 points · 13 comments · 10 min read · LW link

Four Types of Disagreement
silentbob · Apr 13, 2025, 11:22 AM · 50 points · 2 comments · 5 min read · LW link

Partial summary of debate with Benquo and Jessicata [pt 1]
Raemon · Aug 14, 2019, 8:02 PM · 89 points · 63 comments · 22 min read · LW link · 3 reviews

Technical Claims
Vladimir_Nesov · Apr 3, 2025, 12:30 AM · 18 points · 0 comments · 1 min read · LW link

Spreading messages to help with the most important century
HoldenKarnofsky · Jan 25, 2023, 6:20 PM · 75 points · 4 comments · 18 min read · LW link · (www.cold-takes.com)

Current Attitudes Toward AI Provide Little Data Relevant to Attitudes Toward AGI
Seth Herd · Nov 12, 2024, 6:23 PM · 17 points · 2 comments · 4 min read · LW link

“Rationalist Discourse” Is Like “Physicist Motors”
Zack_M_Davis · Feb 26, 2023, 5:58 AM · 136 points · 153 comments · 9 min read · LW link · 1 review

“Publish or Perish” (a quick note on why you should try to make your work legible to existing academic communities)
David Scott Krueger (formerly: capybaralet) · Mar 18, 2023, 7:01 PM · 112 points · 49 comments · 1 min read · LW link · 1 review

“Now here’s why I’m punching you...”
philh · Oct 16, 2018, 9:30 PM · 28 points · 24 comments · 4 min read · LW link · (reasonableapproximation.net)

Yudkowsky on The Trajectory podcast
Seth Herd · Jan 24, 2025, 7:52 PM · 70 points · 39 comments · 2 min read · LW link · (www.youtube.com)

On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche
Zack_M_Davis · Jan 9, 2024, 11:12 PM · 45 points · 31 comments · 4 min read · LW link

The Overton Window widens: Examples of AI risk in the media
Orpheus16 · Mar 23, 2023, 5:10 PM · 107 points · 24 comments · 6 min read · LW link

Stop talking about p(doom)
Isaac King · Jan 1, 2024, 10:57 AM · 42 points · 22 comments · 3 min read · LW link

Robin Hanson & Liron Shapira Debate AI X-Risk
Liron · Jul 8, 2024, 9:45 PM · 34 points · 4 comments · 1 min read · LW link · (www.youtube.com)

Guidelines for productive discussions
ambigram · Apr 8, 2023, 6:00 AM · 38 points · 0 comments · 5 min read · LW link

A decade of lurking, a month of posting
Max H · Apr 9, 2023, 12:21 AM · 70 points · 4 comments · 5 min read · LW link

Paper Summary: The Effects of Communicating Uncertainty on Public Trust in Facts and Numbers
Jeffrey Heninger · Jul 9, 2024, 4:50 PM · 42 points · 2 comments · 2 min read · LW link · (blog.aiimpacts.org)

Consciousness as a conflationary alliance term for intrinsically valued internal experiences
Andrew_Critch · Jul 10, 2023, 8:09 AM · 212 points · 54 comments · 11 min read · LW link · 2 reviews

Status 451 on Diagnosis: Russell Aphasia
Zack_M_Davis · Aug 6, 2019, 4:43 AM · 48 points · 1 comment · 1 min read · LW link · (status451.com)

[Question] Snapshot of narratives and frames against regulating AI
Jan_Kulveit · Nov 1, 2023, 4:30 PM · 36 points · 19 comments · 3 min read · LW link

Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk
1a3orn · Nov 2, 2023, 6:20 PM · 193 points · 79 comments · 23 min read · LW link

Robin Hanson AI X-Risk Debate — Highlights and Analysis
Liron · Jul 12, 2024, 9:31 PM · 46 points · 7 comments · 45 min read · LW link · (www.youtube.com)

Altruism and Vitalism Aren’t Fellow Travelers
Arjun Panickssery · Aug 9, 2024, 2:01 AM · 24 points · 2 comments · 3 min read · LW link · (arjunpanickssery.substack.com)

Sapience, understanding, and “AGI”
Seth Herd · Nov 24, 2023, 3:13 PM · 15 points · 3 comments · 6 min read · LW link

Avoiding Selection Bias
the gears to ascension · Oct 4, 2017, 7:10 PM · 20 points · 17 comments · 1 min read · LW link

Updating My LW Commenting Policy
curi · Aug 18, 2020, 4:48 PM · 7 points · 1 comment · 4 min read · LW link

Comment, Don’t Message
jefftk · Nov 18, 2019, 4:00 PM · 30 points · 5 comments · 2 min read · LW link · (www.jefftk.com)

Why I’m Staying On Bloggingheads.tv
Eliezer Yudkowsky · Sep 7, 2009, 8:15 PM · 31 points · 101 comments · 2 min read · LW link

Find yourself a Worthy Opponent: a Chavruta
Raw_Power · Jul 6, 2011, 10:59 AM · 48 points · 74 comments · 3 min read · LW link

Yes, avoiding extinction from AI *is* an urgent priority: a response to Seth Lazar, Jeremy Howard, and Arvind Narayanan.
Soroush Pour · Jun 1, 2023, 1:38 PM · 17 points · 0 comments · 5 min read · LW link · (www.soroushjp.com)

[Question] What’s the best approach to curating a newsfeed to maximize useful contrasting POV?
Ben Goldhaber · Apr 26, 2019, 5:29 PM · 25 points · 3 comments · 1 min read · LW link

What the Haters Hate
Jacob Falkovich · Oct 1, 2018, 8:29 PM · 29 points · 36 comments · 8 min read · LW link

One Website To Rule Them All?
anna_macdonald · Jan 11, 2019, 7:14 PM · 30 points · 23 comments · 10 min read · LW link

...And Say No More Of It
Eliezer Yudkowsky · Feb 9, 2009, 12:15 AM · 43 points · 25 comments · 5 min read · LW link

Collective Apathy and the Internet
Eliezer Yudkowsky · Apr 14, 2009, 12:02 AM · 53 points · 34 comments · 2 min read · LW link

Clickbait might not be destroying our general Intelligence
Donald Hobson · Nov 19, 2018, 12:13 AM · 25 points · 13 comments · 2 min read · LW link

[LINK] Why I’m not on the Rationalist Masterlist
Apprentice · Jan 6, 2014, 12:16 AM · 40 points · 882 comments · 1 min read · LW link

Why Academic Papers Are A Terrible Discussion Forum
alyssavance · Jun 20, 2012, 6:15 PM · 44 points · 53 comments · 6 min read · LW link

Unpopular ideas attract poor advocates: Be charitable
[deleted] · Sep 15, 2014, 7:30 PM · 43 points · 61 comments · 2 min read · LW link

The Paucity of Elites Online
JonahS · May 31, 2013, 1:35 AM · 40 points · 42 comments · 3 min read · LW link

Change Contexts to Improve Arguments
palladias · Jul 8, 2014, 3:51 PM · 42 points · 19 comments · 2 min read · LW link

Has “politics is the mind-killer” been a mind-killer?
SonnieBailey · Mar 17, 2019, 3:05 AM · 31 points · 26 comments · 3 min read · LW link

Less Wrong Should Confront Wrongness Wherever it Appears
jimrandomh · Sep 21, 2010, 1:40 AM · 32 points · 163 comments · 3 min read · LW link

Doing discourse better: Stuff I wish I knew
dynomight · Sep 29, 2020, 2:34 PM · 27 points · 11 comments · 1 min read · LW link · (dyno-might.github.io)

A response to the Richards et al.’s “The Illusion of AI’s Existential Risk”
Harrison Fell · Jul 26, 2023, 5:34 PM · 1 point · 0 comments · 10 min read · LW link

Memetic Judo #1: On Doomsday Prophets v.3
Max TK · Aug 18, 2023, 12:14 AM · 25 points · 17 comments · 3 min read · LW link

Memetic Judo #2: Incorporal Switches and Levers Compendium
Max TK · Aug 14, 2023, 4:53 PM · 19 points · 6 comments · 17 min read · LW link

In Defense of Tone Arguments
OrphanWilde · Jul 19, 2012, 7:48 PM · 32 points · 175 comments · 2 min read · LW link

Memetic Judo #3: The Intelligence of Stochastic Parrots v.2
Max TK · Aug 20, 2023, 3:18 PM · 8 points · 33 comments · 6 min read · LW link

[Question] Which headlines and narratives are mostly clickbait?
Pontor · Oct 25, 2020, 1:19 AM · 5 points · 5 comments · 2 min read · LW link

Politics is work and work needs breaks
KatjaGrace · Nov 4, 2019, 5:10 PM · 19 points · 0 comments · 2 min read · LW link · (meteuphoric.com)

When discussing AI doom barriers propose specific plausible scenarios
anithite · Aug 18, 2023, 4:06 AM · 5 points · 0 comments · 3 min read · LW link

Category Qualifications (w/ exercises)
Logan Riggs · Sep 15, 2019, 4:28 PM · 23 points · 22 comments · 5 min read · LW link

On Debates with Trolls
prase · Apr 12, 2011, 8:46 AM · 31 points · 247 comments · 3 min read · LW link

Careless talk on US-China AI competition? (and criticism of CAIS coverage)
Oliver Sourbut · Sep 20, 2023, 12:46 PM · 16 points · 3 comments · 10 min read · LW link · 3 reviews · (www.oliversourbut.net)

A social norm against unjustified opinions?
Kaj_Sotala · May 29, 2009, 11:25 AM · 16 points · 161 comments · 1 min read · LW link

Reasons for someone to “ignore” you
Wei Dai · Oct 8, 2012, 7:50 PM · 37 points · 57 comments · 3 min read · LW link

How Strong is Our Connection to Truth?
anorangicc · Feb 17, 2021, 12:10 AM · 1 point · 5 comments · 3 min read · LW link

“Model UN Solutions”
Arjun Panickssery · Dec 8, 2023, 11:06 PM · 36 points · 5 comments · 1 min read · LW link · (open.substack.com)

Rediscovery, the Mind’s Curare
Erich_Grunewald · Apr 10, 2021, 7:42 AM · 3 points · 1 comment · 3 min read · LW link · (www.erichgrunewald.com)

Arguing from a Gap of Perspective
ideenrun · May 1, 2021, 10:42 PM · 6 points · 1 comment · 19 min read · LW link

Curated conversations with brilliant rationalists
spencerg · May 28, 2021, 2:23 PM · 154 points · 18 comments · 6 min read · LW link

For Better Commenting, Take an Oath of Reply.
DirectedEvolution · May 31, 2021, 6:01 AM · 42 points · 17 comments · 2 min read · LW link

Request for comment on a novel reference work of understanding
ender · Aug 12, 2021, 12:06 AM · 3 points · 0 comments · 9 min read · LW link

[Question] Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe
Yitz · Apr 10, 2022, 9:02 PM · 92 points · 141 comments · 2 min read · LW link

[Question] Convince me that humanity *isn’t* doomed by AGI
Yitz · Apr 15, 2022, 5:26 PM · 61 points · 50 comments · 1 min read · LW link

Schism Begets Schism
Davis_Kingsley · Jul 10, 2019, 3:09 AM · 24 points · 25 comments · 3 min read · LW link

90% of anything should be bad (& the precision-recall tradeoff)
cartografie · Sep 8, 2022, 1:20 AM · 33 points · 22 comments · 6 min read · LW link

Responding to ‘Beyond Hyperanthropomorphism’
ukc10014 · Sep 14, 2022, 8:37 PM · 9 points · 0 comments · 16 min read · LW link

Idea selection
krbouchard · Mar 1, 2021, 2:07 PM · 1 point · 0 comments · 2 min read · LW link

Defense Against The Dark Arts: An Introduction
Lyrongolem · Dec 25, 2023, 6:36 AM · 24 points · 36 comments · 20 min read · LW link

[Question] Terminology: <something>-ware for ML?
Oliver Sourbut · Jan 3, 2024, 11:42 AM · 17 points · 27 comments · 1 min read · LW link

[Question] What is the point of 2v2 debates?
Axel Ahlqvist · Aug 20, 2024, 9:59 PM · 2 points · 1 comment · 1 min read · LW link

Ten Modes of Culture War Discourse
jchan · Jan 31, 2024, 1:58 PM · 54 points · 15 comments · 15 min read · LW link

Analogy Bank for AI Safety
utilistrutil · Jan 29, 2024, 2:35 AM · 23 points · 0 comments · 8 min read · LW link

[Question] What’s Your Best AI Safety “Quip”?
False Name · Mar 26, 2024, 3:35 PM · −2 points · 0 comments · 1 min read · LW link

The Natural Selection of Bad Vibes (Part 1)
Kevin Dorst · May 12, 2024, 8:28 AM · 13 points · 3 comments · 7 min read · LW link · (kevindorst.substack.com)

AI x Human Flourishing: Introducing the Cosmos Institute
Brendan McCord · Sep 5, 2024, 6:23 PM · 14 points · 5 comments · 6 min read · LW link · (cosmosinstitute.substack.com)

Checking public figures on whether they “answered the question”: quick analysis from Harris/Trump debate, and a proposal
david reinstein · Sep 11, 2024, 8:25 PM · 7 points · 4 comments · 1 min read · LW link · (open.substack.com)

Critique of ‘Many People Fear A.I. They Shouldn’t’ by David Brooks.
Axel Ahlqvist · Aug 15, 2024, 6:38 PM · 12 points · 8 comments · 3 min read · LW link

When to join a respectability cascade
B Jacobs · Sep 24, 2024, 7:54 AM · 10 points · 1 comment · 2 min read · LW link · (bobjacobs.substack.com)

[Question] shouldn’t we try to get media attention?
KvmanThinking · Mar 4, 2025, 1:39 AM · 6 points · 1 comment · 1 min read · LW link

To know or not to know
arisAlexis · Jan 27, 2025, 1:17 PM · 0 points · 3 comments · 6 min read · LW link

Elite Coordination via the Consensus of Power
Richard_Ngo · Mar 19, 2025, 6:56 AM · 91 points · 15 comments · 12 min read · LW link · (www.mindthefuture.info)

Capabilities Denial: The Danger of Underestimating AI
Christopher King · Mar 21, 2023, 1:24 AM · 6 points · 5 comments · 3 min read · LW link

Missing forecasting tools: from catalogs to a new kind of prediction market
MichaelLatowicki · Mar 29, 2023, 9:55 AM · 14 points · 3 comments · 5 min read · LW link

AI scares and changing public beliefs
Seth Herd · Apr 6, 2023, 6:51 PM · 46 points · 21 comments · 6 min read · LW link

[SEE NEW EDITS] No, *You* Need to Write Clearer
Nicholas / Heather Kross · Apr 29, 2023, 5:04 AM · 262 points · 65 comments · 5 min read · LW link · (www.thinkingmuchbetter.com)

Cis fragility
[deactivated] · Nov 30, 2023, 4:14 AM · −51 points · 9 comments · 3 min read · LW link

Defecting by Accident—A Flaw Common to Analytical People
lionhearted (Sebastian Marshall) · Dec 1, 2010, 8:25 AM · 127 points · 433 comments · 15 min read · LW link

[Question] Why is so much discussion happening in private Google Docs?
Wei Dai · Jan 12, 2019, 2:19 AM · 101 points · 22 comments · 1 min read · LW link

Disincentives for participating on LW/AF
Wei Dai · May 10, 2019, 7:46 PM · 86 points · 45 comments · 2 min read · LW link

Models of moderation
habryka · Feb 2, 2018, 11:29 PM · 30 points · 33 comments · 7 min read · LW link

Modesty and diversity: a concrete suggestion
[deleted] · Nov 8, 2017, 8:42 PM · 30 points · 6 comments · 1 min read · LW link

LW Update 2018-12-06 – Table of Contents and Q&A
Raemon · Dec 8, 2018, 12:47 AM · 55 points · 28 comments · 4 min read · LW link

Isolating Content can Create Affordances
Davis_Kingsley · Aug 23, 2018, 8:28 AM · 49 points · 12 comments · 1 min read · LW link

Moderator’s Dilemma: The Risks of Partial Intervention
Chris_Leong · Sep 29, 2017, 1:47 AM · 33 points · 17 comments · 4 min read · LW link

You’re Calling *Who* A Cult Leader?
Eliezer Yudkowsky · Mar 22, 2009, 6:57 AM · 60 points · 121 comments · 5 min read · LW link

“Politics is the mind-killer” is the mind-killer
thomblake · Jan 26, 2012, 3:55 PM · 58 points · 99 comments · 1 min read · LW link

Politics is hard mode
Rob Bensinger · Jul 21, 2014, 10:14 PM · 59 points · 109 comments · 6 min read · LW link

When None Dare Urge Restraint, pt. 2
Jay_Schweikert · May 30, 2012, 3:28 PM · 84 points · 92 comments · 3 min read · LW link

Breaking the vicious cycle
XiXiDu · Nov 23, 2014, 6:25 PM · 66 points · 131 comments · 2 min read · LW link

[Question] Why Don’t Creators Switch to their Own Platforms?
Jacob Falkovich · Dec 23, 2018, 4:46 AM · 42 points · 17 comments · 1 min read · LW link

Taking “correlation does not imply causation” back from the internet
sixes_and_sevens · Oct 3, 2012, 12:18 PM · 62 points · 70 comments · 1 min read · LW link

Political topics attract participants inclined to use the norms of mainstream political debate, risking a tipping point to lower quality discussion
emr · Mar 26, 2015, 12:14 AM · 67 points · 71 comments · 1 min read · LW link

Don’t Be Afraid of Asking Personally Important Questions of Less Wrong
Evan_Gaensbauer · Mar 17, 2015, 6:54 AM · 80 points · 47 comments · 3 min read · LW link

Only You Can Prevent Your Mind From Getting Killed By Politics
ChrisHallquist · Oct 26, 2013, 1:59 PM · 61 points · 144 comments · 5 min read · LW link

Littlewood’s Law and the Global Media
gwern · Jan 12, 2019, 5:46 PM · 37 points · 3 comments · 1 min read · LW link · (www.gwern.net)

Defense against discourse
Benquo · Oct 17, 2017, 9:10 AM · 38 points · 15 comments · 6 min read · LW link · (benjaminrosshoffman.com)

Offense versus harm minimization
Scott Alexander · Apr 16, 2011, 1:06 AM · 87 points · 429 comments · 9 min read · LW link

False Friends and Tone Policing
palladias · Jun 18, 2014, 6:20 PM · 71 points · 49 comments · 3 min read · LW link

On memetic weapons
ioannes · Sep 1, 2018, 3:25 AM · 42 points · 28 comments · 5 min read · LW link

Free Speech as Legal Right vs. Ethical Value
ozymandias · Nov 28, 2017, 4:49 PM · 14 points · 8 comments · 2 min read · LW link

PCAST Working Group on Generative AI Invites Public Input
Christopher King · May 13, 2023, 10:49 PM · 7 points · 0 comments · 1 min read · LW link · (terrytao.wordpress.com)

[Question] What new technology, for what institutions?
bhauth · May 14, 2023, 5:33 PM · 29 points · 6 comments · 3 min read · LW link

Speaking up publicly is heroic
jefftk · Nov 2, 2019, 12:00 PM · 44 points · 2 comments · 1 min read · LW link · (www.jefftk.com)

Niceness Stealth-Bombing
things_which_are_not_on_fire · Jan 8, 2018, 10:16 PM · 25 points · 4 comments · 3 min read · LW link

The Case for a Bigger Audience
John_Maxwell · Feb 9, 2019, 7:22 AM · 68 points · 58 comments · 2 min read · LW link

Dialogue on Appeals to Consequences
jessicata · Jul 18, 2019, 2:34 AM · 33 points · 87 comments · 7 min read · LW link · (unstableontology.com)

Appeal to Consequence, Value Tensions, And Robust Organizations
Matt Goldenberg · Jul 19, 2019, 10:09 PM · 45 points · 90 comments · 5 min read · LW link

[Question] What projects and efforts are there to promote AI safety research?
Christopher King · May 24, 2023, 12:33 AM · 4 points · 0 comments · 1 min read · LW link

Easy wins aren’t news
PhilGoetz · Feb 19, 2015, 7:38 PM · 60 points · 19 comments · 1 min read · LW link

Wikipedia pageviews: still in decline
VipulNaik · Sep 26, 2017, 11:03 PM · 24 points · 20 comments · 3 min read · LW link

Don’t Apply the Principle of Charity to Yourself
UnclGhost · Nov 19, 2011, 7:26 PM · 82 points · 23 comments · 2 min read · LW link

Drive-By Low-Effort Criticism
lionhearted (Sebastian Marshall) · Jul 31, 2019, 11:51 AM · 32 points · 61 comments · 2 min read · LW link

Making Fun of Things is Easy
katydee · Sep 27, 2013, 3:10 AM · 48 points · 76 comments · 1 min read · LW link

Rationally Ending Discussions
curi · Aug 12, 2020, 8:34 PM · −7 points · 27 comments · 14 min read · LW link

Strengthening the foundations under the Overton Window without moving it
KatjaGrace · Mar 14, 2018, 2:20 AM · 12 points · 7 comments · 3 min read · LW link · (meteuphoric.wordpress.com)