human intelligence may be alignment-limited

bhauth · 15 Jun 2023 22:32 UTC
16 points
3 comments · 2 min read · LW link

Developing a technology with safety in mind: Lessons from the Wright Brothers

jasoncrawford · 15 Jun 2023 21:08 UTC
30 points
4 comments · 3 min read · LW link
(rootsofprogress.org)

AXRP Episode 22 - Shard Theory with Quintin Pope

DanielFilan · 15 Jun 2023 19:00 UTC
52 points
11 comments · 93 min read · LW link

Can we accelerate human progress? Moderated Conversation in NYC

Jannik Schg · 15 Jun 2023 17:33 UTC
1 point
0 comments · 1 min read · LW link

Group Prioritarianism: Why AI Should Not Replace Humanity [draft]

fsh · 15 Jun 2023 17:33 UTC
8 points
0 comments · 25 min read · LW link

Press the happiness button!

Spiarrow · 15 Jun 2023 17:30 UTC
5 points
3 comments · 2 min read · LW link

AI #16: AI in the UK

Zvi · 15 Jun 2023 13:20 UTC
46 points
20 comments · 54 min read · LW link
(thezvi.wordpress.com)

I still think it’s very unlikely we’re observing alien aircraft

dynomight · 15 Jun 2023 13:01 UTC
180 points
70 comments · 5 min read · LW link
(dynomight.net)

Aligned Objectives Prize Competition

Prometheus · 15 Jun 2023 12:42 UTC
8 points
0 comments · 2 min read · LW link
(app.impactmarkets.io)

A more effective Elevator Pitch for AI risk

Iknownothing · 15 Jun 2023 12:39 UTC
2 points
0 comments · 1 min read · LW link

Why “AI alignment” would better be renamed into “Artificial Intention research”

chaosmage · 15 Jun 2023 10:32 UTC
29 points
12 comments · 2 min read · LW link

Matt Taibbi’s COVID reporting

ChristianKl · 15 Jun 2023 9:49 UTC
21 points
34 comments · 1 min read · LW link
(www.racket.news)

Looking Back On Ads

jefftk · 15 Jun 2023 2:10 UTC
30 points
11 comments · 3 min read · LW link
(www.jefftk.com)

Why libertarians are advocating for regulation on AI

RobertM · 14 Jun 2023 20:59 UTC
35 points
13 comments · 4 min read · LW link

Instrumental Convergence? [Draft]

J. Dmitri Gallow · 14 Jun 2023 20:21 UTC
48 points
20 comments · 33 min read · LW link

On the Apple Vision Pro

Zvi · 14 Jun 2023 17:50 UTC
44 points
17 comments · 11 min read · LW link
(thezvi.wordpress.com)

Progress links and tweets, 2023-06-14

jasoncrawford · 14 Jun 2023 16:30 UTC
19 points
1 comment · 2 min read · LW link
(rootsofprogress.org)

Philosophical Cyborg (Part 1)

14 Jun 2023 16:20 UTC
31 points
4 comments · 13 min read · LW link

Is the confirmation bias really a bias?

Lionel · 14 Jun 2023 14:06 UTC
−2 points
6 comments · 1 min read · LW link
(lionelpage.substack.com)

NA East ACX & Rationality Meetup Organizers Retreat

Willa · 14 Jun 2023 13:39 UTC
8 points
0 comments · 1 min read · LW link

Lightcone Infrastructure/LessWrong is looking for funding

habryka · 14 Jun 2023 4:45 UTC
205 points
39 comments · 1 min read · LW link

Anthropic | Charting a Path to AI Accountability

Gabe M · 14 Jun 2023 4:43 UTC
34 points
2 comments · 3 min read · LW link
(www.anthropic.com)

Demystifying Born’s rule

Christopher King · 14 Jun 2023 3:16 UTC
5 points
26 comments · 3 min read · LW link

My guess for why I was wrong about US housing

romeostevensit · 14 Jun 2023 0:37 UTC
110 points
13 comments · 1 min read · LW link

Notes from the Bank of England Talk by Giovanni Dosi on Agent-based Modeling for Macroeconomics

PixelatedPenguin · 13 Jun 2023 22:25 UTC
3 points
0 comments · 1 min read · LW link

Introducing The Long Game Project: Improving Decision-Making Through Tabletop Exercises and Simulated Experience

Dan Stuart · 13 Jun 2023 21:45 UTC
4 points
0 comments · 4 min read · LW link

Intelligence allocation from a Mean Field Game Theory perspective

Marv K · 13 Jun 2023 19:52 UTC
13 points
2 comments · 2 min read · LW link

Multiple stages of fallacy—justifications and non-justifications for the multiple stage fallacy

AronT · 13 Jun 2023 17:37 UTC
33 points
2 comments · 5 min read · LW link
(coordinationishard.substack.com)

TryContra Events

jefftk · 13 Jun 2023 17:30 UTC
2 points
0 comments · 1 min read · LW link
(www.jefftk.com)

MetaAI: less is less for alignment.

Cleo Nardo · 13 Jun 2023 14:08 UTC
68 points
17 comments · 5 min read · LW link

The Dial of Progress

Zvi · 13 Jun 2023 13:40 UTC
160 points
119 comments · 11 min read · LW link
(thezvi.wordpress.com)

Virtual AI Safety Unconference (VAISU)

13 Jun 2023 9:56 UTC
15 points
0 comments · 1 min read · LW link

Seattle ACX Meetup—Summer 2023

Optimization Process · 13 Jun 2023 5:14 UTC
5 points
0 comments · 1 min read · LW link

TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI

Andrew_Critch · 13 Jun 2023 5:04 UTC
64 points
1 comment · 1 min read · LW link

<$750k grants for General Purpose AI Assurance/Safety Research

Phosphorous · 13 Jun 2023 4:45 UTC
37 points
1 comment · 1 min read · LW link
(cset.georgetown.edu)

UFO Betting: Put Up or Shut Up

RatsWrongAboutUAP · 13 Jun 2023 4:05 UTC
250 points
215 comments · 2 min read · LW link

A bunch of videos in comments

the gears to ascension · 12 Jun 2023 22:31 UTC
10 points
62 comments · 1 min read · LW link

[Linkpost] The neuroconnectionist research programme

Bogdan Ionut Cirstea · 12 Jun 2023 21:58 UTC
5 points
1 comment · 1 min read · LW link

Contingency: A Conceptual Tool from Evolutionary Biology for Alignment

clem_acs · 12 Jun 2023 20:54 UTC
57 points
2 comments · 14 min read · LW link
(acsresearch.org)

Book Review: Autoheterosexuality

tailcalled · 12 Jun 2023 20:11 UTC
27 points
9 comments · 24 min read · LW link

Aura as a proprioceptive glitch

pchvykov · 12 Jun 2023 19:30 UTC
37 points
4 comments · 4 min read · LW link

Aligning Mathematical Notions of Infinity with Human Intuition

London L. · 12 Jun 2023 19:19 UTC
1 point
10 comments · 9 min read · LW link
(medium.com)

ARC is hiring theoretical researchers

12 Jun 2023 18:50 UTC
126 points
12 comments · 4 min read · LW link
(www.alignment.org)

Introduction to Towards Causal Foundations of Safe AGI

12 Jun 2023 17:55 UTC
67 points
6 comments · 4 min read · LW link

Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted

David Chee · 12 Jun 2023 15:54 UTC
71 points
15 comments · 12 min read · LW link

Explicitness

TsviBT · 12 Jun 2023 15:05 UTC
29 points
0 comments · 15 min read · LW link

If you are too stressed, walk away from the front lines

Neil · 12 Jun 2023 14:26 UTC
44 points
14 comments · 5 min read · LW link

UK PM: $125M for AI safety

Hauke Hillebrandt · 12 Jun 2023 12:33 UTC
31 points
11 comments · 1 min read · LW link
(twitter.com)

[Question] Could induced and stabilized hypomania be a desirable mental state?

MvB · 12 Jun 2023 12:13 UTC
8 points
22 comments · 2 min read · LW link

Non-loss of control AGI-related catastrophes are out of control too

12 Jun 2023 12:01 UTC
0 points
3 comments · 24 min read · LW link