High level discourse structure in ChatGPT: Part 2 [Quasi-symbolic?]

Bill Benzon · Dec 10, 2022, 10:26 PM
7 points
0 comments · 6 min read · LW link

Poll Results on AGI

Niclas Kupper · Dec 10, 2022, 9:25 PM
18 points
0 comments · 2 min read · LW link

Reflecting on the 2022 Guild of the Rose Workshops

moridinamael · Dec 10, 2022, 9:21 PM
26 points
7 comments · 8 min read · LW link

[Question] Reversing a quantum simulation on the planetary scale

Mythopoeist · Dec 10, 2022, 8:26 PM
2 points
3 comments · 1 min read · LW link

ACX Zurich December Meetup

MB · Dec 10, 2022, 7:23 PM
1 point
0 comments · 1 min read · LW link

[ASoT] Natural abstractions and AlphaZero

Ulisse Mini · Dec 10, 2022, 5:53 PM
33 points
1 comment · 1 min read · LW link
(arxiv.org)

[Question] How promising are legal avenues to restrict AI training data?

thehalliard · Dec 10, 2022, 4:31 PM
9 points
2 comments · 1 min read · LW link

Inspiration as a Scarce Resource

zenbu zenbu zenbu zenbu · Dec 10, 2022, 3:23 PM
7 points
0 comments · 4 min read · LW link
(inflorescence.substack.com)

Will Manifold Markets/Metaculus have built-in support for reflective latent variables by 2025?

tailcalled · Dec 10, 2022, 1:55 PM
34 points
0 comments · 1 min read · LW link

My thoughts on OpenAI’s Alignment plan

Donald Hobson · Dec 10, 2022, 10:35 AM
25 points
1 comment · 6 min read · LW link

[Question] How would you improve ChatGPT’s filtering?

Noah Scales · Dec 10, 2022, 8:05 AM
9 points
6 comments · 1 min read · LW link

[Question] A thought experiment

sisyphus · Dec 10, 2022, 5:23 AM
3 points
12 comments · 1 min read · LW link

patio11’s “Observations from an EA-adjacent (?) charitable effort”

RobertM · Dec 10, 2022, 12:27 AM
43 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

A dynamical systems primer for entropy and optimization

Alex_Altair · Dec 10, 2022, 12:13 AM
45 points
3 comments · 7 min read · LW link

[Linkpost] The Story Of VaccinateCA

hath · Dec 9, 2022, 11:54 PM
103 points
4 comments · 10 min read · LW link
(www.worksinprogress.co)

Prosaic misalignment from the Solomonoff Predictor

Cleo Nardo · Dec 9, 2022, 5:53 PM
42 points
3 comments · 5 min read · LW link

Take 8: Queer the inner/outer alignment dichotomy.

Charlie Steiner · Dec 9, 2022, 5:46 PM
31 points
2 comments · 2 min read · LW link

[Question] Does a LLM have a utility function?

Dagon · Dec 9, 2022, 5:19 PM
17 points
11 comments · 1 min read · LW link

Monthly Roundup #1

Zvi · Dec 9, 2022, 5:10 PM
31 points
6 comments · 21 min read · LW link
(thezvi.wordpress.com)

Working towards AI alignment is better

Johannes C. Mayer · Dec 9, 2022, 3:39 PM
8 points
2 comments · 2 min read · LW link

You can still fetch the coffee today if you’re dead tomorrow

davidad · Dec 9, 2022, 2:06 PM
96 points
19 comments · 5 min read · LW link

ChatGPT’s Misalignment Isn’t What You Think

stavros · Dec 9, 2022, 11:11 AM
3 points
12 comments · 1 min read · LW link

ML Safety at NeurIPS & Paradigmatic AI Safety? MLAISU W49

Dec 9, 2022, 10:38 AM
19 points
0 comments · 4 min read · LW link
(newsletter.apartresearch.com)

[Question] What are your thoughts on the future of AI-assisted software development?

RomanHauksson · Dec 9, 2022, 10:04 AM
4 points
4 comments · 1 min read · LW link

Fear mitigated the nuclear threat, can it do the same to AGI risks?

Igor Ivanov · Dec 9, 2022, 10:04 AM
6 points
8 comments · 5 min read · LW link

Setting the Zero Point

Duncan Sabien (Deactivated) · Dec 9, 2022, 6:06 AM
91 points
43 comments · 20 min read · LW link · 1 review

Systems of Survival

Vaniver · Dec 9, 2022, 5:13 AM
63 points
5 comments · 5 min read · LW link

[Question] Do You Have an Internal Monologue?

belkarx · Dec 9, 2022, 3:04 AM
23 points
7 comments · 1 min read · LW link

[Question] How is the “sharp left turn” defined?

Chris_Leong · Dec 9, 2022, 12:04 AM
14 points
4 comments · 1 min read · LW link

Linkpost for a generalist algorithmic learner: capable of carrying out sorting, shortest paths, string matching, convex hull finding in one network

lovetheusers · Dec 9, 2022, 12:02 AM
7 points
1 comment · 1 min read · LW link
(twitter.com)

[Question] Where’s the economic incentive for wokism coming from?

Valentine · Dec 8, 2022, 11:28 PM
12 points
105 comments · 1 min read · LW link

I Believe we are in a Hardware Overhang

nem · Dec 8, 2022, 11:18 PM
8 points
0 comments · 1 min read · LW link

Of pumpkins, the Falcon Heavy, and Groucho Marx: High-Level discourse structure in ChatGPT

Bill Benzon · Dec 8, 2022, 10:25 PM
2 points
0 comments · 8 min read · LW link

How Many Lives Does X-Risk Work Save From Nonexistence On Average?

Jordan Arel · Dec 8, 2022, 9:57 PM
4 points
5 comments · 14 min read · LW link

AI Safety Seems Hard to Measure

HoldenKarnofsky · Dec 8, 2022, 7:50 PM
71 points
6 comments · 14 min read · LW link
(www.cold-takes.com)

Playing shell games with definitions

weverka · Dec 8, 2022, 7:35 PM
9 points
3 comments · 1 min read · LW link

Notes on OpenAI’s alignment plan

Alex Flint · Dec 8, 2022, 7:13 PM
40 points
5 comments · 7 min read · LW link

Relevant to natural abstractions: Euclidean Symmetry Equivariant Machine Learning—Overview, Applications, and Open Questions

the gears to ascension · Dec 8, 2022, 6:01 PM
8 points
0 comments · 1 min read · LW link
(youtu.be)

I’ve started publishing the novel I wrote to promote EA

Timothy Underwood · Dec 8, 2022, 5:30 PM
10 points
3 comments · 1 min read · LW link

Neural networks biased towards geometrically simple functions?

DavidHolmes · Dec 8, 2022, 4:16 PM
16 points
2 comments · 3 min read · LW link

If Wentworth is right about natural abstractions, it would be bad for alignment

Wuschel Schulz · Dec 8, 2022, 3:19 PM
29 points
5 comments · 4 min read · LW link

Covid 12/8/22: Another Winter Wave

Zvi · Dec 8, 2022, 2:40 PM
23 points
8 comments · 11 min read · LW link
(thezvi.wordpress.com)

Why I’m Sceptical of Foom

DragonGod · Dec 8, 2022, 10:01 AM
20 points
36 comments · 3 min read · LW link

Take 7: You should talk about “the human’s utility function” less.

Charlie Steiner · Dec 8, 2022, 8:14 AM
50 points
22 comments · 2 min read · LW link

Machine Learning Consent

jefftk · Dec 8, 2022, 3:50 AM
38 points
14 comments · 3 min read · LW link
(www.jefftk.com)

Riffing on the agent type

Quinn · Dec 8, 2022, 12:19 AM
21 points
3 comments · 4 min read · LW link

[Question] Looking for ideas of public assets (stocks, funds, ETFs) that I can invest in to have a chance at profiting from the mass adoption and commercialization of AI technology

Annapurna · Dec 7, 2022, 10:35 PM
15 points
9 comments · 1 min read · LW link

A Fallibilist Wordview

Toni MUENDEL · Dec 7, 2022, 8:59 PM
−13 points
2 comments · 13 min read · LW link

Thoughts on AGI organizations and capabilities work

Dec 7, 2022, 7:46 PM
102 points
17 comments · 5 min read · LW link

How to Think About Climate Models and How to Improve Them

clans · Dec 7, 2022, 7:37 PM
7 points
0 comments · 2 min read · LW link
(locationtbd.home.blog)