
DragonGod

Karma: 2,479

Theoretical Computer Science MSc student at the University of [Redacted] in the United Kingdom.

I’m an aspiring alignment theorist; my research vibes are descriptive formal theories of intelligent systems (and their safety properties), with a bias towards constructive theories.

I think it’s important that our theories of intelligent systems remain rooted in the characteristics of real-world intelligent systems; we cannot develop adequate theory from the null string as input.

Open & Welcome Thread—January 2023

DragonGod · Jan 7, 2023, 11:16 AM
15 points
37 comments · 1 min read · LW link

[Question] [Discussion] How Broad is the Human Cognitive Spectrum?

DragonGod · Jan 7, 2023, 12:56 AM
29 points
51 comments · 2 min read · LW link

The Limit of Language Models

DragonGod · Jan 6, 2023, 11:53 PM
44 points
26 comments · 4 min read · LW link

[Question] Default Sort for Shortforms is Very Bad; How Do I Change It?

DragonGod · Jan 2, 2023, 9:50 PM
15 points
0 comments · 1 min read · LW link

[Question] Why The Focus on Expected Utility Maximisers?

DragonGod · Dec 27, 2022, 3:49 PM
118 points
84 comments · 3 min read · LW link

Against Agents as an Approach to Aligned Transformative AI

DragonGod · Dec 27, 2022, 12:47 AM
12 points
9 comments · 2 min read · LW link

Contra Steiner on Too Many Natural Abstractions

DragonGod · Dec 24, 2022, 5:42 PM
10 points
6 comments · 1 min read · LW link

[Question] [DISC] Are Values Robust?

DragonGod · Dec 21, 2022, 1:00 AM
12 points
9 comments · 2 min read · LW link

[Incomplete] What is Computation Anyway?

DragonGod · Dec 14, 2022, 4:17 PM
16 points
1 comment · 13 min read · LW link
(arxiv.org)

Why I’m Sceptical of Foom

DragonGod · Dec 8, 2022, 10:01 AM
20 points
36 comments · 3 min read · LW link

“Far Coordination”

DragonGod · Nov 23, 2022, 5:14 PM
6 points
17 comments · 9 min read · LW link

X-risk Mitigation Does Actually Require Longtermism

DragonGod · Nov 14, 2022, 12:54 PM
6 points
1 comment · 1 min read · LW link

In Defence of Temporal Discounting in Longtermist Ethics

DragonGod · Nov 13, 2022, 9:54 PM
25 points
4 comments · 1 min read · LW link

[Question] Should I Pursue a PhD?

DragonGod · Nov 6, 2022, 10:58 AM
8 points
8 comments · 2 min read · LW link

[Sketch] Validity Criterion for Logical Counterfactuals

DragonGod · Oct 11, 2022, 1:31 PM
6 points
0 comments · 6 min read · LW link

“Free Will” in a Computational Universe

DragonGod · Sep 22, 2022, 9:25 PM
5 points
6 comments · 14 min read · LW link

Initial Thoughts on Dissolving “Couldness”

DragonGod · Sep 22, 2022, 9:23 PM
6 points
1 comment · 3 min read · LW link

[Question] Are Human Brains Universal?

DragonGod · Sep 15, 2022, 3:15 PM
16 points
28 comments · 5 min read · LW link

[Question] Why Do People Think Humans Are Stupid?

DragonGod · Sep 14, 2022, 1:55 PM
22 points
41 comments · 3 min read · LW link

[Question] Are Speed Superintelligences Feasible for Modern ML Techniques?

DragonGod · Sep 14, 2022, 12:59 PM
9 points
7 comments · 1 min read · LW link