
avturchin

Karma: 3,914

Sideloading: creating a model of a person via LLM with very large prompt

22 Nov 2024 16:41 UTC
12 points
0 comments, 35 min read, LW link

If I care about measure, choices have additional burden (+AI generated LW-comments)

avturchin, 15 Nov 2024 10:27 UTC
5 points
11 comments, 2 min read, LW link

Quantum Immortality: A Perspective if AI Doomers are Probably Right

7 Nov 2024 16:06 UTC
8 points
44 comments, 14 min read, LW link

Bitter lessons about lucid dreaming

avturchin, 16 Oct 2024 21:27 UTC
76 points
62 comments, 2 min read, LW link

Three main arguments that AI will save humans and one meta-argument

avturchin, 2 Oct 2024 11:39 UTC
8 points
8 comments, 2 min read, LW link

Debates how to defeat aging: Aubrey de Grey vs. Peter Fedichev.

avturchin, 27 May 2024 10:25 UTC
17 points
0 comments, 1 min read, LW link

Magic by forgetting

avturchin, 24 Apr 2024 14:32 UTC
15 points
30 comments, 4 min read, LW link

Strengthening the Argument for Intrinsic AI Safety: The S-Curves Perspective

avturchin, 7 Aug 2023 13:13 UTC
8 points
0 comments, 12 min read, LW link

The Sharp Right Turn: sudden deceptive alignment as a convergent goal

avturchin, 6 Jun 2023 9:59 UTC
38 points
5 comments, 1 min read, LW link

Another formalization attempt: Central Argument That AGI Presents a Global Catastrophic Risk

avturchin, 12 May 2023 13:22 UTC
16 points
4 comments, 2 min read, LW link

Running many AI variants to find correct goal generalization

avturchin, 4 Apr 2023 14:16 UTC
20 points
3 comments, 1 min read, LW link

AI-kills-everyone scenarios require robotic infrastructure, but not necessarily nanotech

avturchin, 3 Apr 2023 12:45 UTC
53 points
47 comments, 4 min read, LW link

The AI Shutdown Problem Solution through Commitment to Archiving and Periodic Restoration

avturchin, 30 Mar 2023 13:17 UTC
16 points
7 comments, 1 min read, LW link

Long-term memory for LLM via self-replicating prompt

avturchin, 10 Mar 2023 10:28 UTC
20 points
3 comments, 2 min read, LW link

Logical Probability of Goldbach’s Conjecture: Provable Rule or Coincidence?

avturchin, 29 Dec 2022 13:37 UTC
5 points
15 comments, 8 min read, LW link

A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming

avturchin, 11 Sep 2022 10:25 UTC
33 points
23 comments, 52 min read, LW link

The table of different sampling assumptions in anthropics

avturchin, 29 Jun 2022 10:41 UTC
39 points
5 comments, 12 min read, LW link

Another plausible scenario of AI risk: AI builds military infrastructure while collaborating with humans, defects later.

avturchin, 10 Jun 2022 17:24 UTC
10 points
2 comments, 1 min read, LW link

Untypical SIA

avturchin, 8 Jun 2022 14:23 UTC
5 points
3 comments, 2 min read, LW link

Russian x-risks newsletter May 2022 + short history of “methodologists”

avturchin, 5 Jun 2022 11:50 UTC
23 points
4 comments, 2 min read, LW link