
the gears to ascension

Karma: 5,235

I want literally every human to get to go to space often and come back to a clean and cozy world. This currently seems unlikely. Let’s change that.

Please critique eagerly. I try to accept feedback (Crocker's rules) but fail at times; I aim for emotive friendliness but sometimes miss. I welcome constructive criticism, even if ungentle, and I'll try to reciprocate kindly; more communication between researchers is needed anyhow. I can be rather passionate, so let me know if I missed a spot being kind while passionate.

:: The all of disease is as yet unended. It has never once been fully ended before. ::

.… We can heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ….

.:. To do so, we must know this will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.

:.. make all safe faster: end bit rot, forget no non-totalizing pattern’s soul. ..:

I have not signed any contracts that I can't mention exist (last updated Dec 29 2024). I am not currently under any contractual NDAs about AI, though I have a few old ones from pre-AI software jobs. However, I would generally prefer that people publicly share fewer ideas about how to do anything useful with current AI (via either more weak alignment or more capability), unless the insight reliably produces enough clarity on how to solve the meta-problem of inter-being misalignment to offset the damage of increasing the competitiveness of either AI-led or human-led orgs; this certainly applies to me as well. I am not prohibited from criticizing any organization, and I'd encourage people not to sign contracts that prevent sharing criticism. I suggest others also add notices like this to their bios; I finally got around to adding one to mine thanks to the one in ErickBall's bio.

Virtually Rational—VRChat Meetup

28 Jan 2024 5:52 UTC
25 points
3 comments · 1 min read · LW link

Global LessWrong/AC10 Meetup on VRChat

24 Jan 2024 5:44 UTC
15 points
2 comments · 1 min read · LW link

Found Paper: “FDT in an evolutionary environment”

the gears to ascension · 27 Nov 2023 5:27 UTC
30 points
47 comments · 1 min read · LW link
(arxiv.org)

“Benevolent [ie, Ruler] AI is a bad idea” and a suggested alternative

the gears to ascension · 19 Nov 2023 20:22 UTC
22 points
11 comments · 1 min read · LW link
(www.palladiummag.com)

the gears to ascension’s Shortform

the gears to ascension · 14 Aug 2023 15:35 UTC
6 points
274 comments · 1 min read · LW link

A bunch of videos in comments

the gears to ascension · 12 Jun 2023 22:31 UTC
10 points
62 comments · 1 min read · LW link

gamers beware: modded Minecraft has new malware

the gears to ascension · 7 Jun 2023 13:49 UTC
14 points
5 comments · 1 min read · LW link
(github.com)

“Membranes” is better terminology than “boundaries” alone

28 May 2023 22:16 UTC
30 points
12 comments · 3 min read · LW link

“A Note on the Compatibility of Different Robust Program Equilibria of the Prisoner’s Dilemma”

the gears to ascension · 27 Apr 2023 7:34 UTC
18 points
5 comments · 1 min read · LW link
(arxiv.org)

[Question] Did the fonts change?

the gears to ascension · 21 Apr 2023 0:40 UTC
2 points
1 comment · 1 min read · LW link

“warning about ai doom” is also “announcing capabilities progress to noobs”

the gears to ascension · 8 Apr 2023 23:42 UTC
23 points
5 comments · 3 min read · LW link

“a dialogue with myself concerning eliezer yudkowsky” (not author)

the gears to ascension · 2 Apr 2023 20:12 UTC
13 points
18 comments · 3 min read · LW link

A bunch of videos for intuition building (2x speed, skip ones that bore you)

the gears to ascension · 12 Mar 2023 0:51 UTC
72 points
5 comments · 4 min read · LW link

To MIRI-style folk, you can’t simulate the universe from the beginning

the gears to ascension · 1 Mar 2023 21:38 UTC
2 points
19 comments · 2 min read · LW link

How to Read Papers Efficiently: Fast-then-Slow Three pass method

25 Feb 2023 2:56 UTC
36 points
4 comments · 4 min read · LW link
(ccr.sigcomm.org)

Hunch seeds: Info bio

the gears to ascension · 17 Feb 2023 21:25 UTC
12 points
0 comments · 9 min read · LW link

[Question] If I encounter a capabilities paper that kinda spooks me, what should I do with it?

the gears to ascension · 3 Feb 2023 21:37 UTC
28 points
8 comments · 1 min read · LW link

Hinton: “mortal” efficient analog hardware may be learned-in-place, uncopyable

the gears to ascension · 1 Feb 2023 22:19 UTC
12 points
3 comments · 1 min read · LW link

Call for submissions: “(In)human Values and Artificial Agency”, ALIFE 2023

the gears to ascension · 30 Jan 2023 17:37 UTC
29 points
4 comments · 1 min read · LW link
(humanvaluesandartificialagency.com)

Stop Talking to Each Other and Start Buying Things: Three Decades of Survival in the Desert of Social Media

the gears to ascension · 8 Jan 2023 4:45 UTC
1 point
14 comments · 1 min read · LW link
(catvalente.substack.com)