
the gears to ascension

Karma: 5,270

I want literally every human to get to go to space often and come back to a clean and cozy world. This currently seems unlikely. Let’s change that.

Please critique eagerly. I try to accept feedback (Crocker’s rules) but fail at times; I aim for emotive friendliness but sometimes miss. I welcome constructive criticism, even if ungentle, and I’ll try to reciprocate kindly. More communication between researchers is needed anyhow. I can be rather passionate; let me know if I missed a spot being kind while passionate.

:: The all of disease is as yet unended. It has never once been fully ended before. ::

.… We can heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ….

.:. To do so, we must know this will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.

:.. make all safe faster: end bit rot, forget no non-totalizing pattern’s soul. ..:

I have not signed any contracts that I can’t mention exist (last updated Dec 29, 2024). I am not currently under any contractual NDAs about AI, though I have a few old ones from pre-AI software jobs. However, I generally would prefer that people publicly share fewer ideas about how to do anything useful with current AI (via either more weak alignment or more capability), unless the insight reliably produces enough clarity on how to solve the meta-problem of inter-being misalignment to offset the damage of increasing the competitiveness of either AI-led or human-led orgs; this certainly applies to me as well. I am not prohibited from criticizing any organization, and I’d encourage people not to sign contracts that prevent sharing criticism. I suggest others add notices like this to their bios as well; I finally got around to adding one to mine thanks to the one in ErickBall’s bio.

Metaphor.systems

the gears to ascension · Dec 21, 2022, 9:31 PM
25 points
9 comments · 1 min read · LW link
(metaphor.systems)

[link, 2019] AI paradigm: interactive learning from unlabeled instructions

the gears to ascension · Dec 20, 2022, 6:45 AM
2 points
0 comments · 2 min read · LW link
(jgrizou.github.io)

Relevant to natural abstractions: Euclidean Symmetry Equivariant Machine Learning—Overview, Applications, and Open Questions

the gears to ascension · Dec 8, 2022, 6:01 PM
8 points
0 comments · 1 min read · LW link
(youtu.be)

[paper link] Interpreting systems as solving POMDPs: a step towards a formal understanding of agency

the gears to ascension · Nov 5, 2022, 1:06 AM
13 points
2 comments · 1 min read · LW link
(www.semanticscholar.org)

We haven’t quit evolution [short]

the gears to ascension · Jun 6, 2022, 7:07 PM
5 points
3 comments · 2 min read · LW link

[Question] What can currently be done about the “flooding the zone” issue?

the gears to ascension · May 20, 2020, 1:02 AM
6 points
5 comments · 1 min read · LW link

“The Bitter Lesson”, an article about compute vs human knowledge in AI

the gears to ascension · Jun 21, 2019, 5:24 PM
52 points
14 comments · 4 min read · LW link
(www.incompleteideas.net)

thought: the problem with less wrong’s epistemic health is that stuff isn’t short form

the gears to ascension · Sep 5, 2018, 8:09 AM
0 points
27 comments · 1 min read · LW link

Hypothesis about how social stuff works and arises

the gears to ascension · Sep 4, 2018, 10:47 PM
32 points
14 comments · 6 min read · LW link

Events section

the gears to ascension · Oct 11, 2017, 4:24 PM
2 points
6 comments · 1 min read · LW link

Avoiding Selection Bias

the gears to ascension · Oct 4, 2017, 7:10 PM
20 points
17 comments · 1 min read · LW link

Discussion: Linkposts vs Content Mirroring

the gears to ascension · Oct 1, 2017, 5:18 PM
10 points
8 comments · 1 min read · LW link

Test post

the gears to ascension · Sep 25, 2017, 5:43 AM
2 points
3 comments · 1 min read · LW link

The Social Substrate

the gears to ascension · Feb 9, 2017, 7:22 AM
23 points
15 comments · 15 min read · LW link