
Gradient Descent

Last edit: Sep 8, 2021, 6:16 PM by Ruby

stub

Hypothesis: gradient descent prefers general circuits

Quintin Pope, Feb 8, 2022, 9:12 PM
46 points
26 comments · 11 min read · LW link

Why Aligning an LLM is Hard, and How to Make it Easier

RogerDearnaley, Jan 23, 2025, 6:44 AM
30 points
3 comments · 4 min read · LW link

Gradient descent is not just more efficient genetic algorithms

leogao, Sep 8, 2021, 4:23 PM
55 points
14 comments · 1 min read · LW link

A “Bitter Lesson” Approach to Aligning AGI and ASI

RogerDearnaley, Jul 6, 2024, 1:23 AM
60 points
39 comments · 24 min read · LW link

We Need To Know About Continual Learning

michael_mjd, Apr 22, 2023, 5:08 PM
30 points
14 comments · 4 min read · LW link

Conditions for mathematical equivalence of Stochastic Gradient Descent and Natural Selection

Oliver Sourbut, May 9, 2022, 9:38 PM
70 points
19 comments · 8 min read · LW link · 1 review
(www.oliversourbut.net)

The Human’s Role in Mesa Optimization

silentbob, May 9, 2024, 12:07 PM
5 points
0 comments · 2 min read · LW link