
habryka

Karma: 41,922

Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com.

(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)

2025 Prediction Thread

habryka · Dec 30, 2024, 1:50 AM
77 points
21 comments · 1 min read · LW link

Open Thread Winter 2024/2025

habryka · Dec 25, 2024, 9:02 PM
23 points
59 comments · 1 min read · LW link

The Deep Lore of LightHaven, with Oliver Habryka (TBC episode 228)

Dec 24, 2024, 10:45 PM
45 points
4 comments · 91 min read · LW link
(thebayesianconspiracy.substack.com)

Announcing the Q1 2025 Long-Term Future Fund grant round

Dec 20, 2024, 2:20 AM
33 points
2 comments · 2 min read · LW link
(forum.effectivealtruism.org)

Sorry for the downtime, looks like we got DDosd

habryka · Dec 2, 2024, 4:14 AM
111 points
13 comments · 1 min read · LW link

(The) Lightcone is nothing without its people: LW + Lighthaven’s big fundraiser

habryka · Nov 30, 2024, 2:55 AM
609 points
268 comments · 42 min read · LW link

OpenAI Email Archives (from Musk v. Altman and OpenAI blog)

habryka · Nov 16, 2024, 6:38 AM
530 points
80 comments · 51 min read · LW link

Using Dangerous AI, But Safely?

habryka · Nov 16, 2024, 4:29 AM
17 points
2 comments · 43 min read · LW link

Open Thread Fall 2024

habryka · Oct 5, 2024, 10:28 PM
44 points
193 comments · 1 min read · LW link

If-Then Commitments for AI Risk Reduction [by Holden Karnofsky]

habryka · Sep 13, 2024, 7:38 PM
28 points
0 comments · 20 min read · LW link
(carnegieendowment.org)

Open Thread Summer 2024

habryka · Jun 11, 2024, 8:57 PM
22 points
99 comments · 1 min read · LW link

“AI Safety for Fleshy Humans” an AI Safety explainer by Nicky Case

habryka · May 3, 2024, 6:10 PM
90 points
11 comments · 4 min read · LW link
(aisafety.dance)

Goal oriented cognition in “a single forward pass”

Apr 22, 2024, 5:03 AM
20 points
15 comments · 26 min read · LW link

Express interest in an “FHI of the West”

habryka · Apr 18, 2024, 3:32 AM
268 points
41 comments · 3 min read · LW link

Structured Transparency: a framework for addressing use/mis-use trade-offs when sharing information

habryka · Apr 11, 2024, 6:35 PM
25 points
0 comments · 2 min read · LW link
(arxiv.org)

LessWrong’s (first) album: I Have Been A Good Bing

Apr 1, 2024, 7:33 AM
567 points
179 comments · 11 min read · LW link

How useful is “AI Control” as a framing on AI X-Risk?

Mar 14, 2024, 6:06 PM
70 points
4 comments · 34 min read · LW link

Open Thread Spring 2024

habryka · Mar 11, 2024, 7:17 PM
22 points
160 comments · 1 min read · LW link

[Question] Is a random box of gas predictable after 20 seconds?

Jan 24, 2024, 11:00 PM
37 points
35 comments · 1 min read · LW link

[Question] Will quantum randomness affect the 2028 election?

Jan 24, 2024, 10:54 PM
66 points
52 comments · 1 min read · LW link