
abergal

Karma: 768

Funding for programs and events on global catastrophic risk, effective altruism, and other topics

14 Aug 2024 23:59 UTC
7 points
0 comments · 2 min read · LW link

Funding for work that builds capacity to address risks from transformative AI

14 Aug 2024 23:52 UTC
14 points
0 comments · 5 min read · LW link

Updates to Open Phil’s career development and transition funding program

4 Dec 2023 18:10 UTC
28 points
0 comments · 2 min read · LW link

The Long-Term Future Fund is looking for a full-time fund chair

5 Oct 2023 22:18 UTC
52 points
0 comments · 7 min read · LW link
(forum.effectivealtruism.org)

Long-Term Future Fund Ask Us Anything (September 2023)

31 Aug 2023 0:28 UTC
33 points
6 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Long-Term Future Fund: April 2023 grant recommendations

2 Aug 2023 7:54 UTC
81 points
3 comments · 50 min read · LW link

[AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships [x-post]

6 Aug 2022 21:48 UTC
14 points
0 comments · 13 min read · LW link
(forum.effectivealtruism.org)

Truthful and honest AI

29 Oct 2021 7:28 UTC
42 points
1 comment · 13 min read · LW link

Interpretability

29 Oct 2021 7:28 UTC
60 points
13 comments · 12 min read · LW link

Techniques for enhancing human feedback

29 Oct 2021 7:27 UTC
22 points
0 comments · 2 min read · LW link

Measuring and forecasting risks

29 Oct 2021 7:27 UTC
20 points
0 comments · 12 min read · LW link

Request for proposals for projects in AI alignment that work with deep learning systems

29 Oct 2021 7:26 UTC
87 points
0 comments · 5 min read · LW link

Provide feedback on Open Philanthropy’s AI alignment RFP

20 Aug 2021 19:52 UTC
56 points
6 comments · 1 min read · LW link

Open Philanthropy is seeking proposals for outreach projects

16 Jul 2021 21:19 UTC
61 points
2 comments · 10 min read · LW link

abergal’s Shortform

abergal · 1 Mar 2021 15:56 UTC
3 points
2 comments · 1 min read · LW link

Takeaways from safety by default interviews

3 Apr 2020 17:20 UTC
28 points
2 comments · 13 min read · LW link
(aiimpacts.org)

AGI in a vulnerable world

26 Mar 2020 0:10 UTC
42 points
21 comments · 1 min read · LW link
(aiimpacts.org)

[Question] Could you save lives in your community by buying oxygen concentrators from Alibaba?

abergal · 16 Mar 2020 8:58 UTC
18 points
12 comments · 1 min read · LW link

Robin Hanson on the futurist focus on AI

abergal · 13 Nov 2019 21:50 UTC
31 points
24 comments · 1 min read · LW link
(aiimpacts.org)

Rohin Shah on reasons for AI optimism

abergal · 31 Oct 2019 12:10 UTC
40 points
58 comments · 1 min read · LW link
(aiimpacts.org)