Aryeh Englander

Karma: 1,505

I work on applied mathematics and AI at the Johns Hopkins University Applied Physics Laboratory (APL). I am also currently pursuing a PhD in Information Systems at the University of Maryland, Baltimore County (UMBC). My PhD research focuses on decision and risk analysis under extreme uncertainty, with particular attention to potential existential risks from very advanced AI.

Some “meta-cruxes” for AI x-risk debates

Aryeh Englander · 19 May 2024 0:21 UTC
20 points
2 comments · 3 min read · LW link

International Scientific Report on the Safety of Advanced AI: Key Information

Aryeh Englander · 18 May 2024 1:45 UTC
37 points
0 comments · 13 min read · LW link

Understanding rationality vs. ideology debates

Aryeh Englander · 12 May 2024 19:20 UTC
13 points
1 comment · 6 min read · LW link

List your AI X-Risk cruxes!

Aryeh Englander · 28 Apr 2024 18:26 UTC
40 points
7 comments · 2 min read · LW link

[Question] Good taxonomies of all risks (small or large) from AI?

Aryeh Englander · 5 Mar 2024 18:15 UTC
6 points
1 comment · 1 min read · LW link

High level overview on how to go about estimating “p(doom)” or the like

Aryeh Englander · 27 Aug 2023 16:01 UTC
16 points
0 comments · 5 min read · LW link

A Model-based Approach to AI Existential Risk

25 Aug 2023 10:32 UTC
44 points
9 comments · 32 min read · LW link

Flowchart: How might rogue AIs defeat all humans?

Aryeh Englander · 12 Jul 2023 19:23 UTC
12 points
0 comments · 1 min read · LW link

Three camps in AI x-risk discussions: My personal very oversimplified overview

Aryeh Englander · 4 Jul 2023 20:42 UTC
21 points
0 comments · 1 min read · LW link

Request: Put Carl Shulman’s recent podcast into an organized written format

Aryeh Englander · 28 Jun 2023 2:58 UTC
19 points
4 comments · 1 min read · LW link

[Question] Deceptive AI vs. shifting instrumental incentives

Aryeh Englander · 26 Jun 2023 18:09 UTC
7 points
2 comments · 3 min read · LW link