
EJT

Karma: 755

I’m a Postdoctoral Research Fellow at Oxford University’s Global Priorities Institute.

Previously, I was a Philosophy Fellow at the Center for AI Safety.

So far, my work has mostly been about the moral importance of future generations. Going forward, it will mostly be about AI.

You can email me at elliott.thornley@philosophy.ox.ac.uk.

Towards shutdownable agents via stochastic choice

8 Jul 2024 10:14 UTC
59 points
11 comments · 23 min read · LW link
(arxiv.org)

The Shutdown Problem: Incomplete Preferences as a Solution

EJT · 23 Feb 2024 16:01 UTC
52 points
27 comments · 42 min read · LW link

The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists

EJT · 23 Oct 2023 21:00 UTC
79 points
22 comments · 1 min read · LW link
(philpapers.org)

The price is right

EJT · 16 Oct 2023 16:34 UTC
42 points
3 comments · 1 min read · LW link
(openairopensea.substack.com)

[Question] What are some examples of AIs instantiating the ‘nearest unblocked strategy problem’?

EJT · 4 Oct 2023 11:05 UTC
6 points
4 comments · 1 min read · LW link

EJT’s Shortform

EJT · 26 Sep 2023 15:19 UTC
4 points
16 comments · 1 min read · LW link

There are no coherence theorems

20 Feb 2023 21:25 UTC
145 points
124 comments · 19 min read · LW link