Eliezer is only answering the question of what the algorithm is like from the inside;
Do we have a good reason to think an algorithm would feel like anything from the inside?
it doesn’t offer a complete alternative model, only shows why a particular model doesn’t make sense;
Which particular model?
and so we are left with the problem of understanding what it is to make a decision from an outside perspective, i.e. how to talk about how someone makes a decision, and what a decision is, from outside the subjective uncertainty of being the agent before the decision is made.
I can’t see why you shouldn’t be able to model subjective uncertainty objectively.