It’s a great post, just doesn’t quite go far enough...

I agree. I think Jessica does a good job of incidentally capturing why it doesn’t, but to reiterate:

Eliezer is only answering the question of what the algorithm is like from the inside;
it doesn’t offer a complete alternative model, only shows why a particular model doesn’t make sense;
and so we are left with the problem of how to understand what it is like to make a decision from an outside perspective, i.e. how do I talk about how someone makes a decision, and what a decision is, from outside the subjective uncertainty of being the agent before the decision is made.
Finally, I don’t think it totally rules out the possibility of talking about possible alternatives, only rules out talking about them via a particular method, and thus maybe there is some other way to have an outside view on degrees of freedom in decision making after a decision has already been made.
how do I talk about how someone makes a decision, and what a decision is, from outside the subjective uncertainty of being the agent before the decision is made.
I am confused… Are you asking how Omega would describe someone’s decision-making process? That would be like watching an open-source program execute. For example, if you know that the optimization algorithm is steepest descent, and you know the landscape it is run on, you can see every step it makes, including picking one of several possible paths.
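To make the “watching an open-source program execute” picture concrete, here is a minimal sketch (my own toy example, not anything from the thread) of a discretized steepest-descent run on a known landscape: an outside observer can read off every step, including which of the candidate moves was picked.

```python
def landscape(x, y):
    # The "known landscape": a simple surface with two shallow valleys.
    return (x - 1) ** 2 * (x + 1) ** 2 + y ** 2

def steepest_descent(start, step=0.1, iters=20):
    # Discretized stand-in for steepest descent: at each step, consider a
    # handful of candidate moves and take the one that lowers the landscape
    # value the most. Every "decision" is recorded in the trace.
    x, y = start
    trace = []
    for _ in range(iters):
        candidates = [(x + dx, y + dy)
                      for dx in (-step, 0, step)
                      for dy in (-step, 0, step)
                      if (dx, dy) != (0, 0)]
        best = min(candidates, key=lambda p: landscape(*p))
        trace.append({"from": (x, y), "options": candidates, "picked": best})
        x, y = best
    return trace

# An outside observer can inspect each choice point after the fact:
for record in steepest_descent((0.5, 1.0))[:3]:
    print(record["from"], "->", record["picked"])
```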
Essentially yes, but with the caveat that I want to find a model in which to frame that description that doesn’t require constant appeal to subjective experience to explain what a decision is, doesn’t assume we can know what the program will do before it has done it (no hypercomputation), and doesn’t depend on constantly modeling Omega’s uncertainty. Maybe that’s too much to ask, but it’s annoying to constantly have to frame things in terms of what was known at a particular time for the notion of a decision or choice to make sense, so ideally we find a framework for talking about these things that remains sensible while abstracting that detail away.
I must admit I can’t make any sense of your objections. There aren’t any deep philosophical issues with understanding decision algorithms from an outside perspective. That’s the normal case! For instance, A*
Eliezer is only answering the question of what the algorithm is like from the inside;
Do we have a good reason to think an algorithm would feel like anything from the inside?
it doesn’t offer a complete alternative model, only shows why a particular model doesn’t make sense;
Which particular model?
and so we are left with the problem of how to understand what it is like to make a decision from an outside perspective, i.e. how do I talk about how someone makes a decision, and what a decision is, from outside the subjective uncertainty of being the agent before the decision is made.
I can’t see why you shouldn’t be able to model subjective uncertainty objectively.
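For concreteness on the A* example mentioned above, here is a minimal sketch (my own illustration; the grid and heuristic are made up for the example) of how each choice A* makes, i.e. which node it expands next, can simply be logged and inspected from outside the algorithm.

```python
import heapq

def a_star(start, goal, neighbors, h):
    # Standard A*; every "decision" (which node to expand next) is recorded,
    # so the whole decision process is visible from the outside.
    frontier = [(h(start), start)]
    cost = {start: 0}
    expansions = []
    while frontier:
        _, current = heapq.heappop(frontier)
        expansions.append(current)
        if current == goal:
            break
        for nxt in neighbors(current):
            new_cost = cost[current] + 1
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    return expansions

# Toy 4x4 grid and Manhattan-distance heuristic (example inputs of my own).
def grid_neighbors(p):
    x, y = p
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 4 and 0 <= y + dy < 4]

goal = (3, 3)

def manhattan(p):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

print(a_star((0, 0), goal, grid_neighbors, manhattan))
```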