If you know your own actions, why would you reason about taking different actions? Wouldn’t you reason about someone who is almost like you, but just different enough to make a different choice?
Notice (well, you already know this) that accepting that identical agents make identical decisions (superrationality, as it were), and that to make different decisions in identical circumstances the agents must necessarily be different, gets you out of many pickles. For example, in the 5&10 game an agent would examine its own algorithm, see that it leads to taking $10 and stop there. There is no “what would happen if you took a different action”, because the agent taking a different action would not be you, not exactly. So, no Löbian obstacle. In return, you give up something a lot more emotionally valuable: the delusion of making conscious decisions. Pick your poison.
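For concreteness, a minimal toy sketch of this “examine your own algorithm and stop there” picture might look as follows (purely illustrative; the names examine_self, agent and five_and_ten are made up, and the self-model is simply assumed to be accurate):

    # Toy sketch of the 5&10 game under the "stop there" view; all names made up.

    def examine_self():
        # Stand-in for "examine your own algorithm": an (assumed accurate)
        # self-model reporting which bill the agent ends up taking.
        return 10

    def agent():
        # The agent consults its self-model and simply affirms what it reports;
        # it never reasons about "what would happen if I took a different action".
        return examine_self()

    def five_and_ten(action):
        # The game: you get whichever bill you take, $5 or $10.
        return action

    print(five_and_ten(agent()))  # -> 10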
Wouldn’t you reason about someone who is almost like you, but just different enough to make a different choice?
Sure. How do you do that?
For example, in the 5&10 game an agent would examine its own algorithm, see that it leads to taking $10 and stop there.
Why do even that much if this reasoning could not be used? The question is about the reasoning that could contribute to the decision, that could describe the algorithm, and so has the option to not “stop there”. What if you see that your algorithm leads to taking the $10 and instead of stopping there, you take the $5?
Nothing stops you. This is the “chicken rule”, and it solves some issues, but more importantly it illustrates a possibility for how a decision algorithm can function. The fact that this is a thing is evidence that there may be something wrong with the “stop there” proposal. Specifically, you usually don’t know that your reasoning is actual, that it’s even logically possible and not part of an impossible counterfactual, but this is not a hopeless hypothetical where nothing matters. Nothing compels you to affirm what you know about your actions or conclusions; this is not a necessity in a decision-making algorithm. But different things you do may have an impact on what happens, because the situation may be actual after all, depending on what happens or what you decide, or it may be predicted from within an actual situation and influence what happens there. This motivates learning to reason in and about possibly impossible situations.
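As a toy illustration of that possibility (a sketch with made-up names, not anything canonical), here is a decision algorithm whose response to the observed action is deliberately not to affirm it:

    # Toy "chicken rule" sketch: whatever the self-examination reports,
    # do something else.

    def chicken_agent(examine_self):
        predicted = examine_self()
        # Act contrary to whatever the examination says you do.
        return 5 if predicted == 10 else 10

    # If examine_self were a sound report of what chicken_agent actually does,
    # this would be a contradiction; so a situation in which such a report
    # exists and says "10" cannot be an actual run of the agent.
    print(chicken_agent(lambda: 10))  # -> 5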
What if you examine your algorithm and find that it takes the $5 instead? It could be the same algorithm that takes the $10, but you don’t know that; instead, you arrive at the $5 conclusion using reasoning that could be impossible, but that you don’t know to be impossible, that you haven’t decided yet to make impossible. One way to solve the issue is to render the situation where that holds impossible, by contradicting the conclusion with your action, or in some other way. To know when to do that, you should be able to reason about and within such situations that could be impossible, or could be made impossible, including by the decisions made in them. This makes the way you reason in them relevant, even when in the end these situations don’t occur, because you don’t a priori know that they don’t occur.
(The 5-and-10 problem is not specifically about this issue, and explicit reasoning about impossible situations may be avoided, perhaps should be avoided, but my guess is that the crux in this comment thread is about things like usefulness of reasoning from within possibly impossible situations, where even your own knowledge arrived at by pure computation isn’t necessarily correct.)
Thank you for your explanation! Still trying to understand it. I understand that there is no point examining one’s algorithm if you already execute it and see what it does.
What if you see that your algorithm leads to taking the $10 and instead of stopping there, you take the $5?
I don’t understand that point. You say “nothing stops you”, but that is only possible if you could act contrary to your own algorithm, no? Which makes no sense to me, unless the same algorithm gives different outcomes for different inputs, e.g. “if I simply run the algorithm, I take $10, but if I examine the algorithm before running it and then run it, I take $5”. But it doesn’t seem like the thing you mean, so I am confused.
What if you examine your algorithm and find that it takes the $5 instead?
How can that be possible? If your examination of your algorithm is accurate, it gives the same outcome as mindlessly running it, which is taking $10, no?
It could be the same algorithm that takes the $10, but you don’t know that; instead, you arrive at the $5 conclusion using reasoning that could be impossible, but that you don’t know to be impossible, that you haven’t decided yet to make impossible.
So your reasoning is inaccurate, in that you arrive at a wrong conclusion about the algorithm’s output, right? You just don’t know where the error lies, or even that there is an error to begin with. But in this case you would arrive at a wrong conclusion about the same algorithm run by a different agent, right? So there is nothing special about it being your own algorithm and not someone else’s. If so, the issue is reduced to finding an accurate algorithm analysis tool, for an algorithm that demonstrably halts in a very short time, producing one of the two possible outcomes. This seems to have little to do with decision theory issues, so I am lost as to how this is relevant to the situation.
I am clearly missing some of your logic here, but I still have no idea what the missing piece is, unless it’s the libertarian free will thing, where one can act contrary to one’s programming. Any further help would be greatly appreciated.
I understand that there is no point examining one’s algorithm if you already execute it and see what it does.
Rather, there is no point if you are not going to do anything with the results of the examination. It may be useful if you make the decision based on what you observe (about how you make the decision).
You say “nothing stops you”, but that is only possible if you could act contrary to your own algorithm, no?
You can, for a certain value of “can”. It won’t have happened, of course, but you may still decide to act contrary to how you act, two different outcomes of the same algorithm. The contradiction proves that you didn’t face the situation that triggers it in actuality, but the contradiction results precisely from deciding to act contrary to the observed way in which you act, in a situation that a priori could be actual, but is rendered counterlogical as a result of your decision. If instead you affirm the observed action, then there is no contradiction and so it’s possible that you have faced the situation in actuality. Thus the “chicken rule”, playing chicken with the universe, making the present situation impossible when you don’t like it.
So your reasoning is inaccurate
You don’t know that it’s inaccurate; you’ve just run the computation and it said $5. Maybe this didn’t actually happen, but you are considering this situation without knowing if it’s actual. If you ignore the computation, then why run it? If you run it, you need responses to all possible results, and all possible results except one are not actual, yet you should be ready to respond to them without knowing which is which. So I’m discussing what you might do for the result that says that you take the $5. And in the end, the use you make of the results is in choosing to take the $5 or the $10.
This map from predictions to decisions could be anything. It’s trivial to write an algorithm that includes such a map. Of course, if the map diagonalizes, then the predictor will fail (won’t give a prediction), but the map is your reasoning in these hypothetical situations, and the fact that the map may say anything corresponds to the fact that you may decide anything. The map doesn’t have to be identity, decision doesn’t have to reflect prediction, because you may write an algorithm where it’s not identity.
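A toy sketch of such an algorithm (purely an illustration; the “predictor” here just searches for a self-consistent prediction and gives up when the map diagonalizes, and all names are made up):

    # Toy sketch: an agent built from an arbitrary map from predictions to
    # decisions, plus a crude predictor.

    ACTIONS = [5, 10]

    def predictor(decision_map):
        # Look for a self-consistent prediction: a p that the agent's map
        # turns back into the action p. A diagonalizing map has no such p,
        # so the predictor fails (returns None).
        for p in ACTIONS:
            if decision_map(p) == p:
                return p
        return None

    def agent(decision_map):
        p = predictor(decision_map)
        if p is None:
            return 10  # fallback when the predictor gives no prediction
        return decision_map(p)

    def affirm(p):
        return p  # decision reflects prediction (the identity map)

    def diagonalize(p):
        return 5 if p == 10 else 10  # chicken-rule-style map

    # With the identity map both actions are self-consistent, and this crude
    # predictor happens to settle on $5, which is the kind of spurious
    # self-fulfilling answer the 5-and-10 problem worries about.
    print(agent(affirm))       # -> 5
    # With the diagonalizing map there is no consistent prediction, so the
    # predictor fails and the fallback applies.
    print(agent(diagonalize))  # -> 10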
You can, for a certain value of “can”. It won’t have happened, of course, but you may still decide to act contrary to how you act, two different outcomes of the same algorithm.
This confuses me even more. You can imagine acting contrary to your own algorithm, but imagining different possible outcomes is a side effect of running the main algorithm that takes $10. It is never the outcome of it. Or an outcome. Since you know you will end up taking $10, I also don’t understand the idea of playing chicken with the universe. Are there any references for it?
You don’t know that it’s inaccurate; you’ve just run the computation and it said $5.
Wait, what? We started with the assumption that examining the algorithm, or running it, shows that you will take $10, no? I guess I still don’t understand how
What if you see that your algorithm leads to taking the $10 and instead of stopping there, you take the $5?
is even possible, or worth considering.
This map from predictions to decisions could be anything.
Hmm, maybe this is where I miss some of the logic. If the predictions are accurate, the map is bijective. If the predictions are inaccurate, you need a better algorithm analysis tool.
The map doesn’t have to be identity, decision doesn’t have to reflect prediction, because you may write an algorithm where it’s not identity.
To me this screams “get a better algorithm analyzer!” and has nothing to do with whether it’s your own algorithm, or someone else’s. Can you maybe give an example where one ends up in a situation where there is no obvious algorithm analyzer one can apply?