I understand that there is no point examining one’s algorithm if you already execute it and see what it does.
Rather, there is no point if you are not going to do anything with the results of the examination. It may be useful if you make the decision based on what you observe (about how you make the decision).
You say “nothing stops you”, but that is only possible if you could act contrary to your own algorithm, no?
You can, for a certain value of “can”. It won’t have happened, of course, but you may still decide to act contrary to how you act, two different outcomes of the same algorithm. The contradiction proves that you didn’t face the situation that triggers it in actuality, but the contradiction results precisely from deciding to act contrary to the observed way in which you act, in a situation that a priori could be actual, but is rendered counterlogical as a result of your decision. If instead you affirm the observed action, then there is no contradiction and so it’s possible that you have faced the situation in actuality. Thus the “chicken rule”, playing chicken with the universe, making the present situation impossible when you don’t like it.
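To make this concrete, here is a minimal Python sketch (purely illustrative, with made-up names like `policy`; it is not anyone’s actual decision procedure): the result of examining your algorithm is compressed into a single hypothetical observation, and the decision is a function of that observation. An observation can only be actual if the decision made in response to it agrees with it; contradicting the observation you dislike is what renders that situation counterlogical.

```python
# Toy rendering of the "chicken rule": the policy is defined on every
# hypothetical observation of what it does, and it contradicts the
# observations it dislikes, rendering them counterlogical.

ACTIONS = ["$5", "$10"]

def policy(observed_action):
    """Decision as a function of the hypothetical observation
    'my algorithm takes observed_action'."""
    if observed_action == "$5":
        return "$10"  # act contrary to the disliked observation (chicken rule)
    return "$10"      # affirm the observation you like

# Only observations that the resulting decision confirms can be actual;
# the rest are made impossible by the decision itself.
actual_candidates = [a for a in ACTIONS if policy(a) == a]
print(actual_candidates)  # ['$10'] -- the situation where you take the $5 never happens
```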
So your reasoning is inaccurate
You don’t know that it’s inaccurate, you’ve just run the computation and it said $5. Maybe this didn’t actually happen, but you are considering this situation without knowing if it’s actual. If you ignore the computation, then why run it? If you run it, you need responses to all possible results, and all possible results except one are not actual, yet you should be ready to respond to them without knowing which is which. So I’m discussing what you might do for the result that says that you take the $5. And in the end, the use you make of the results is by choosing to take the $5 or the $10.
This map from predictions to decisions could be anything. It’s trivial to write an algorithm that includes such a map. Of course, if the map diagonalizes, then the predictor will fail (won’t give a prediction), but the map is your reasoning in these hypothetical situations, and the fact that the map may say anything corresponds to the fact that you may decide anything. The map doesn’t have to be identity, decision doesn’t have to reflect prediction, because you may write an algorithm where it’s not identity.
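As a sketch of what I mean by the map (again just illustrative Python under the same toy assumptions): a prediction can only be correct if it is a fixed point of the map, i.e. the decision made in response to it is the prediction itself. The identity map has such fixed points; a fully diagonalizing map has none, which is the sense in which the predictor fails to give a prediction.

```python
ACTIONS = ["$5", "$10"]

def identity_map(prediction):
    return prediction  # the decision simply reflects the prediction

def diagonal_map(prediction):
    return "$10" if prediction == "$5" else "$5"  # always do the opposite

def predict(decision_map):
    """A prediction is only coherent if it is a fixed point of the map:
    the decision made in response to it is that very prediction."""
    fixed_points = [a for a in ACTIONS if decision_map(a) == a]
    return fixed_points if fixed_points else None  # None: the predictor fails

print(predict(identity_map))  # ['$5', '$10'] -- every prediction confirms itself
print(predict(diagonal_map))  # None -- diagonalization leaves no coherent prediction
```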
You can, for a certain value of “can”. It won’t have happened, of course, but you may still decide to act contrary to how you act, two different outcomes of the same algorithm.
This confuses me even more. You can imagine acting contrary to your own algorithm, but imagining different possible outcomes is a side effect of running the main algorithm that takes the $10. It is never the outcome of it, or even an outcome. Since you know you will end up taking the $10, I also don’t understand the idea of playing chicken with the universe. Are there any references for it?
You don’t know that it’s inaccurate, you’ve just run the computation and it said $5.
Wait, what? We started with the assumption that examining the algorithm, or running it, shows that you will take $10, no? I guess I still don’t understand how
What if you see that your algorithm leads to taking the $10 and instead of stopping there, you take the $5?
is even possible, or worth considering.
This map from predictions to decisions could be anything.
Hmm, maybe this is where I miss some of the logic. If the predictions are accurate, the map is bijective. If the predictions are inaccurate, you need a better algorithm analysis tool.
The map doesn’t have to be identity, decision doesn’t have to reflect prediction, because you may write an algorithm where it’s not identity.
To me this screams “get a better algorithm analyzer!” and has nothing to do with whether it’s your own algorithm, or someone else’s. Can you maybe give an example where one ends up in a situation where there is no obvious algorithm analyzer one can apply?